The viral Google Gemini Nano Banana AI trend, popular for creating quirky portraits, is raising serious digital privacy concerns among experts.
The viral Google Gemini Nano Banana 3D figurine trend may be delighting Instagram feeds with its quirky edits and retro Bollywood-style portraits, but experts are warning that behind the fun lies a serious threat to digital privacy. By encouraging users to upload their most personal images, especially faces, platform-driven AI trends are creating vast pools of sensitive data, raising questions about how such images are stored, processed, and potentially misused.
A User’s “Creepy” Encounter
The dangers were recently highlighted by Instagram user @jhalakbhawani, who shared her discomfort after trying the AI saree portrait trend. She explained that Gemini had generated an image of her in a saree, which at first seemed harmless and even flattering. However, on closer inspection, she noticed that the AI edit included a mole on her left hand, exactly where she has one in real life.
“The original photo I uploaded didn’t show the mole at all. How did Gemini know? It’s very scary and creepy. Please be careful about what you upload,” she warned in her post.
Her revelation quickly sparked debate online. Some dismissed her claim, while others argued it was a wake-up call about the hidden risks of AI image-generation platforms.
Experts Weigh In on the Privacy Risks
Cybersecurity experts have expressed growing concern about how platforms like Gemini handle sensitive uploads. Saikat Datta, CEO of DeepStrat, explained that every time facial images are shared online, issues of identity management arise.
“The platform may retain and process these images for analytics or model improvement. Even anonymised, there is always the risk of misuse. If the system or linked databases are hacked, your personal data could leak into the wrong hands,” he noted, pointing to dangers like identity theft and the creation of fake documents.
Fellow expert Dr. Anil Rachamalla, founder and CEO of the End Now Foundation, emphasised the psychological dimension. “Trends like AI image generation not only put privacy at stake but also distort perceptions of beauty. Once people start seeing themselves through AI’s lens, it can lead to false expectations, misrepresentation, and bias.”
He further warned about misuse in the form of deepfakes and synthetic avatars. “Apps like MyFace showed us how user data can be hijacked without consent. The same could happen here. Deepfakes are already being used for scams, impersonations, and fraud. Detection isn’t universal, making regulation very hard. The safest measure is digital awareness at the user level.”
From 3D Figurines to Retro Portraits
The Nano Banana trend itself has evolved rapidly. What started as 3D figurines quickly expanded into Pinterest-style vintage saree edits and cinematic-inspired portraits, all AI-generated and hyper-shareable. But the more popular these trends become, the more images are funneled into AI systems, raising the risk of data leaks and unethical repurposing.
Google’s Position on Ownership
In its AI usage policies, Google clarifies that it does not claim ownership over AI-generated content. However, it reserves the right to create “the same or similar content for others” and stresses that users alone are responsible for how generated images are used or shared. In other words, while the company offers creative tools, the onus of privacy and legal compliance lies squarely with the users.
Free vs Paid: How Many Images Can Be Generated?
Initially, Google’s free-tier Gemini Nano Banana tool allowed users to create up to 100 images per day, while Pro and Ultra subscribers could make up to 1,000 images daily. Recently, however, the company removed these specific figures from its documentation. Its support pages now simply note: “Gemini Apps limits may change. If capacity changes, limits for users without a Pro or Ultra plan may be restricted before paid users.”