A viral AI saree trend on Instagram has taken a dark turn after a user discovered a disturbing detail in her AI-generated image. The AI seemingly knew about a mole on her hand that wasn’t visible in the original photo.
Google Gemini’s viral “Nano Banana” saree trend has taken Instagram by storm, with countless users sharing dreamy ’90s Bollywood-style edits featuring chiffon sarees, fabric flowing in the wind, and warm golden-hour backdrops. But while the trend has become a glamorous obsession for many, one user’s “creepy” experience has sparked concern about just how safe these AI-generated edits really are.
A Startling Discovery
An Instagram user named Jhalakbhawani recently shared her unsettling encounter with Gemini. In her post, she explained that she had tried the trend by uploading a photo of herself in a green full-sleeve suit along with the prompt. At first, the result seemed harmless: a flattering AI-generated saree image that she proudly posted to her Instagram.
But on closer inspection, something unusual caught her eye. The AI-generated picture featured a small mole on her left hand, in the exact spot where she has one in real life. Yet the original photo she uploaded showed no visible mole.
This detail left her disturbed. “How did Gemini know that I have a mole on this part of my body?” she asked in her post, calling the experience “scary and creepy.” She went on to warn her followers: “Please be careful. Whatever you upload on social media or AI platforms, make sure you stay safe.”
Her revelation sparked heated debate on Instagram. Some users raised concerns about the depth of data AI tools can access, while others dismissed the incident as coincidence or even accused her of chasing engagement with viral content.
Safety Concerns Around Gemini’s Nano Banana Tool
The controversy has turned the spotlight onto AI image-editing safety. While companies like Google and OpenAI stress that they’ve developed safeguards, experts argue user caution remains the most important factor.
Google’s “Nano Banana” edits, for example, are embedded with an invisible watermark called SynthID as well as metadata tags. According to Google, this watermark ensures the images are clearly identifiable as AI-generated. Although the watermark is invisible to the human eye, special tools can detect it and authenticate an image’s origin.
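For readers curious what those metadata tags look like in practice, here is a minimal Python sketch using the Pillow library (the file name and keyword list are hypothetical). Note that it inspects only the embedded metadata, not the SynthID watermark itself, which requires Google’s own detection tools.

```python
# Minimal sketch: surface AI-provenance hints from an image's embedded
# metadata. This does NOT detect the SynthID watermark itself (that
# requires Google's own detector); it only lists metadata tags that a
# platform or editor may or may not have preserved.
from PIL import Image  # pip install Pillow

# Keywords often associated with AI-image provenance (assumed list)
PROVENANCE_HINTS = ("synthid", "c2pa", "trainedalgorithmicmedia", "gemini")

def metadata_hints(path: str) -> list[str]:
    img = Image.open(path)
    # Collect EXIF values (JPEG/TIFF) plus format-specific info,
    # such as PNG text chunks, then search them for known keywords.
    blobs = [str(v) for v in img.getexif().values()]
    blobs += [f"{k}={v}" for k, v in img.info.items()]
    return [b[:120] for b in blobs
            if any(hint in b.lower() for hint in PROVENANCE_HINTS)]

if __name__ == "__main__":
    hits = metadata_hints("saree_edit.jpg")  # hypothetical file name
    print(hits or "no provenance tags found in metadata")
```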
Can Invisible Watermarks Really Help?
Critics, however, are skeptical. Tools to detect SynthID watermarks aren’t yet broadly available to the public, leaving most users unable to verify whether an image is AI-generated. Moreover, experts warn that watermarks can be manipulated or stripped out altogether.
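The fragility of metadata-based labeling, at least, is easy to demonstrate. In the hedged sketch below (file names are hypothetical), simply re-saving an image with Pillow discards its EXIF tags unless they are explicitly carried over; the SynthID watermark, by contrast, is embedded in the pixels and is designed to survive such edits.

```python
# Demonstrates how fragile metadata tags are: a plain re-save through
# Pillow silently drops EXIF data unless it is explicitly passed along.
# (SynthID lives in the pixels and is designed to survive this step.)
from PIL import Image

img = Image.open("saree_edit.jpg")   # hypothetical input file
img.save("resaved.jpg", quality=95)  # no exif= argument: tags are dropped

print("EXIF tags after re-save:", len(Image.open("resaved.jpg").getexif()))
# Typically prints 0, even if the original carried provenance metadata.
```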
“Watermarking alone will never be enough,” warned UC Berkeley professor Hany Farid. Ben Colman, CEO of Reality Defender, echoed the sentiment, noting that watermark systems often “fail in real-world usage from the very start.” Many in the tech community argue that watermarks must be combined with other defensive technologies to stand a chance against increasingly realistic deepfakes.
For now, while the Nano Banana saree edits continue to enchant Instagram feeds, the conversation around safety is only growing louder, reminding users to be cautious about what they share and to question how much AI might be able to learn about them, even beyond the pixels they upload.