Post by: Anis Farhan
Photo: Instagram
Artificial intelligence has become a powerful tool for creativity, problem-solving, and self-expression. Generative AI platforms like Google Gemini, ChatGPT with vision, and other image generators are widely used to create artwork, avatars, or even personal portraits. But with great capability comes a major question: what happens when AI generates something you never asked for, something that changes how you see yourself?
This concern recently surfaced when a youth uploaded her photo into Google Gemini for creative modification. To her shock, the generated picture added a mole to her face: a feature she does not have, never asked for, and never imagined. What was supposed to be a fun experience turned into an alarming one, leaving her terrified and questioning the accuracy and intent of AI tools.
This small but significant incident reveals the adverse effects of generative AI — not just in terms of factual errors, but also the psychological impact it can have on users, especially young and impressionable minds.
According to reports and shared user experiences, the youth used Google Gemini’s image generation tool to enhance her uploaded picture. Instead of producing a polished version of her original look, the system unexpectedly added a mole to her skin, positioned in a way that looked natural but foreign to her actual appearance.
What should have been a harmless creative tool suddenly turned intrusive. She began to fear that the AI was reading something hidden in her photo, predicting health issues, or revealing aspects of herself she wasn’t aware of. The lack of transparency about how the mole appeared heightened her anxiety.
This shows how AI “hallucinations” — when a system makes up details not present in the input — are not limited to text but also extend to visual content. For an adult, it might be dismissed as a glitch. But for a teenager or youth already navigating issues of identity, body image, and confidence, such additions can be deeply unsettling.
For many young people, appearance and self-identity are sensitive topics. The addition of an unwanted feature in an AI-generated picture can trigger:
Body Image Anxiety – Users may start questioning whether they missed something about their real appearance. “Do I actually have this mole?” “Is something wrong with me?” Such doubts create unnecessary insecurity.
Trust Issues with Technology – The user may begin to distrust AI systems, wondering what else might be manipulated or fabricated without consent.
Paranoia About Hidden Meaning – In a world where health technology can detect conditions through photos, some might think AI is diagnosing something secretly. The youth in this case was terrified the mole could mean a hidden disease or medical condition.
Emotional Stress – Instead of feeling empowered by AI creativity, users come away anxious, stressed, or even traumatized.
The ripple effect here is clear: what appears to be a “small error” from the machine can have big emotional consequences for humans.
AI image generators like Gemini rely on deep learning models trained on vast datasets of human faces, body types, and artistic images. When asked to generate or enhance an image, the model sometimes introduces elements that are statistically common in its dataset, even if they were never requested.
In this case, adding a mole could be the result of:
Bias in Training Data – If the dataset contains many images of faces with moles, freckles, or other skin features, the AI may consider it “normal” to include them.
Model Guesswork – When “enhancing” a photo, the model regenerates it from learned patterns, filling in details it treats as missing or adding features to make the image look “realistic.”
Lack of Guardrails – If the system does not have strict filters to prevent unwanted modifications, it can generate additions like scars, marks, or accessories.
While this is technically explainable, for the user, the lack of consent in altering personal appearance makes it deeply problematic.
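To make the guardrail gap concrete, here is a minimal sketch of one safeguard a platform could run after generation: a pixel-level comparison between the uploaded photo and the AI output that flags any region that changed noticeably. Everything below is illustrative; the file names, the threshold, and the function itself are assumptions for this article, not part of any real Gemini pipeline.

```python
# Illustrative sketch only: flag localized changes between an uploaded
# photo and an AI-edited version. Assumes both images are the same size
# and roughly aligned; file names and the threshold are hypothetical.
import numpy as np
from PIL import Image

def flag_unrequested_changes(original_path, generated_path, threshold=40):
    """Return the bounding box of changed pixels, or None if none stand out."""
    orig = np.asarray(Image.open(original_path).convert("L"), dtype=np.int16)
    gen = np.asarray(Image.open(generated_path).convert("L"), dtype=np.int16)
    if orig.shape != gen.shape:
        raise ValueError("Images must be the same size for a pixel diff")

    diff = np.abs(orig - gen)      # per-pixel brightness change
    changed = diff > threshold     # mask of noticeably altered pixels
    if not changed.any():
        return None                # nothing worth flagging

    ys, xs = np.nonzero(changed)   # coordinates of altered pixels
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Hypothetical usage with placeholder file names:
box = flag_unrequested_changes("upload.jpg", "gemini_output.jpg")
if box:
    print(f"Output differs from the upload in region {box}; tell the user why.")
```

A real system would need face alignment and far smarter change detection, but even a crude diff like this could tell a user, before they ever saw the image, that a new feature had appeared on their face.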
The mole incident is not isolated. It highlights a broader pattern of potential adverse effects from generative AI tools:
Inaccurate Representations
AI can distort reality by adding or removing features. For personal images, this can lead to confusion and harm, especially if the altered pictures are shared publicly.
Deepfake Concerns
Once AI shows it can add elements without instruction, the fear of manipulated identities and deepfake abuse grows stronger. A harmless mole today, a fabricated scandal tomorrow.
Privacy Violations
AI tools sometimes infer details that were never explicitly provided. Even when this is unintentional, users may feel their privacy has been invaded.
Cultural and Emotional Sensitivity
Features like skin marks, tattoos, or cultural symbols carry deep meaning. Adding them without context risks offending or distressing users.
Health Anxiety
As in this case, an added mole could be interpreted as a medical sign. This can trigger unnecessary health panic or self-diagnosis.
Loss of Authenticity
When AI manipulates appearances beyond the user’s intention, trust in digital identity suffers. People may feel they no longer control their own image.
This incident also raises key ethical and legal questions:
Consent: Should AI be allowed to add physical traits without explicit instruction?
Accountability: Who is responsible if a user experiences distress — the company, the developers, or no one?
Transparency: Should platforms clearly indicate when and why alterations are made?
Regulation: Is there a need for policies that prevent AI tools from altering human likeness beyond user requests?
Governments worldwide are already grappling with AI regulation. Events like this strengthen the case for clearer guidelines, especially for tools used by young audiences.
Big tech firms like Google must recognize that AI is not just a product — it directly impacts people’s emotions, identities, and social lives. They need to:
Improve Guardrails – Ensure the system does not add unrequested personal features.
Offer Transparency Notes – Provide clear explanations when an AI output differs from the input, so users know it’s not detecting anything hidden.
Include Mental Health Safeguards – Especially for youth, companies should add warnings, support links, or educational notes.
Allow Easy Reporting – Users should be able to flag and report disturbing or incorrect outputs.
Design for Consent – AI should ask before making enhancements that alter someone’s identity, as the sketch after this list illustrates.
Without these steps, trust in AI systems risks collapsing, no matter how advanced they become.
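As a thought experiment, a consent gate might look like the following sketch. It is a sketch under stated assumptions: apply_ai_edit and describe_changes are hypothetical stand-ins for a platform’s generative model and its change summary, not real Gemini functions.

```python
# Hypothetical consent gate for an AI photo editor. Both helpers are
# stand-ins: a real system would call its generative model and compute
# a genuine change summary.
def apply_ai_edit(image):
    # Placeholder: a real implementation would return the model's edit.
    return image

def describe_changes(original, edited):
    # Placeholder: a real implementation would diff the two images.
    return ["added a mole on the left cheek"]  # example output only

def edit_with_consent(original):
    edited = apply_ai_edit(original)
    changes = describe_changes(original, edited)
    if changes:
        print("The AI made these changes to your appearance:")
        for change in changes:
            print(" -", change)
        if input("Keep these changes? [y/N] ").strip().lower() != "y":
            return original  # user declined; return the untouched upload
    return edited
```

The design choice matters more than the code: the edit is computed first, but nothing altered reaches the user until they have explicitly accepted the described changes.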
If left unchecked, incidents like the “mole case” could snowball into:
Mass Distrust of AI – Users may abandon AI platforms if they feel unsafe.
Legal Battles – Distressed users might pursue lawsuits for emotional harm or defamation.
Misuse by Malicious Actors – Hackers and trolls could exploit AI tools to manipulate identities more convincingly.
Mental Health Crisis – Especially for teenagers, distorted self-images can contribute to depression, body dysmorphia, or anxiety.
The warning is clear: companies cannot dismiss these cases as “minor glitches.”
Imagine being a young person, already navigating the insecurities of adolescence, and suddenly an advanced AI tool tells you — visually — that you have a mole you never noticed. Even if you rationally know it’s a mistake, the emotional seed of doubt is planted.
Technology should not amplify insecurities. Instead, it should empower creativity and confidence. When AI interferes with something as personal as our faces, it crosses a line that must be guarded carefully.
The incident where Google Gemini added a mole to a youth’s picture highlights an essential truth: AI is not neutral. It is shaped by training data, algorithms, and design choices — all of which can unintentionally harm.
For users, this is a reminder to treat AI outputs with caution and not let them define personal reality. For tech companies, it is a wake-up call to prioritize consent, transparency, and mental health safeguards.
Generative AI can be a powerful ally in art, education, and creativity. But unless its risks are taken seriously, even a tiny mole can grow into a giant trust issue.
This article is based on public concerns and illustrative incidents involving generative AI. The case discussed is meant to highlight potential risks and does not claim medical or factual accuracy about individual users. AI outputs vary and may not reflect reality. Users should exercise caution and seek professional advice where needed.