
Adverse Effects of Google Gemini: When AI Adds What You Never Asked For

Post by: Anis Farhan

Photo: Instagram

When AI Crosses the Line

Artificial Intelligence has become a powerful tool for creativity, problem-solving, and self-expression. Generative AI platforms like Google Gemini, ChatGPT with vision, and other image generators are widely used to create artwork, avatars, or even personal portraits. But with great capability comes a major question: what happens when AI generates something you never asked for, and it changes how you see yourself?

This concern recently surfaced when a young woman uploaded her photo to Google Gemini for creative modification. To her shock, the generated picture added a mole to her face — something she had never included, never wanted, and never imagined. What was supposed to be a fun experience turned into an alarming one, leaving her terrified and questioning the accuracy and intent of AI tools.

This small but significant incident reveals the adverse effects of generative AI — not just in terms of factual errors, but also the psychological impact it can have on users, especially young and impressionable minds.


How the Incident Unfolded

According to reports and shared user experiences, the young woman used Google Gemini’s image generation tool to enhance her uploaded picture. Instead of producing a polished version of her original look, the system unexpectedly added a mole to her skin, positioned in a way that looked natural but foreign to her actual appearance.

What should have been a harmless creative tool suddenly turned intrusive. She wondered whether the AI had read something hidden in her photo, was predicting health issues, or was revealing aspects of herself she wasn’t aware of. The lack of transparency about how the mole appeared heightened her anxiety.

This shows how AI “hallucinations” — when a system makes up details not present in the input — are not limited to text but extend to visual content as well. An adult might dismiss it as a glitch. But for a teenager already navigating issues of identity, body image, and confidence, such additions can be deeply unsettling.


Psychological Impact on Young Users

For many young people, appearance and self-identity are sensitive topics. The addition of an unwanted feature in an AI-generated picture can trigger:

  1. Body Image Anxiety – Users may start questioning if they missed something in their real appearance. “Do I actually have this mole?” “Is something wrong with me?” Such doubts create unnecessary insecurity.

  2. Trust Issues with Technology – The user may begin to distrust AI systems, wondering what else might be manipulated or fabricated without consent.

  3. Paranoia About Hidden Meaning – In a world where some health apps claim to detect skin conditions from photos, users might assume the AI is diagnosing something secretly. The young user in this case was terrified the mole could signal a hidden disease or medical condition.

  4. Emotional Stress – Instead of feeling empowered by AI creativity, the experience leaves users anxious, stressed, or even traumatized.

The ripple effect here is clear: what appears to be a “small error” from the machine can have big emotional consequences for humans.


Technical Reasons: Why Did Gemini Add the Mole?

AI image generators like Gemini rely on deep learning models trained on vast datasets of human faces, body types, and artistic images. When asked to generate or enhance an image, the model sometimes introduces elements that are statistically common in its dataset, even if they were never requested.

In this case, adding a mole could be the result of:

  • Bias in Training Data – If the dataset contains many images of faces with moles, freckles, or other skin features, the AI may consider it “normal” to include them.

  • Learned Priors and Guesswork – During generation, the model reconstructs the image from statistical patterns it has learned, filling in details it “expects” to see so that the output looks “realistic,” even when those details were never in the input.

  • Lack of Guardrails – If the system does not have strict filters to prevent unwanted modifications, it can generate additions like scars, marks, or accessories.

While this is technically explainable, for the user, the lack of consent in altering personal appearance makes it deeply problematic.
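To make the mechanism concrete, here is a minimal sketch using the open-source Hugging Face diffusers library. Gemini’s internals are proprietary, so this is not its actual pipeline; the model checkpoint, prompt, and parameter values are assumptions chosen purely for illustration. What it shows is general: image-to-image models partially re-draw the input from a learned prior, and the more freedom they are given, the more likely unrequested details are to appear.

```python
# A minimal sketch, NOT Gemini's pipeline: it illustrates how an
# img2img model re-generates an input photo from a learned prior.
# Any img2img-capable checkpoint works; this model id is an example.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # requires a GPU

init_image = Image.open("portrait.jpg").convert("RGB").resize((512, 512))

# `strength` controls how much of the original image is noised away
# and re-generated. At 0.3 the output stays close to the input; at
# 0.8 the model invents far more, so "hallucinated" skin features
# such as moles or freckles become much more likely.
for strength in (0.3, 0.8):
    result = pipe(
        prompt="a polished portrait photo",
        image=init_image,
        strength=strength,
        guidance_scale=7.5,
    ).images[0]
    result.save(f"enhanced_strength_{strength}.png")
```

Note that nothing in this sketch asks for a mole. If one appears, it comes entirely from the model’s learned prior, which is exactly the consent gap described above.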


Broader Adverse Effects of AI Image Generation

The mole incident is not isolated. It highlights a broader pattern of potential adverse effects from generative AI tools:

  1. Inaccurate Representations
    AI can distort reality by adding or removing features. For personal images, this can lead to confusion and harm, especially if shared publicly.

  2. Deepfake Concerns
    Once AI shows it can add elements without instruction, the fear of manipulated identities and deepfake abuse grows stronger. A harmless mole today, a fabricated scandal tomorrow.

  3. Privacy Violations
    AI tools sometimes infer details not explicitly provided. Even if unintentional, users may feel their privacy is invaded.

  4. Cultural and Emotional Sensitivity
    Features like skin marks, tattoos, or cultural symbols carry deep meaning. Adding them without context risks offending or distressing users.

  5. Health Anxiety
    As in this case, an added mole could be interpreted as a medical sign. This can trigger unnecessary health panic or self-diagnosis.

  6. Loss of Authenticity
    When AI manipulates appearances beyond the user’s intention, trust in digital identity suffers. People may feel they no longer control their own image.


Ethical and Legal Questions

This incident also raises key ethical and legal questions:

  • Consent: Should AI be allowed to add physical traits without explicit instruction?

  • Accountability: Who is responsible if a user experiences distress — the company, the developers, or no one?

  • Transparency: Should platforms clearly indicate when and why alterations are made?

  • Regulation: Is there a need for policies that prevent AI tools from altering human likeness beyond user requests?

Governments worldwide are already grappling with AI regulation. Events like this strengthen the case for clearer guidelines, especially for tools used by young audiences.


Responsibility of Tech Companies

Big tech firms like Google must recognize that AI is not just a product — it directly impacts people’s emotions, identities, and social lives. They need to:

  1. Improve Guardrails – Ensure the system does not add unrequested personal features.

  2. Offer Transparency Notes – Provide clear explanations when an AI output differs from the input, so users know the tool is not detecting anything hidden (a rough sketch of such a check follows this list).

  3. Include Mental Health Safeguards – Especially for youth, companies should add warnings, support links, or educational notes.

  4. Allow Easy Reporting – Users should be able to flag and report disturbing or incorrect outputs.

  5. Design for Consent – AI should ask before making enhancements that alter someone’s identity.
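As a rough illustration of what the transparency check in point 2 could look like, here is a hypothetical Python sketch that diffs the uploaded photo against the generated output and flags localized changes before the result is shown. The function name, thresholds, and file paths are invented for this example; a production safeguard would need image alignment and semantic comparison, since generative pipelines typically alter every pixel slightly.

```python
# A hypothetical "did the model change something you didn't ask for?"
# check. Names and thresholds are illustrative, not any vendor's
# actual safeguard.
import numpy as np
from PIL import Image

def flag_unrequested_edits(original_path: str, generated_path: str,
                           pixel_threshold: int = 40,
                           area_threshold: float = 0.001) -> bool:
    """Return True if the generated image differs from the original
    enough that the user should see a transparency note."""
    orig = np.asarray(Image.open(original_path).convert("L"),
                      dtype=np.int16)
    gen = np.asarray(Image.open(generated_path).convert("L").resize(
        (orig.shape[1], orig.shape[0])), dtype=np.int16)

    # Pixels that moved by more than `pixel_threshold` gray levels.
    changed = np.abs(gen - orig) > pixel_threshold

    # Even a small changed region (e.g., a new mole) is exactly the
    # kind of silent edit that deserves a disclosure to the user.
    return changed.mean() > area_threshold

if flag_unrequested_edits("portrait.jpg", "enhanced.png"):
    print("Notice: the AI output differs from your photo. "
          "Added details are generated, not detected.")
```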

Without these steps, trust in AI systems risks collapsing, no matter how advanced they become.


Real-World Risks If Ignored

If left unchecked, incidents like the “mole case” could snowball into:

  • Mass Distrust of AI – Users may abandon AI platforms if they feel unsafe.

  • Legal Battles – Distressed users might pursue lawsuits for emotional harm or defamation.

  • Misuse by Malicious Actors – Hackers and trolls could exploit AI tools to manipulate identities more convincingly.

  • Mental Health Crisis – Especially for teenagers, distorted self-images can contribute to depression, body dysmorphia, or anxiety.

The warning is clear: companies cannot dismiss these cases as “minor glitches.”


The Human Perspective

Imagine being a young person, already navigating the insecurities of adolescence, and suddenly an advanced AI tool tells you — visually — that you have a mole you never noticed. Even if you rationally know it’s a mistake, the emotional seed of doubt is planted.

Technology should not amplify insecurities. Instead, it should empower creativity and confidence. When AI interferes with something as personal as our faces, it crosses a line that must be guarded carefully.


Conclusion: Lessons from the Mole Incident

The incident where Google Gemini added a mole to a young woman’s picture highlights an essential truth: AI is not neutral. It is shaped by training data, algorithms, and design choices — all of which can unintentionally harm.

For users, this is a reminder to treat AI outputs with caution and not let them define personal reality. For tech companies, it is a wake-up call to prioritize consent, transparency, and mental health safeguards.

Generative AI can be a powerful ally in art, education, and creativity. But unless its risks are taken seriously, even a tiny mole can grow into a giant trust issue.

Disclaimer

This article is based on public concerns and illustrative incidents involving generative AI. The case discussed is meant to highlight potential risks and does not claim medical or factual accuracy about individual users. AI outputs vary and may not reflect reality. Users should exercise caution and seek professional advice where needed.

Sept. 18, 2025 9:31 p.m.
