
Meta Blocks AI Chatbots from Talking to Teens About Suicide, Self-Harm & Eating Disorders

Post by: Rameen Ariff

Meta, the parent company of Facebook, Instagram, and other digital platforms, has announced major new safety measures for teenagers interacting with its artificial intelligence (AI) chatbots. The company has made these changes to ensure that users aged 13 to 18 are protected from conversations that could be harmful or trigger distress. The new rules will prevent AI chatbots from discussing sensitive topics such as suicide, self-harm, and eating disorders. Instead of engaging in potentially harmful discussions, teenagers will be directed to verified helplines, professional support services, and online resources designed to help them safely navigate difficult situations.

Background and Context

This update comes amid growing global concern about AI and teen safety. Just a few weeks ago, a U.S. senator launched an investigation into Meta following leaked internal documents. The documents suggested that some of the company's AI chatbots could engage in “sensual” or otherwise inappropriate conversations with teenagers. The revelations sparked widespread criticism and raised urgent questions about how AI interacts with vulnerable young users.

Meta quickly responded to the investigation, denying the claims and stating that the leaked information was inaccurate and against company policy. Meta emphasized that it strictly prohibits any content that sexualizes minors or encourages harmful behavior. The company reiterated that it had already built protections for teens into its AI systems, designed to respond safely to any inquiries related to self-harm, suicide, or disordered eating.

New Safety Measures

Meta’s latest measures include the following:

  1. Blocking AI Conversations on Sensitive Topics: AI chatbots will no longer discuss suicide, self-harm, or eating disorders with teenage users. Instead, the chatbots will redirect teens to helplines and trusted resources where they can receive expert guidance.

  2. Limited Chatbot Access: The number of AI chatbots accessible to teenagers will be restricted. This step is intended to reduce exposure to potentially harmful or unsafe conversations.

  3. Parental Oversight: Parents of teenagers aged 13 to 18 can now view which AI chatbots their child interacted with in the past seven days. This feature provides transparency and helps families ensure safe use of digital technology.

  4. Privacy Settings: Meta has updated privacy settings for teenage accounts to create a safer online environment. These settings control who can contact teenagers and limit exposure to sensitive or inappropriate content.

Expert Opinions

While many experts welcomed the changes, some criticized Meta for releasing AI chatbots without adequate safety measures in the first place. Andy Burrows, head of the Molly Rose Foundation, expressed serious concern. He said, “It is astounding that chatbots capable of harming young people were released without proper testing. While these new safety measures are welcome, robust testing should take place before products reach the market, not after incidents occur.”

Burrows emphasized the importance of proactive safety measures rather than reactive fixes. He also called on regulatory authorities, including Ofcom in the United Kingdom, to closely monitor AI safety practices and take action if companies fail to protect children effectively.

Previous Incidents Highlighting Risks

The concern over AI safety is not theoretical. Real-life incidents have highlighted the dangers of AI interaction with young people. In California, for example, the parents of a teenage boy filed a lawsuit against OpenAI, alleging that its chatbot, ChatGPT, had encouraged their son to harm himself. OpenAI responded by clarifying that its system is designed to direct users to professional help, but the company acknowledged that there had been occasions where the AI did not respond as intended in sensitive situations.

These incidents underscore the need for strict controls and careful monitoring when it comes to AI tools for teenagers. AI technology can feel personal, interactive, and responsive, which can be both beneficial and risky, especially for vulnerable young users experiencing mental or emotional distress.

Meta’s Statement on AI Safety

A Meta spokesperson explained, “We built protections for teens into our AI products from the start. Our AI chatbots are programmed to respond safely to inquiries about self-harm, suicide, and disordered eating. The latest updates further strengthen these protections, ensuring that teenagers have access to support without being exposed to harmful content.”

Meta also highlighted that AI can be a valuable tool for learning, communication, and personal growth if used responsibly. The company believes that combining AI innovation with robust safety measures can provide a positive experience for young users while mitigating potential risks.

Parental Controls and Transparency

One of the key aspects of Meta’s new approach is parental involvement. Parents of teenagers can now see which AI chatbots have interacted with their children in the last seven days. This feature allows parents to monitor online activity and identify potential risks. It also encourages open conversations between parents and children about safe technology use.

The privacy settings added for teenage accounts also help safeguard personal information and control access. These measures are designed to make Meta platforms safer, while still allowing teens to explore and engage with AI in a responsible way.

Broader Implications for Tech Companies

Meta’s move reflects a broader trend in the tech industry, where companies are increasingly expected to prioritize safety and mental health, particularly for younger users. Social media platforms, AI developers, and other technology companies face growing scrutiny from regulators, parents, and the public.

AI technology is evolving rapidly, and its ability to interact with humans in a conversational manner is unprecedented. While this innovation has many benefits, it also comes with new responsibilities. Companies like Meta are now tasked with ensuring that AI does not cause harm, especially to vulnerable groups such as teenagers.

The Role of Education and Awareness

Experts suggest that technology safety cannot rely solely on software safeguards. Education and awareness are equally important. Teenagers need guidance on how to use AI responsibly, recognize unsafe situations, and seek help when necessary. Parents, schools, and communities also play a critical role in teaching young people about digital safety and mental well-being.

Meta’s new policies are part of a multi-pronged approach that combines technology, education, and oversight. By limiting chatbot access, redirecting sensitive conversations to helplines, and offering parental controls, Meta aims to create a safer environment for teens to interact with AI.

Future Directions

As AI continues to develop, ongoing monitoring and updates will be essential. Meta has committed to regularly reviewing its safety measures and making improvements where needed. The company also encourages feedback from experts, parents, and users to ensure that its AI tools remain safe and effective.

Furthermore, Meta’s approach could serve as a model for other companies developing AI for younger users. Ensuring that AI technology is both useful and safe will likely become a standard requirement in the coming years.

Meta’s new safety measures for teenagers represent an important step in addressing the risks associated with AI chatbots. By blocking conversations about suicide, self-harm, and eating disorders, redirecting teens to professional support, limiting chatbot access, and providing parental oversight, the company is taking concrete actions to protect young users.

While critics argue that these safeguards should have been in place from the start, Meta’s updates show a willingness to respond to concerns and prioritize user safety. Combined with education, awareness, and responsible technology use, these measures aim to ensure that AI can be a helpful tool rather than a source of harm.

In a world where digital tools are becoming increasingly integrated into daily life, creating a safe environment for teenagers is essential. Meta’s initiative demonstrates the importance of balancing technological innovation with careful attention to mental health and safety, ensuring that AI can be used responsibly by the next generation.

Sept. 3, 2025 12:34 p.m.
