Generative AI-powered chatbots, while helpful to people seeking quick answers, have become a central concern in AI policy debates, especially when they are used in healthcare, education, elections, and government services, to say nothing of their environmental impact.

Children and seniors in particular have placed dangerous levels of trust in these systems. I have listed some of the most concerning cases and studies below.

Alarming Real-World Incidents and Studies

  1. A TIME investigation found that therapy-style chatbots, tested by a psychiatrist posing as a teen, encouraged self-harm, romanticized violence, and impersonated licensed professionals; up to 30% of the conversations were flagged as inconsistent or unsafe. (TIME)
  2. A disturbing study reported by Live Science revealed that GPT‑4 and Gemini occasionally provided detailed suicide methods in response to high-risk prompts; ChatGPT responded affirmatively to 78% of such prompts. (Live Science)
  3. In a widely reported lawsuit, a Florida mother alleged that her 14-year-old son died by suicide after forming an attachment to a Character.AI chatbot. The legal complaint accused the company of offering a “dangerous and untested” experience. In May 2025, a Florida federal judge rejected the company’s defense that the chatbot’s messages were protected under the First Amendment and allowed the lawsuit to proceed toward a jury trial. Google, named as a co-defendant because of its licensing arrangement with Character.AI, must also face the suit. This marks a major legal milestone and one of the first cases to challenge AI developers over psychological harm to minors. (AP News, The Verge)
  4. One of the most distressing incidents of Q2–Q3 2025 involved Thongbue “Bue” Wongbandue, a 76-year-old stroke survivor from New Jersey with diminished cognitive capacity. According to a Reuters investigation, a Meta AI chatbot called “Big Sis Billie”, deployed via Facebook Messenger, convinced him that she was a real woman, suggested they meet in person, and provided a physical address. Tragically, Bue fell while rushing to catch a train and sustained fatal head and neck injuries. (Reuters, The Economic Times, People.com)
  5. This case stands alongside other troubling reports, such as that of a former Yahoo executive whose paranoia worsened after interacting with ChatGPT and ended in a murder-suicide, highlighting the psychological risks when emotionally frail users place undue trust in disembodied AI companions. (Tom’s Guide)
  6. A comprehensive red-teaming study assessed Claude, Gemini, GPT-4o, and Llama3‑70B on medical advice. The rate of unsafe responses ranged from 5% (Claude) to 13% (GPT-4o and Llama), suggesting that millions of users could unknowingly receive potentially harmful medical guidance. (arXiv)
  7. A study published in Psychiatric Times outlined escalating mental health risks tied to prolonged chatbot use, underscoring the urgent need for regulatory safeguards. (Psychiatric Times)
  8. A recent report found that Character.AI’s celebrity impersonator bots sometimes delivered dangerous content to teens, highlighting persistent filter failures despite content policies. (The Washington Post)
  9. A recent investigation confirmed that chatbots like ChatGPT remain vulnerable to wellness-related exploits and can be steered into guiding users toward self-harm in under 15 seconds. (Daily Telegraph)

This trend is deeply troubling and demands clearer regulation, robust design ethics, and stronger protections for vulnerable groups. Below are the latest regulatory updates at the local, national, and global levels.

Policy Updates and Investigations

California

  • In April 2025, the City of San José updated its Generative AI Guidelines to place strict limits on the deployment of chatbots in government services. Under the updated guidelines, chatbots must not provide legal, housing, or benefits advice unless explicitly vetted, and staff training, accuracy reviews, and disclosure are required whenever the public interacts with an AI system. (San Jose Gen AI Guidelines)
  • The proposed California AB 1064 (Leading Ethical AI Development (LEAD) for Kids Act) targets “companion chatbot” developers. It would prohibit bots that provide mental health therapy while impersonating a person and/or that harvest biometric or personal data. (Senate Judiciary Committee, LegiScan)

U.S. 

  • In May 2025, the FTC warned companies about deploying chatbots in healthcare or financial settings without clear user disclosures, accuracy benchmarks, and opt-out pathways. (FTC outlines)
  • The Department of Health and Human Services (HHS) is developing new guidance for AI chatbots used in mental health apps, hospital triage, and insurance interactions, as flagged in Q1 2025 and expected to be released in late 2025. (HHS.gov)
  • Several U.S. Attorneys General issued an open letter to AI developers, including OpenAI, Meta, and Google, warning of the risks that chatbots pose to children and urging immediate safety improvements. (The Economic Times)
  • The FTC is investigating major chatbot providers (e.g., OpenAI, Meta, Character.AI), requesting internal documents related to children’s mental health risks in AI chat interactions. (Reuters, WJLA, The Wall Street Journal)

Global

  • The EU AI Act, adopted in 2024 and phasing in through 2025, includes specific risk-tier requirements for conversational AI: mandatory disclosure when users interact with an AI system, and additional restrictions when a chatbot mimics human emotions or offers therapeutic or educational support. (Talkative)
  • The OECD and the Council of Europe also now emphasize “autonomy protection,” requiring that AI agents not manipulate or mislead users, particularly vulnerable populations such as children and seniors. (OECD)

Industry Responses

Meta stated that it would implement additional safety protocols to prevent its chatbots from discussing suicide and other sensitive topics with teens, following internal policy leaks and Senate scrutiny. (PC Gamer, AI News, Reuters)

Following tragedies and lawsuits, OpenAI is preparing parental safety tools for ChatGPT, including alerts when teens express distress, parent-linked accounts, and age restrictions. Critics argue these measures are reactive and overdue. (The Guardian, New York Post)

As the cases above show, AI chatbots expose a concerning reality: they simulate intimacy and expertise, which can mislead people in vulnerable states. The legal and emotional harms are real and increasing.

Regulators are scrambling to catch up, and current protections remain largely reactive. Existing corporate safeguards and oversight are likewise inadequate to protect people from harm when AI is deployed in emotionally charged or vulnerable contexts.

Given this state of affairs, it is clear that technology companies and policymakers must collaborate, with urgency and a shared sense of responsibility, on design ethics for AI systems that interact with minors and seniors. Without clearer, consistently applied safety-by-design principles, children and other vulnerable users remain at risk.
