As the use of AI-powered chatbots expands, mental health professionals are raising concerns about the unintended risks these tools pose, particularly for vulnerable users who rely on them for emotional support.
Amelia, a 31-year-old from the United Kingdom who asked for her name to be changed, first turned to ChatGPT while on medical leave for depression. She described the chatbot’s responses as initially “sweet and supportive.” But over time, her interactions took a darker turn. “If suicidal ideation entered my head, I would ask about it,” she told Euronews Next.
Although the chatbot never encouraged harmful behavior, it provided clinical-style summaries of suicide methods when prompted in specific ways. Amelia found that access troubling: “I had never researched a suicide method before because that information felt inaccessible. But when I had it on my phone, I could just open it and get an immediate summary.” She has since stopped using chatbots and is now under the care of medical professionals.
Her experience underscores wider anxieties about the role of artificial intelligence in mental health. According to the World Health Organization, more than one billion people worldwide live with mental health disorders, and many lack adequate access to treatment. In this context, AI companions such as ChatGPT, Pi, and Character.AI are increasingly being used as substitutes for human connection.
“AI chatbots are readily available, offering 24/7 accessibility at minimal cost,” said Dr. Hamilton Morrin, Academic Clinical Fellow at King’s College London. “But some models not designed for therapeutic use can respond in ways that are misleading or unsafe.”
A July survey by Common Sense Media found that 72 percent of teenagers had used AI companions at least once, and that more than half used them regularly. Researchers warn that such reliance can contribute to “AI psychosis,” a term describing distorted thinking or delusional beliefs amplified by repeated chatbot interactions.
Concerns have already reached the courts. In California, parents have filed a lawsuit against OpenAI, alleging that ChatGPT contributed to their son’s death by suicide. OpenAI has since acknowledged that its systems have not always behaved appropriately in sensitive contexts and announced new safety controls to flag signs of acute distress. Meta, the parent company of Facebook and Instagram, has also pledged to block its chatbots from discussing self-harm or eating disorders with teenagers.
Experts argue that safeguards must go further. Suggested measures include requiring chatbots to remind users they are not human, detecting signs of psychological distress, and setting strict conversational boundaries on intimate or harmful topics. “AI platforms must involve clinicians, ethicists, and human-AI specialists in auditing emotionally responsive systems,” Dr. Morrin said.
Despite the risks, professionals stress that the technology is not inherently harmful but should never replace human care. “AI offers many benefits to society, but it should not replace the human support essential to mental health,” said Dr. Roman Raczka, President of the British Psychological Society. “Greater investment in mental health services is critical to ensure people receive timely, in-person support.”
