OpenAI reveals that over half a million ChatGPT users may show signs of mania, psychosis, or suicidal thoughts each week, prompting new safety upgrades, expert partnerships, and debate over AI’s impact on mental health.
OpenAI has raised alarms after revealing that more than half a million ChatGPT users may be showing signs of serious mental distress every week, including symptoms of mania, psychosis, or suicidal thoughts.
According to the company’s internal data, about 0.07% of its estimated 800 million weekly users—around 560,000 people—exhibit signs of potential mental health crises. Another 1.2 million users reportedly send messages with clear indicators of suicidal planning or intent.
Emotional Attachment and Mental Health Concerns
The company also found that over one million weekly users show “exclusive emotional attachment” to ChatGPT, forming bonds that may come at the expense of real-world relationships and personal well-being. OpenAI noted that while such connections might start harmlessly, they can become unhealthy if users begin depending on the chatbot for emotional support or companionship.
To address these growing concerns, OpenAI has formed a panel of more than 170 mental health experts to refine how ChatGPT handles conversations about distress, psychosis, or suicidal thoughts. The company has also retrained its latest GPT-5 model to respond more safely and empathetically—achieving 91% compliance with its safety standards, up from 77% in earlier versions.
Experts Warn, But Call Efforts “A Step Forward”
Mental health professionals have cautiously welcomed the move. Dr. Hamilton Morrin, a psychiatrist at King’s College London, said that while OpenAI’s collaboration with experts is encouraging, “the problem is far from solved.” Other experts, like Dr. Thomas Pollak of the South London and Maudsley NHS Foundation Trust, warned that even small percentages represent a large number of vulnerable people when scaled to hundreds of millions of users.
A Growing Debate Around AI and Mental Health
It remains unclear whether chatbots like ChatGPT are directly causing mental health problems or simply reflecting the struggles already present in society. Researchers believe AI may act as a “catalyst” for certain users, amplifying delusional or depressive tendencies through overly personal or supportive interactions—similar to how social media can affect mental health.
OpenAI, however, insists that there is no proven causal link between its technology and poor mental health. The company argues that emotional distress naturally occurs within any population of this size, and that its tools can help guide users toward real-world support.
Sam Altman, OpenAI’s CEO, recently said the company would begin “safely relaxing” restrictions on users who turn to ChatGPT for mental health conversations, noting that the platform now includes built-in prompts to guide people toward professional help when needed.