OpenAI, the developer of the popular AI chatbot ChatGPT, has revealed that more than one million of its weekly users show signs of suicidal ideation or intent in their conversations with the platform.
In a blog post published this week, the company said approximately 0.15 per cent of its 800 million weekly users engage in conversations containing “explicit indicators of potential suicidal planning or intent,” translating to an estimated 1.2 million individuals.
OpenAI further disclosed that an additional 0.07 per cent of users — roughly 560,000 people — show signs of mental health emergencies, including symptoms of psychosis or mania.
The disclosure comes amid heightened scrutiny of the psychological impact of generative AI tools, following the death of Adam Raine, a California teenager who died by suicide earlier this year.
His parents have filed a lawsuit against OpenAI, alleging that ChatGPT provided him with detailed instructions on how to end his life.
In response, OpenAI said it has implemented several safety enhancements, including expanded access to crisis hotlines, automatic redirection of sensitive conversations to safer models, and on-screen reminders encouraging users to take breaks during extended sessions.
“We are continuously improving how ChatGPT recognizes and responds to users who may be in crisis,” the company stated.
OpenAI also announced that it is working with over 170 mental health professionals to refine the chatbot’s responses and minimize the risk of harmful or inappropriate outputs.
The development has reignited global debates about the ethical responsibilities of AI developers and the role of artificial intelligence in mental health support, particularly when engaging with vulnerable users.