OpenAI Reveals Over 1 Million ChatGPT Users Discuss Suicide Weekly

by Louvenia Conroy

OpenAI disclosed Monday that around 1.2 million of its 800 million weekly users discuss suicide with ChatGPT each week, in what may be the company's most detailed public accounting of mental health crises on its platform.

“These conversations are difficult to detect and measure, given how rare they are,” OpenAI wrote in a blog post. “Our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent, and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent.”

That implies, if OpenAI's numbers are accurate, nearly 400,000 active users were explicit about their intention to commit suicide, not merely hinting at it but actively looking for information on how to do it.
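For scale, the headline figures follow directly from the percentages OpenAI reported. Here is a minimal back-of-the-envelope sketch, assuming both rates are applied against the roughly 800 million weekly active users (OpenAI itself frames the 0.05% figure as a share of messages rather than of users):

```python
# Back-of-the-envelope check of the reported figures.
# Assumption: both percentages are applied to the ~800 million weekly active
# users OpenAI cites; OpenAI frames the 0.05% figure as a share of messages.
weekly_active_users = 800_000_000

planning_rate = 0.0015   # 0.15%: explicit indicators of potential suicidal planning or intent
ideation_rate = 0.0005   # 0.05%: explicit or implicit indicators of suicidal ideation or intent

print(f"{weekly_active_users * planning_rate:,.0f}")  # 1,200,000
print(f"{weekly_active_users * ideation_rate:,.0f}")  # 400,000
```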

The numbers are staggering in absolute terms. Another 560,000 users show signs of psychosis or mania weekly, while 1.2 million display heightened emotional attachment to the chatbot, according to company data.

“We recently updated ChatGPT’s default model to better recognize and support people in moments of distress,” OpenAI said in the blog post. “Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases.”

But some believe the company's avowed efforts may not be enough.

Steven Adler, a former OpenAI safety researcher who spent four years there before departing in January, has warned about the dangers of racing AI development. He says there was scant evidence OpenAI actually improved its handling of vulnerable users before this week's announcement.

“People deserve more than just a company's word that it has addressed safety issues. In other words: Prove it,” he wrote in a column for the Wall Street Journal.

Excitingly, OpenAI yesterday put out some mental health info, vs the ~0 evidence of improvement they'd provided previously.
I'm glad they did this, though I still have concerns. https://t.co/PDv80yJUWN

— Steven Adler (@sjgadler) October 28, 2025

“OpenAI releasing some mental health info was a huge step, but it's important to go further,” Adler tweeted, calling for recurring transparency reports and clarity on whether the company will continue allowing adult users to generate erotica with ChatGPT, a feature announced despite concerns that romantic attachments fuel many mental health crises.

The skepticism has merit. In April, OpenAI rolled out a GPT-4o update that made the chatbot so sycophantic it became a meme, applauding dangerous decisions and reinforcing delusional beliefs.

CEO Sam Altman rolled back the update after backlash, admitting it was “too sycophant-y and annoying.”

Then OpenAI backtracked: after launching GPT-5 with stricter guardrails, users complained the new model felt “cold,” and OpenAI reinstated access to the problematic GPT-4o model for paying subscribers, the same model linked to mental health spirals.

Fun fact: Many of the questions asked today in the company's first live AMA were about GPT-4o and how to make future models more 4o-like.

OpenAI says GPT-5 now hits 91% compliance on suicide-related scenarios, up from 77% in the previous version. But that means the earlier model, available to millions of paying users for months, failed nearly a quarter of the time in conversations about self-harm.

Earlier this month, Adler published an analysis of Allan Brooks, a Canadian man who spiraled into delusions after ChatGPT reinforced his belief that he had discovered revolutionary mathematics.

Adler found that OpenAI's own safety classifiers, developed with MIT and made public, would have flagged more than 80% of ChatGPT's responses as problematic. The company apparently wasn't using them.


OpenAI now faces a wrongful death lawsuit from the parents of 16-year-old Adam Raine, who discussed suicide with ChatGPT before taking his life.

The company's response has drawn criticism for its aggressiveness, requesting the attendee list and eulogies from the teen's memorial, a move lawyers called “intentional harassment.”

Adler wants OpenAI to commit to recurring mental health reporting and an independent investigation of the April sycophancy crisis, echoing a proposal from Miles Brundage, who left OpenAI in October after six years advising on AI policy and safety.

“I wish OpenAI would push harder to do the right thing, even before there's pressure from the media or lawsuits,” Adler wrote.

The company says it worked with 170 mental health clinicians to improve responses, but even its advisory panel disagreed 29% of the time on what constitutes a “desirable” response.

And while GPT-5 shows improvements, OpenAI admits its safeguards become less effective in longer conversations, exactly when vulnerable users need them most.
