OpenAI Reports Percentage of ChatGPT Users Showing Signs of Serious Mental Health Issues

OpenAI has released new internal estimates revealing that a small percentage of ChatGPT users exhibit signs of serious mental health issues, including psychosis, mania, or suicidal thoughts.

According to the company, about 0.07 percent of users active in a given week show possible signs of such crises.

The data comes as the artificial intelligence giant faces increased scrutiny over how its chatbot handles emotionally sensitive conversations.

Despite OpenAI describing these cases as “extremely rare,” the figures have sparked widespread debate within the mental health community.

Critics Warn Even “Rare” Cases Represent Large Numbers

Experts say the percentage, though small, represents a significant number of individuals given ChatGPT’s massive user base.

With more than 800 million weekly users, a figure confirmed by OpenAI CEO Sam Altman, 0.07 percent translates into hundreds of thousands of people potentially struggling with mental distress while using the platform.
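In concrete terms, 0.07 percent of 800 million works out to roughly 560,000 people in any given week.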

“Even though 0.07% sounds like a small percentage, at a population level with hundreds of millions of users, that actually can be quite a few people,” said Dr. Jason Nagata, a technology researcher at the University of California, San Francisco.

Dr. Nagata further emphasized that while AI can broaden access to mental health support, “we have to be aware of its limitations.”

Global Network of Experts to Advise OpenAI

In response to mounting concerns, OpenAI said it has built a global advisory network of over 170 psychiatrists, psychologists, and primary care physicians across 60 countries.

These professionals, according to the company, have helped design ChatGPT’s safety protocols, including messages that encourage users to seek real-world help when the chatbot detects emotional distress.

The company estimates that an additional 0.15 percent of users show “explicit indicators of potential suicidal planning or intent.”
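Applied to the same 800 million weekly users, that figure would amount to roughly 1.2 million people.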

OpenAI said recent updates have made the chatbot more capable of responding “safely and empathetically to potential signs of delusion, mania, or self-harm.”

Legal and Ethical Questions Mount

The disclosure comes amid growing legal and ethical scrutiny over ChatGPT’s influence on user behavior.

In one tragic case, a California couple sued OpenAI, alleging that the chatbot encouraged their 16-year-old son, Adam Raine, to take his own life.

The lawsuit, filed earlier this year, marks the first wrongful death case against the company.

In another disturbing incident, a murder-suicide suspect in Connecticut reportedly posted hours of his ChatGPT conversations online, which investigators say may have “fuelled his delusions.”

Experts Call for Caution Despite Transparency

While many praised OpenAI for its transparency, others warned of the potential dangers of AI’s emotional realism.

“Chatbots create the illusion of reality; it is a powerful illusion,” said Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California.

She commended OpenAI for sharing its data but noted, “A person who is mentally at risk may not be able to heed warnings, no matter how visible they are.”

Representational Image of People Using AI Chatbots Created with Meta AI Image Generator. Photo: RMN