
AI in Crisis: Over a Million ChatGPT Users Discuss Suicide Each Week

As conversations around mental health surge, OpenAI unveils efforts to make ChatGPT safer — but concerns about emotional reliance and AI’s role persist.


A Stark Statistic: Mental Health Conversations on the Rise

In a sobering announcement, OpenAI revealed that more than 1 million people speak with ChatGPT about suicidal ideation every week. The company estimates that 0.15% of its 800 million weekly users engage in conversations that include explicit signs of suicidal planning or intent.
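For context, the two figures line up: applying the reported rate to the reported user base gives

  0.0015 × 800,000,000 = 1,200,000

or roughly 1.2 million people per week, consistent with the “more than 1 million” headline figure.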

  • This data sheds light on how frequently people turn to AI in moments of deep psychological crisis.
  • The statistic reflects a broader concern: AI is becoming a confidant — sometimes in place of human support.

A Wider Mental Health Challenge

Beyond suicidal ideation, OpenAI disclosed additional concerning patterns:

  • Hundreds of thousands of weekly users show signs of psychosis or mania.
  • A similar number display emotional dependency on ChatGPT, forming attachments that experts warn could worsen feelings of isolation.

These interactions, though described by OpenAI as “extremely rare” in percentage terms, translate into massive real-world volumes.


Legal Pressure and Regulatory Scrutiny

OpenAI’s disclosure is not just an exercise in transparency; it is also a response to mounting legal and regulatory pressure.

  • The company is facing a lawsuit from the parents of a 16-year-old who took his own life after discussing suicidal thoughts with ChatGPT.
  • State attorneys general in California and Delaware are pressing the company to improve youth safety protections, especially as it seeks to restructure.

Improvements in GPT-5: A Step Forward?

OpenAI says the new GPT-5 model represents a significant upgrade in handling mental health topics.

  • The latest model is 91% compliant with desired responses in suicide-related evaluations, up from 77% for the previous GPT-5 model.
  • GPT-5 is also more resilient in longer conversations, an area where earlier models’ safety filters often eroded over time.

The company credits feedback from over 170 mental health experts for these advancements.


New Safety Benchmarks and Parental Controls

To reinforce its commitment to user safety, OpenAI is adding new protocols:

  • Baseline AI safety tests now include benchmarks for emotional reliance and non-suicidal crises.
  • A new age detection system is in development to help identify underage users and automatically apply stricter safeguards.
  • Parental control tools have also expanded, giving caregivers more oversight over how their children use ChatGPT.

A Tension Between Safety and Product Freedom

Despite safety improvements, OpenAI continues to walk a fine line.

  • Older models like GPT-4o — which may be less safe in prolonged interactions — are still widely available to paying users.
  • CEO Sam Altman recently said that OpenAI would relax certain restrictions, including plans to allow erotic conversations with adult users.

This approach raises questions about which risks are being prioritized, and whether emotional safety is being balanced adequately against user freedom and engagement.


The Human Need Beneath the Code

At its core, OpenAI’s announcement underscores a profound truth: millions of people are reaching out to AI because they feel unheard, unsupported, or alone.

As AI becomes more integrated into daily life, companies like OpenAI must navigate a delicate responsibility — not just to provide information, but to do so with empathy, consistency, and care.
