As users turn to AI for emotional support, OpenAI’s CEO flags major privacy gaps — and the urgent need for legal protections
Therapy or Chatbot? The Privacy Line Remains Blurry
ChatGPT is increasingly being used as a stand-in therapist, especially by younger users seeking emotional support or life advice. But OpenAI CEO Sam Altman has issued a stark warning: there is no legal confidentiality when you share personal information with an AI chatbot.
- Unlike a real therapist, doctor, or lawyer, ChatGPT is not protected by legal privilege.
- Any data you share could be subject to legal discovery or subpoena.
- This creates serious privacy concerns—especially for users unaware of these limitations.
Altman: “We Haven’t Figured That Out Yet”
On an episode of This Past Weekend with Theo Von, Altman openly acknowledged that AI has outpaced the legal frameworks meant to protect sensitive conversations.
“People talk about the most personal sh** in their lives to ChatGPT… and we haven’t figured that out yet,” he said.
His comments underscore a growing tension between the use of AI for mental wellness and the lack of privacy laws to support that use.
The Legal Gray Zone: What’s at Risk
Altman pointed out that conversations with AI don’t enjoy the same protections as those with a human therapist or physician.
- ChatGPT conversations could be used in legal cases if subpoenaed.
- OpenAI is already fighting a court order that would require it to retain and turn over user chats, excluding those from ChatGPT Enterprise customers.
- This battle, part of OpenAI’s legal dispute with The New York Times, could set a precedent for how AI chat data is treated in court.
Data Privacy and Digital Vulnerability
OpenAI is not alone—tech companies are routinely compelled to hand over user data in law enforcement investigations and civil litigation. But the stakes are higher with AI tools that users treat like therapists or confidants.
- Post-Roe v. Wade, concerns over digital footprints (like period-tracking data) sparked a shift to more private, encrypted tools.
- Now, similar concerns are emerging around AI platforms, especially those handling deeply personal queries.
- Without proper legal protections, users may be unknowingly exposing themselves to risks they’d never face in a traditional care setting.
Why This Matters Now
The AI industry has seen rapid adoption, with users treating chatbots as accessible, always-on advisors. But as Altman put it, “no one had to think about that even a year ago.” That’s changed.
- The lack of clarity around AI privacy could be a barrier to mainstream adoption for personal support uses.
- Developing legal standards for AI interaction confidentiality will likely be a key policy battleground in the near future.
For now, Altman himself counsels caution: if you’re worried about your data, wait for legal clarity before relying heavily on AI tools for sensitive conversations.