OpenAI CEO Sam Altman has urged ChatGPT users to understand the privacy risks before trusting it with emotional support. He explained that AI chats carry no legal confidentiality of the kind that protects conversations with doctors, lawyers, or therapists.

No doctor‑patient privilege in AI chats
Speaking on a podcast hosted by comedian Theo Von, Altman said that people share deeply personal details with ChatGPT when seeking advice on relationships, mental health, or career choices. He contrasted this with traditional therapy, where doctor-patient privilege shields conversations from compelled disclosure in court. With ChatGPT, however, OpenAI could be legally required to hand over chat transcripts if a court orders it.
Ongoing legal battles over ChatGPT data
OpenAI is currently appealing a court order in its dispute with The New York Times that would force the company to retain chat logs of hundreds of millions of users. Altman described the order as an overreach that threatens user privacy. If upheld, it could compel AI companies to provide private conversations for legal discovery or law enforcement investigations.
Implications for sensitive use cases
Users who rely on ChatGPT for mental health support, legal counsel, or medical guidance face uncertainty. Without a clear policy or legal framework on AI confidentiality, they risk having their private conversations exposed in lawsuits or investigations. Altman called for new rules to grant AI chats the same privacy protections as traditional professional services.
Rising privacy concerns in digital services
The debate over AI privacy echoes concerns raised after Roe v. Wade was overturned, when many people moved period‑tracking data to encrypted platforms to avoid legal scrutiny. The episode shows how digital records can endanger personal freedoms when legal protections lag behind technology.

What users should consider
Until clear privacy protections are in place, users may want to avoid sharing sensitive personal issues with ChatGPT. Those who need genuinely confidential support should turn to licensed therapists, who are bound by ethical and legal rules of professional confidentiality. As AI privacy policies mature, users can look for providers that commit to keeping personal conversations protected from outside demands.
Altman's point is that it is wise to understand the legal landscape before confiding in AI. In the coming months, new rules may emerge that define how companies must protect user privacy in the age of artificial intelligence.