
New doubts are rising about ChatGPT's privacy protections after Sam Altman, the CEO of OpenAI, cautioned users that chats with the AI model are neither legally privileged nor confidential. Even deleted conversations are stored on OpenAI's servers for up to 30 days, which raises serious data-security concerns. As ChatGPT is used more often for sensitive tasks, such as seeking advice and journaling, legal analysts are questioning just how exposed users really are. Comparisons with open-source alternatives such as Qwen3-Coder have rekindled the debate over how transparent and accountable consumer AI products should be.
Conversations With ChatGPT Are Not Confidential Under the Law
Sam Altman recently cautioned users that ChatGPT should not be treated like a therapist, doctor, or lawyer, because conversations with it are not protected by any form of legal confidentiality. In a July 2025 podcast appearance, Altman explained that information shared with ChatGPT could be subpoenaed and used in court. This differs from protected professional-client relationships, where legal privilege prevents disclosure.
His remarks come as more and more individuals bring deeply personal matters to ChatGPT, ranging from emotional distress to business plans, under the mistaken belief that the exchange is private. In the absence of legal protection, everything said to ChatGPT can be accessed, whether by OpenAI or by third parties, and produced in response to a legal request. Altman acknowledged that AI can be helpful for everyday tasks, but said it should never substitute for a human professional when sensitive or confidential matters are involved. According to legal experts, this creates a grey area in which users can expose themselves without intending to.
Data Retention, AI Risks, and Open-Source Comparisons
Adding to the concern, OpenAI retains deleted or temporary chats for up to 30 days, and potentially longer for legal compliance. This undermines assumptions that users have control over their data. Critics argue that this kind of “soft deletion” could allow government access or legal action using what users thought was private. It also leaves room for internal review by AI trainers.
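To make the retention window concrete, here is a minimal sketch of how a "soft deletion" policy of this kind might work in principle. Everything in it, including the ChatRecord class, the field names, and the purge logic, is an illustrative assumption for explanation only, not OpenAI's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical retention window matching the reported "up to 30 days".
RETENTION_WINDOW = timedelta(days=30)

@dataclass
class ChatRecord:
    """Illustrative chat record; schema is assumed, not OpenAI's."""
    chat_id: str
    content: str
    deleted_at: Optional[datetime] = None  # set on "delete"; row is kept

def soft_delete(record: ChatRecord) -> None:
    # The user sees the chat disappear, but the record is only flagged,
    # not destroyed, so it remains readable server-side.
    record.deleted_at = datetime.now(timezone.utc)

def purge_expired(records: list[ChatRecord]) -> list[ChatRecord]:
    # Physically remove only records whose window has fully elapsed;
    # a legal hold could skip this purge entirely.
    now = datetime.now(timezone.utc)
    return [
        r for r in records
        if r.deleted_at is None or now - r.deleted_at < RETENTION_WINDOW
    ]

if __name__ == "__main__":
    chats = [ChatRecord("c1", "private journal entry")]
    soft_delete(chats[0])
    # Immediately after deletion, the content still exists server-side.
    print(f"{len(purge_expired(chats))} record(s) still retained")
```

The point the sketch illustrates is that "delete" flips a flag rather than erasing data, which is why a legal request served within the retention window could still reach content a user believed was gone.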
The post by Gina Acosta highlighted symbolic imagery of a trapped user behind bars, a visual metaphor for how AI systems can capture more than just input text. In contrast, open-source AI like Alibaba’s Qwen3-Coder, which reportedly outperforms ChatGPT on coding tasks, offers transparency in how data is handled. While it’s unclear whether open-source tools are safer overall, they at least offer visibility into how models are trained and deployed.
The debate centers not just on AI’s utility but on who controls the data and how it’s used. Users are being urged to think twice before sharing personal or sensitive information with ChatGPT, especially in the absence of legal reform.
Users Urged to Be Cautious as AI Use Expands
ChatGPT’s growing role in everyday life, from brainstorming to emotional support, makes privacy an urgent issue. With conversations legally admissible and retained beyond deletion, users may be exposing themselves without realizing it. Sam Altman’s warning is a wake-up call: AI is not a substitute for confidential professional services. Until stronger privacy frameworks are in place, users should treat every message as public, not private. This growing awareness could shift user behavior and spark broader demand for regulation, especially as open-source alternatives challenge Big AI on transparency and control.