
OpenAI has appealed a court order requiring it to retain ChatGPT user data indefinitely in its copyright lawsuit with The New York Times. The company argues the NYT Data Preservation Order conflicts with its privacy commitments and imposes an unreasonable burden on daily operations. The case highlights the growing tension between data privacy and legal discovery in litigation over AI and generated content, and it raises broader questions about intellectual property, user trust, and how AI technologies should be governed going forward.
Privacy Concerns Highlighted in OpenAI’s Appeal
OpenAI has formally opposed a federal court order requiring it to retain ChatGPT output logs indefinitely, citing privacy concerns. The company argues the ruling contradicts its long-standing commitment to protecting user privacy and responsible data handling practices. The preservation order stems from The New York Times’ lawsuit, which alleges OpenAI and Microsoft used its content without permission. The case accuses both companies of training AI systems with copyrighted material, raising major questions about fair use in AI development.
The disputed NYT Data Preservation Order, issued on May 13, requires OpenAI to preserve and segregate all output data that would otherwise be deleted. U.S. District Judge Sidney Stein approved the order after The New York Times requested the retention for the ongoing lawsuit. The judge cited the preserved data’s relevance to the case, pending further instructions as the proceedings develop.
In a post on X (formerly Twitter), OpenAI CEO Sam Altman reaffirmed the company’s stance on protecting user privacy. He stated,
Recently, the NYT asked a court to force us not to delete any user chats. We think this was an inappropriate request that sets a bad precedent. We are appealing the decision. We will fight any demand that compromises our users’ privacy; this is a core principle.
Reinforcing the company’s stance, OpenAI Chief Operating Officer Brad Lightcap, in an official blog post, characterized the court’s directive as disproportionate. He indicated that The New York Times’ demand exceeded reasonable boundaries and reiterated that the appeal reflects OpenAI’s ongoing commitment to user privacy and trust. He stated,
The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users. It abandons long-standing privacy norms and weakens privacy protections. We strongly believe this is an overreach by the New York Times. We’re continuing to appeal this order so we can keep putting your trust and privacy first.
NYT’s Position and Broader Copyright Battle
The legal conflict began in December 2023, when The New York Times filed a lawsuit against OpenAI and Microsoft. The lawsuit claims both companies used millions of Times articles to train large language models like ChatGPT and Bing Chat. It alleges they did so without proper authorization, raising serious concerns about copyright violations in AI training practices.
In April, Judge Stein allowed key parts of the case to proceed, rejecting OpenAI and Microsoft’s motion to dismiss. He found the Times had plausibly alleged that both companies encouraged users to generate outputs containing its copyrighted content. In support of its decision, the court cited several widely circulated examples in which ChatGPT allegedly reproduced material closely resembling New York Times articles.
On June 3, OpenAI filed an application asking the court to revoke the data preservation order, arguing that compliance would compromise its users’ privacy. As of now, The New York Times has not publicly commented on the filing.
Looking Ahead
As OpenAI appeals the order, the case highlights the urgent need for clearer rules balancing AI growth, copyright, and user privacy. The court’s final decision will likely influence data retention practices and redefine how AI companies handle sensitive user information. It may set lasting legal standards that shape the future relationship between AI innovation, intellectual property, and individual digital rights.
The stakes for OpenAI are high: beyond the legal risks, the company must preserve user trust while maintaining the operational flexibility it needs to keep developing its products.