Chatbots, Copyrights, and the Courts: The Latest in Litigation Developments in the Cases Against OpenAI

January 5, 2026

By: Abbey Block

Litigation Update: OpenAI’s Discovery Woes and Fair Use Defenses in Infringement Lawsuits

Since its formation in 2015, the artificial intelligence company OpenAI – best known for its creation of the widely used chatbot ChatGPT – has faced its fair share of legal disputes. Two of the most notorious lawsuits, one filed by the New York Times and the other by a class of prominent fiction authors, are moving full steam ahead, illustrating the complex interplay of legal rights, litigation tools, and technological innovation. This blog post examines the most recent developments in those lawsuits and considers what they mean for the parties and for the future of artificial intelligence more broadly.

Judge Orders OpenAI to Disclose Chat Logs to the New York Times

By way of background, in 2023, the New York Times filed a lawsuit against Microsoft and OpenAI (hereinafter collectively referred to as “OpenAI” or the “defendants”), accusing them of copyright infringement and misappropriation. Specifically, the news organization alleges that the defendants’ “generative artificial intelligence (“GenAI”) tools rely on large language models that were built by copying and using millions of the [plaintiff’s] copyrighted” works, including news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more. Through this use of the publisher’s content, the plaintiff contends, the defendants are getting a “free-ride” on the publisher’s “massive investment in journalism by using it to build substitutive products without permission or payment.”

The latest dispute to emerge from the litigation revolves around discovery – the exchange of information between the parties to a lawsuit. Civil discovery is generally broad, allowing parties to request the disclosure of documents, records, and information relevant to the claims and defenses in the case and proportional to its needs. Pursuant to these principles, and as part of the discovery process, the news outlet requested that OpenAI turn over more than 20 million ChatGPT logs for review (ironically, a review of this massive quantity of data and records will likely require the assistance of some form of artificial intelligence).

The defendants pushed back, arguing that disclosure of these materials was unwarranted and inappropriate given that (1) the majority of the chat logs had nothing to do with the allegations of copyright infringement at issue in the lawsuit, and (2) the disclosure would result in an invasion of users’ privacy. “To be clear,” OpenAI argued, “anyone in the world who has used ChatGPT in the past three years must now face the possibility that their personal conversations will be handed over to The Times to sift through at will in a speculative fishing expedition.” The court rejected these arguments, determining that adequate safeguards were already in place to protect users’ private information contained within the data.

Judge Orders OpenAI to Turn Over Internal Communications Concerning the Deletion of Data

In a separate lawsuit filed against OpenAI in September of 2023, dozens of prominent fiction authors and the Authors Guild made similar claims, alleging that OpenAI “copied” their “works wholesale, without permission or consideration” and then used the copyrighted works to train its LLMs “to output human-seeming text responses to users’ prompts and queries.” OpenAI has, unsurprisingly, denied these allegations, arguing in pertinent part that its use of the authors’ content was permissible under principles of “fair use” and that any violation of the authors’ intellectual property rights was neither intentional nor willful.

Discovery has been ongoing in the case, and OpenAI has already been required to turn over a significant quantity of records. Of particular interest to the plaintiffs were the company’s employees’ internal Slack communications, which discussed the company’s intentional deletion from its database of certain “books” that, the plaintiffs allege, were used to train its LLMs. The defendants objected to the disclosure of this information, arguing that the chats were protected from discovery under the attorney-client privilege.

Just days before Thanksgiving, Judge Ona Wang ordered OpenAI to hand over records of internal communications amongst the company’s employees and its attorneys. The majority of the contested messages, the Judge concluded, were not protected by the attorney-client privilege, notwithstanding the fact that some of the messages were created at the direction of the company’s lawyers, who were also copied on the communications.

Judge Wang also allowed for disclosure of the records on another basis – waiver. The order explained that waiver of attorney-client privilege can occur when a party “asserts a claim that in fairness requires examination of protected communications,” such as the defense that a party was acting in “good faith.” Drawing an analogy to a case involving securities fraud, the Court explained that when a party puts its “knowledge of the law and the basis for [its] understanding of what the law required [at] issue,” it may “impliedly waive attorney-client privilege over communications ‘with counsel regarding the legality of [its] schemes.’”

The Court found that OpenAI had waived “privilege over all communications regarding the ‘reasons’ for deletion” of the data “by putting its good faith and state of mind at issue.” The Court reasoned that “there is a fundamental conflict where a party asserts a good faith defense based on advice of counsel but then block[s] inquiry into their state of mind by asserting attorney-client privilege.” Because the defendants’ state of mind was at issue in the case, the Judge ruled that they may not “selectively use attorney-client privilege to restrict” the plaintiffs’ “inquiry into evidence concerning OpenAI’s purported good faith.” Relying on this reasoning, the Court ordered disclosure of “all communications in 2022 related to the reasons” for the deletions at issue.

The judge’s ruling was undoubtedly a huge blow to OpenAI, but it also has meaningful implications for future litigants who claim they lacked the requisite intent to commit the bad acts of which they are accused – a defense asserted in many civil and criminal white-collar cases. Although there are undoubtedly exceptions to the attorney-client privilege (e.g., the crime-fraud exception), rarely do judges stretch its boundaries this far. Judge Wang’s order could open the door to limitless prodding (by plaintiffs and the government) into internal communications between attorney and client any time the defendant’s state of mind is at issue in a case. Simply put, a ruling addressing a discovery issue in OpenAI’s case could have consequential effects for litigants more broadly.

How Will OpenAI’s Defenses Stack Up?

In both cases, OpenAI attempts to defend against the plaintiffs’ allegations by arguing that its technology should be viewed as innovative, rather than violative of the plaintiffs’ intellectual property rights. By way of example, in its motion to dismiss the New York Times’ claims, OpenAI argued that a central question in the case – and in dozens of others around the country – is “whether it is fair use under copyright law to use publicly accessible content to train generative AI models to learn about language, grammar, and syntax, and to understand the facts that constitute humans’ collective knowledge.” OpenAI argues that under the fair use doctrine, the “non-consumptive use of copyrighted material (like large language model training) is protected.” The issue of fair use was similarly raised as a defense in the defendants’ answer to the complaint in the Authors Guild case and goes hand-in-hand with OpenAI’s “good faith” argument.

The “fair use” doctrine, codified in the Copyright Act, “permits the use of copyrighted work ‘for purposes such as criticism, comment, news reporting, teaching . . . scholarship, or research,’” and enables “courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster.”[1] Under the doctrine, the use of the copyrighted material must be “transformative” and “alter the original” with “new expression, meaning, or message.”[2] OpenAI argues that its use of “publicly accessible content” to train generative AI models should fall under the fair use exception given that it is part of a longstanding tradition of using copyrighted content “as part of a technological process” for the creation of “new, different, and innovative products.”

Although the fair use doctrine is premised upon principles of creativity and innovation, it is not without limits. An individual’s “fair use” of copyrighted work must “not excessively damage the market for the original by providing the public with a substitute for that original work.”[3] Given these principles, OpenAI will likely argue that its use of the authors’ and the Times’ original writings is not intended to replace those works, but rather to train its technology to produce new content. To this end, when queried about a New York Times article or a particular chapter of a specific book, the chatbot does not produce verbatim excerpts from the plaintiffs’ works; rather, it responds with concise summaries, analysis, commentary, and suggestions, and generally attributes its analysis to the original sources on which it relied.

Given the transformative nature of the chatbot’s outputs, OpenAI has a strong argument that its use of the copyrighted materials falls under the fair use doctrine. The chatbot’s functionality is akin to that of an academic researcher reporting on preexisting literature and data in a thesis or article. While the works cited were not originally created by the researcher, the synthesis, analysis, and presentation of that information constitute a “transformative” use of the original data. Similarly, when ChatGPT provides a summary of a New York Times article, or a chapter of a beloved author’s book, it is synthesizing and analyzing the content – not merely regurgitating it verbatim. That can hardly be described as a wholesale attempt to “replace” the original work. If summarization alone amounted to replacement, services such as SparkNotes or Wikipedia would be out of business.

Conclusion

It seems unlikely that litigation against artificial intelligence companies – like OpenAI – will end any time soon. Indeed, as recently as December 23, 2025, the New York Times filed a lawsuit against “Perplexity” (another generative AI platform), similarly alleging that “Perplexity has unlawfully copied, distributed, and displayed millions of Times stories, videos, podcasts, images and other works to power its product and tools.” Although that case is in its earliest stages, it seems likely that Perplexity will similarly argue that its reliance on the copyrighted works constitutes “fair use” under the Copyright Act.

Thus, the success (or lack thereof) of OpenAI’s “fair use” defense will undoubtedly influence the way in which cases against other generative AI platforms are litigated. Perhaps more importantly, if OpenAI’s fair use defense is unsuccessful, artificial intelligence companies may need to re-think the ways in which they train their large language models – illustrating the influence that litigation may have on innovation and the future development of technology for years to come.

 

[1] Green v. U.S. Dep’t of Justice, 111 F.4th 81, 87 (D.C. Cir. 2024) (citing 17 U.S.C. § 107).

[2] Cariou v. Prince, 714 F.3d 694, 706 (2d Cir. 2013) (quoting Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 579 (1994)).

[3] Authors Guild, Inc. v. HathiTrust, 755 F.3d 87, 95 (2d Cir. 2014).

Abbey Block

Abbey Block found her path in law as a journalism major, coupling her passion for advocacy through writing with her litigation experience to create persuasive, effective arguments.

Prior to joining Ifrah Law, Abbey served as a judicial law clerk in Delaware’s Kent County Superior Court, where she was exposed to both trial and appellate court litigation. Her work included analyzing case law, statutes, pleadings, depositions and hearing transcripts to draft bench memoranda and provide recommendations to the judge.
