
OpenAI’s Legal Troubles Mount as New York Times Lawsuit Escalates Alongside SEC Investigation

March 4, 2024

By: Jake Gray

On February 28th, 2024, the Wall Street Journal reported that the Securities and Exchange Commission (“SEC”) is investigating OpenAI’s internal communications following the board’s ouster and reinstatement of OpenAI CEO Sam Altman in November 2023. The SEC’s scrutiny adds to the mounting legal and regulatory challenges the company faces.

As rationale for Altman’s removal, the board cryptically stated that Altman had not been “consistently candid in his communications.” The SEC is now examining whether the company’s investors were misled, either by the fallout from the fiasco or by its underlying cause, namely the board’s claim that Altman was not consistently candid. The Manhattan U.S. Attorney’s Office is conducting a criminal investigation into the same matter as well.[1]

The internal turmoil spawned a new set of potential legal challenges that OpenAI must navigate alongside pending copyright claims in several other lawsuits and impending regulatory frameworks. Chief among those lawsuits is the one brought by the newspaper and journalism titan, the New York Times (“NYT” or “The Times”). The case against the Times is the most consequential generative AI suit to date, as its outcome could shape how, and what, data generative AI companies can use in their models commercially without a license.

Legal Troubles Leading Up to the New York Times Suit

In Authors Guild v. OpenAI Inc., the Authors Guild filed a class-action lawsuit against OpenAI in the U.S. District Court for the Southern District of New York in September 2023 on behalf of authors such as George R.R. Martin, John Grisham, and Jodi Picoult.[2] The Authors Guild is seeking an injunction to block OpenAI from continuing to use the authors’ works to train ChatGPT, as well as unspecified monetary damages and a penalty of up to $150,000 per infringed work.

Two similar lawsuits were filed against Meta and OpenAI on the same bases in July 2023, on behalf of comedian and actress Sarah Silverman and two other authors.[3] Both were filed in the U.S. District Court for the Northern District of California and seek class-action status as well as unspecified monetary damages.

In a partial win for OpenAI, U.S. District Judge Araceli Martinez-Olguin dismissed parts of the suit, granting most of OpenAI’s motion to dismiss many of the writers’ claims for now. In that ruling, Judge Martinez-Olguin rejected arguments that the content generated by ChatGPT infringes the authors’ copyrights and that the company unjustly enriched itself with their work. The central claim that OpenAI directly violated the authors’ copyrights was not dismissed and thus remains to be considered.[4]

New York Times v. OpenAI

In December 2023, the New York Times filed a potentially pivotal lawsuit against Microsoft and OpenAI, the company behind ChatGPT, in federal district court in Manhattan, alleging that OpenAI infringed the NYT’s copyrights by using NYT content to train its artificial intelligence products. Emphasizing the importance of independent journalism to democracy, the NYT cites the fact that its content “was given particular emphasis when building [OpenAI’s] LLMs [large language models, in other words, the models underpinning the AI’s processing of data and production of output]—revealing a preference that recognizes the value of those works,” while at the same time threatening the NYT’s ability to provide those works as a service by competing directly with the NYT.[5]

On January 8th, 2024, OpenAI published a blog post titled “OpenAI and journalism,” in which it defended itself from some of the allegations on the basis of four points.[6] First, OpenAI emphasized its current partnerships with several news organizations, such as the Associated Press and the American Journalism Project, purportedly as evidence of a collaborative, rather than antagonistic, relationship with the journalism industry. Second, OpenAI claimed that training LLMs on publicly accessible Internet materials constitutes fair use, a widely endorsed principle protecting limited use of copyrighted materials without the owner’s permission, and that it leads the AI industry in providing an opt-out process allowing publishers to prevent OpenAI tools from accessing their websites, a process the New York Times itself has used. Third, OpenAI claimed that the “regurgitation” illustrated in the lawsuit is a bug rather than a feature of ChatGPT, one which the company makes continual efforts to squash. Finally, and most notably, OpenAI claimed that the NYT suit is meritless and that the news company is “not telling the full story,” alleging the organization manipulated the model in an effort to produce regurgitation examples as evidence of copyright infringement.[7]

According to the Times’ complaint, in April 2023, NYT raised intellectual property concerns with Microsoft and OpenAI to explore the possibility of commercial terms and technological guardrails “that would allow a mutually beneficial value exchange” between the parties. The negotiations did not produce a resolution.[8]

To summarize the Times’ complaint, the company alleges that ChatGPT’s ability to reproduce segments of Times articles essentially verbatim demonstrates copyright infringement beyond any fair use exception. The news organization argues that this regurgitation undermines its paywall and extracts value from Times journalism without a license or compensation, thereby benefitting Microsoft and OpenAI while harming the Times and the tradition of high-quality journalism alike.

Just two weeks after the partial dismissal in the Silverman case in California, OpenAI filed its motion to dismiss in the Times case in federal court on February 26th, 2024. OpenAI argues primarily that ChatGPT is not in any way “a substitute for a subscription to the New York Times.”

OpenAI’s motion seeks to dismiss four claims:

  • Partial dismissal of the claim of direct copyright infringement to the extent it is based on acts of reproduction that occurred more than three years before the complaint;
  • Full dismissal of the claim of contributory infringement for failure to allege that OpenAI had actual knowledge of the specific acts of direct infringement alleged;
  • Full dismissal of the claim of copyright management information removal; and
  • Full dismissal of the claim of unfair competition by misappropriation on grounds of Copyright Act preemption.

In response to the motion to dismiss, lead counsel for the Times said in a statement that “OpenAI did not dispute in its filing that it ‘copied millions of The Times’s works to build and power its commercial products without [the Times’] permission.’”[9]

The Times’ lawsuit is the most consequential of the AI-copyright suits to date, as it squarely presents the question of whether generative AI infringes copyright when it scrapes and ingests publicly available content from the Internet without a license to use it commercially.

Conclusion

Copyright and intellectual property law continue to be at the fore of legal and regulatory issues around AI innovation. OpenAI emerged as the market-leading generative AI platform in late 2022 and early 2023. Since its meteoric rise, controversies concerning the use of AI in labor, art, science, the humanities, and other characteristically human activities have been abundant, and OpenAI naturally finds itself at the center of them. How and to what extent the company, as well as others in the AI space such as Anthropic, navigate pending and future legal and regulatory issues will be instructive in determining certain limits of future AI use, particularly in the context of copyrighted materials and the extent of permitted “fair use.”

See Abbey Block’s insightful commentary on the evolving AI regulatory landscape, specifically in regard to the U.S. and E.U. approaches, here. Also, see Nicole Kardell’s brief sampling of proposed AI oversight and enforcement here.

[1] https://openai.com/blog/sam-altman-returns-as-ceo-openai-has-a-new-initial-board

[2] https://www.theguardian.com/books/2023/sep/20/authors-lawsuit-openai-george-rr-martin-john-grisham

[3] https://www.theguardian.com/books/2024/feb/14/two-openai-book-lawsuits-partially-dismissed-by-california-court

[4] https://www.theguardian.com/books/2024/feb/14/two-openai-book-lawsuits-partially-dismissed-by-california-court

[5] https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf

[6] https://openai.com/blog/openai-and-journalism

[7] Ibid.

[8] https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf

[9] https://www.nytimes.com/2024/02/27/technology/openai-new-york-times-lawsuit.html

Jake Gray

Jake Gray is a graduate of Columbia University and an established technology researcher, currently working in the betting and futures space as a consultant to a variety of operators. He frequently writes about online gaming and sports betting laws.
