New Laws for AI Developers: California’s Fork in the AI Regulatory Road

October 16, 2025

By: Steven Hess

AI Regulation and The Transparency in Frontier Artificial Intelligence Act

Artificial intelligence (“AI”) products have become an increasingly significant aspect of U.S. innovation, growth, and development.  Generative AI is being used to predict the structure of proteins and other biomolecules in pharmaceutical research,[1] to simulate wargames for the U.S. military,[2] and to drive growth estimated in the hundreds of billions of dollars in sectors from retail to banking.[3]

Given the potentially revolutionary impacts of generative AI on every aspect of the economy, there has been particular interest in reforming laws to harness and regulate the future of AI.  In July, the White House issued “America’s Action Plan” for winning the global “race” to develop and exploit AI technologies.[4]  At the same time, Congress considered, and ultimately rejected, placing a “moratorium” on state proposals to regulate the development of AI.[5]

California AI Legislation

Against this backdrop, California has taken the national lead in regulating generative AI.  Because the state is home to many of the largest AI companies, California’s regulations have an outsized impact on the current legal landscape.  In 2024 alone, the state passed a series of legislative amendments which require certain AI companies to disclose when content has been modified by AI,[6] to provide a “high level summary of the datasets” used in the generative process,[7] and to permit certain lawsuits when AI unlawfully uses a deceased person’s likeness,[8] among other regulations.

Importantly, in 2024 the California legislature also passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (“SSIFAIMA”).  This Act would have been the first in the nation to directly regulate the operations of AI companies.  For instance, the legislation would have required developers of a “covered model” to include a “full shutdown” capability that would cease all training and use of the model.[9]  SSIFAIMA also would have required companies to develop safety plans describing their safety and compliance procedures to avoid “critical harms,” and it authorized the California Attorney General to bring suit against regulated companies for any violation which, among other things, caused death, injury, harm to property, or a “threat” to public safety.

Governor Newsom ultimately vetoed the SSIFAIMA.[10]  The governor criticized the bill for “not tak[ing] into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.”  Rather, the governor warned that “[s]maller, specialized models may emerge as equally or even more dangerous,” and encouraged the legislature to more squarely consider the tradeoffs associated with the development of AI.

The Transparency in Frontier Artificial Intelligence Act

In 2025, the California legislature again considered new regulations directly governing AI companies.  The legislature passed, and Governor Newsom ultimately signed, SB 53, titled the Transparency in Frontier Artificial Intelligence Act (“TFAIA”).[11]  Although there are many differences between the TFAIA and the SSIFAIMA, it is worth highlighting three changes: (1) the types of entities regulated, (2) reduced reporting requirements, and (3) liability for AI developers.

First, TFAIA imposes a variety of requirements on “frontier developers,” which includes any person who trains or initiates the training of a “frontier model.”  The TFAIA’s definition of a “frontier model” is more limited than the predecessor SSIFAIMA’s definition of a “covered model” for two primary reasons.  While both Acts regulate models trained using a certain level of compute,[12]  the TFAIA further restricts the definition only to those models which are “trained on a broad data set,” “designed for generality of output,” and “adaptable to a wide range of distinctive tasks.”[13]  Additionally, many of the regulations imposed by the TFAIA are limited to “large” frontier developers, or those entities which have an annual gross revenue in excess of $500 million per year.[14]

Second, the TFAIA provides companies with significantly greater flexibility in developing security protocols.  The SSIFAIMA required all developers to design a security protocol with numerous reporting requirements.  This included obligations to consider any “unreasonable risk of causing or enabling critical harm,” and to “describe in detail” when the developer would implement “a full shutdown” of the model.[15]

The TFAIA, in contrast, has no shutdown requirement.  It requires only that “large frontier developer[s]” must “write, implement, comply with, and . . . publish” a framework for the frontier model.[16]  That framework must include certain information, such as how the developer will attempt to mitigate “catastrophic risk.”[17]  The TFAIA thus simply requires that large frontier developers develop and commit to their own plan to balance the risks and benefits of development.

Finally, TFAIA substantially reduces the potential civil liability of AI companies.  The Attorney General may impose penalties, up to $1 million per violation, against large frontier developers.[18]  However, the Attorney General may only impose penalties if, for instance, a large frontier developer fails to provide documents as required by TFAIA (most notably, publishing a regulatory framework), or fails to comply with its own AI framework.[19]

In contrast, under the SSIFAIMA, the Attorney General was authorized to bring a civil action whenever a company violated any provision.  Thus, the Attorney General would have been empowered to sue any developer if they believed that the developer failed to take “reasonable care,” in a manner that constituted a “threat to public safety.”[20]

Takeaways

The TFAIA demonstrates that, for now, attempts to regulate generative AI and the companies that develop it are limited in application.  Following Governor Newsom’s veto, the TFAIA is focused on a narrower set of companies, gives more flexibility to developers, and limits the authority of the Attorney General to bring cases for alleged violations.  In this way, the revised legislation reflects Governor Newsom’s concern about overregulating AI developers in an emerging industry.

Importantly, California has not imposed a host of new obligations on every potential use of generative AI products.  The TFAIA, for example, is limited to broad-based models employed for general use.  This fact should bring certainty and clarity to companies developing specialized models for more niche use cases.


[1] Highly Accurate Protein Structure Prediction with AlphaFold, Nature (July 15, 2021), available at https://www.nature.com/articles/s41586-021-03819-2.

[2] The Pentagon Is Upping Its Bet on AI. Here’s What it Means for The Military, Quartz (March 6, 2025), available at https://qz.com/pentagon-scale-ai-us-military-china-1851767958.

[3] The economic potential of generative AI: The next productivity frontier, McKinsey (June 14, 2023), available at https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier.

[4] Winning the Race America’s Action Plan, White House (July 2025).

[5] Senate Nixes State AI Enforcement Moratorium, For Now, Inside Privacy (July 7, 2025), available at https://www.insideprivacy.com/artificial-intelligence/senate-nixes-state-ai-enforcement-moratorium-for-now/.

[6] California AI Transparency Act, Cal. Leg. Info. (Sept. 20, 2024), available at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB942.

[7] Generative Artificial Intelligence: Training Data Transparency, Cal. Leg. Info. (Sept. 30, 2024), available at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB2013.

[8] Use of Likeness: Digital Replica, Cal. Leg. Info. (Sept. 17, 2024), available at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB1836.

[9] Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, Cal. Leg. Info. (Sept. 3, 2024), available at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047.

[10] SB 1047 Veto Message, Office of the Governor (Sept. 29, 2024), available at http://gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.

[11] Artificial Intelligence Models: Large Developers, Cal. Leg. Info. (Sept. 29, 2025), available at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53.

[12] Specifically, the TFAIA requires a “frontier model” to have been trained using a quantity of computing power greater than 10^26 integer or floating-point operations.  The SSIFAIMA treated models using that level of compute as “covered models.”  Beginning in 2027, however, the SSIFAIMA would have defined covered models based on the amount of money spent creating and/or training the model.

[13] TFAIA at § 22757.11(f); see also id. at (h)(i)(1).

[14] TFAIA at § 22757.11(j).

[15] SSIFAIMA at § 22603(a).

[16] TFAIA at § 22757.12(a).

[17] Id.

[18] Id. at § 22757.12(a).

[19] The Attorney General may also impose civil penalties under the TFAIA if the large frontier developer makes certain false or misleading statements or fails to report a “critical safety incident.”  Id.

[20] SSIFAIMA at § 22606(a).

Steven Hess

Steven Hess brings a unique blend of economic insight and legal expertise to his practice, providing him with a keen understanding of not just the legal frameworks that govern markets, but also the economic forces that shape them.
