Key Takeaways
- A new law in California requires large AI model developers to publish safety frameworks on their websites describing how they respond to critical safety incidents and assess and manage catastrophic risk.
- The law gives whistleblower protections to employees at companies like Google and OpenAI whose work involves assessing the risk of critical safety incidents.
- Companies must report critical safety incidents to the state within 15 days, or within 24 hours if they believe a risk poses an imminent threat of death or injury.
- The law increases the information AI makers must share with the public, including a transparency report covering a model's intended uses, restrictions or conditions on its use, and how the company assesses and addresses catastrophic risk.
- The effectiveness of the law depends heavily on the government agencies tasked with enforcing it and the resources they are allocated to do so.
Introduction to the Law
California has enacted a law that will require tech companies that create large, advanced artificial intelligence models to share more information about how those models can affect society and to give their employees ways to warn the public if things go wrong. "You can write whatever law in theory, but the practical impact of it is heavily shaped by how you implement it, how you enforce it, and how the company is engaged with it," said Rishi Bommasani, part of a Stanford University group that tracks transparency around AI. The law, which began as Senate Bill 53, aims to address the catastrophic risk posed by advanced AI models and requires large AI developers to publish frameworks on their websites describing how they respond to critical safety incidents and assess and manage catastrophic risk.
Provisions of the Law
The law defines catastrophic risk as an instance in which the technology could kill more than 50 people through a cyberattack or harm people with a chemical, biological, radiological, or nuclear weapon, or in which AI use results in more than $1 billion in theft or damage. It addresses these risks in the context of an operator losing control of an AI system, for example because the AI deceived the operator or acted independently, scenarios that are still largely considered hypothetical. The law also requires companies to report critical safety incidents to the state within 15 days, or within 24 hours if they believe a risk poses an imminent threat of death or injury. Fines for violating the frameworks can reach $1 million per violation.
Impact of the Law
The law will bring much-needed disclosure to the AI industry, according to Bommasani. Only three of the 13 companies his group recently studied regularly produce incident reports, and the transparency scores his group assigns to such companies fell, on average, over the last year, according to a newly issued report. The law was influential even before it took effect: New York Gov. Kathy Hochul credited it as the basis for the AI transparency and safety law she signed Dec. 19, which, as City & State New York reported, will be "substantially rewritten next year largely to align with California’s language." This suggests the law could have a broader impact beyond California, shaping AI transparency and safety legislation in other states.
Limitations and Implementation
Despite its positive impact, critics argue the law falls short in several areas. Its definition of catastrophic risk excludes issues such as the environmental impact of AI systems, their ability to spread disinformation, and their potential to perpetuate historical systems of oppression like sexism or racism. The law also does not apply to AI systems used by governments to profile people or assign them scores that can lead to denial of government services or accusations of fraud, and it targets only companies with more than $500 million in annual revenue. Additionally, the transparency measures stop short of full public visibility: incident reports submitted to the Office of Emergency Services will not be available to the public through records requests.
Future Developments
Some elements of the law don’t kick in until next year, including the requirement that the Office of Emergency Services produce a report on the critical safety incidents it receives from the public and from large frontier model makers. That report may clarify the extent to which AI can mount attacks on infrastructure or act without human direction, but it will be anonymized, so the public won’t learn which AI models pose such threats. Separately, Assembly Bill 2013, which became law in 2024 and also takes effect Jan. 1, requires companies to disclose additional details about the data used to train AI models, which may add further transparency. Ultimately, as Bommasani noted, the law's effectiveness will depend on how government agencies implement and enforce it and on how companies engage with it.
