The architect of California’s SB 1047, the most contentious AI safety bill of 2024, has introduced a new AI bill that could shake up Silicon Valley.
On Friday, California State Senator Scott Wiener unveiled a new legislative proposal that aims to safeguard employees at prominent AI research labs, granting them the freedom to express concerns if they believe their company’s AI systems pose a significant threat to society. The proposed bill, SB 53, also seeks to establish a public cloud computing cluster, dubbed CalCompute, to provide researchers and startups with the necessary computational resources to develop AI that benefits the public.
Senator Wiener’s previous bill, SB 1047, sought to mitigate the risk of large AI models causing catastrophic events, such as loss of life or cyberattacks resulting in more than $500 million in damages. Governor Gavin Newsom ultimately vetoed the bill in September 2024.
The debate over SB 1047 quickly turned heated. Some Silicon Valley leaders argued that the bill would undermine America’s competitive edge in the global AI landscape, claiming it was driven by unrealistic fears of AI systems triggering science-fiction doomsday scenarios. In response, Senator Wiener alleged that certain venture capitalists had waged a “propaganda campaign” against his bill, pointing to Y Combinator’s claim that SB 1047 would send startup founders to jail, a claim experts deemed misleading.
In essence, SB 53 distills the least contentious aspects of SB 1047 – including whistleblower protections and the establishment of a CalCompute cluster – into a new AI bill.
Notably, Wiener is not backing away from addressing existential AI risks in SB 53. The proposed bill specifically safeguards whistleblowers who believe their employers are developing AI systems that pose a significant threat. The bill defines this threat as a “foreseeable or material risk that a developer’s development, storage, or deployment of a foundation model will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage to rights in money or property.”
SB 53 prohibits developers of cutting-edge AI models – a group likely to include OpenAI, Anthropic, and xAI, among others – from retaliating against employees who disclose sensitive information to California’s Attorney General, federal authorities, or other employees. Under the bill, these developers would also be required to report back to whistleblowers on the internal processes that prompted their concerns.
Regarding CalCompute, SB 53 would establish a task force to develop a public cloud computing cluster. This task force would comprise representatives from the University of California, as well as other public and private researchers. It would provide recommendations for building CalCompute, determining its size, and identifying which users and organizations should have access to it.
SB 53 is still early in the legislative process; it must be reviewed and passed by the state Legislature before it can reach Governor Newsom’s desk. State lawmakers will undoubtedly be watching Silicon Valley’s response to the bill.
However, 2025 may prove to be a more challenging year for passing AI safety bills compared to 2024. Although California enacted 18 AI-related bills in 2024, it appears that the AI doom movement has lost momentum.
Vice President J.D. Vance signaled at the Paris AI Action Summit that America’s priority is AI innovation, not AI safety. While the CalCompute cluster proposed in SB 53 could be seen as advancing AI progress, it remains uncertain how legislative efforts addressing existential AI risks will fare in 2025.