California Governor Gavin Newsom Vetoes Landmark Bill on AI Safety Regulations
California Governor Gavin Newsom vetoed a bill establishing safety measures for large AI models.
California Governor Gavin Newsom made waves in the technology sector on Sunday by vetoing a pivotal piece of legislation that would have imposed unprecedented safety measures on large artificial intelligence (AI) models. The decision is a significant setback for efforts to regulate a burgeoning industry that has been evolving swiftly with minimal oversight.
Vetoing Legislative Efforts
The vetoed bill was poised to introduce some of the first regulations on large-scale AI models in the nation, a move that supporters believed could set a precedent for AI safety regulations across the country. Earlier this month, during a speech at the Dreamforce conference, Governor Newsom emphasized the need for California to take the lead in regulating AI amid federal inaction. However, he expressed concerns that the proposal could negatively impact the industry by imposing rigid requirements.
Concerns Over Bill's Provisions
In a statement, Governor Newsom articulated reservations about the bill's approach, stating, "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or the use of sensitive data." He criticized the bill for applying stringent standards to even basic functions of AI systems, arguing that such a blanket approach was not the most effective way to protect the public from potential threats posed by the technology.
Alternative Path Forward
In lieu of the vetoed legislation, Governor Newsom announced a partnership with industry experts, including the renowned AI pioneer Fei-Fei Li, to collaboratively develop guardrails around powerful AI models. This alternative approach seeks to address the potential risks associated with AI without imposing the stringent requirements outlined in the vetoed bill.
The Scope of the Vetoed Legislation
The legislation aimed to mitigate potential risks by requiring companies to test their AI models and publicly disclose their safety protocols. This proactive measure sought to prevent AI models from being misused, for example to manipulate critical infrastructure or to facilitate the development of harmful substances. The bill also included whistleblower protections for workers.
Support for the Bill
Supporters of the legislation, including tech billionaire Elon Musk and the AI company Anthropic, argued that the proposed measures could have introduced much-needed transparency and accountability for large-scale AI models. They underscored the importance of understanding how AI models behave and make decisions, an area where clarity is still lacking.
Financial Implications
Notably, the bill targeted only AI systems that cost more than $100 million to build. No existing AI models have hit that threshold, but experts have indicated that this could change within the next year as investment in the industry escalates. Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company's disregard for AI risks, voiced concerns about the concentration of power in the private companies that control such advanced systems.
The Broader Regulatory Landscape
The discourse surrounding AI regulation extends well beyond California. Observers have noted that the United States lags behind Europe in implementing regulations aimed at mitigating AI-related risks. While the California proposal was not as comprehensive as European regulations, proponents saw it as a crucial first step toward guardrails on a rapidly evolving technology that raises concerns about job displacement, misinformation, privacy invasions, and automation bias.
The Existing Voluntary Framework
In response to growing concerns, numerous leading AI companies voluntarily committed to adhering to safeguards established by the White House. These safeguards entail testing and sharing information about AI models. Supporters of the vetoed bill contended that mandating developers to comply with requirements akin to these voluntary commitments would have enhanced accountability and transparency within the AI sector.
However, the proposed legislation encountered resistance from various quarters, including former U.S. House Speaker Nancy Pelosi, who expressed reservations about its potential impact on California's tech landscape. Concerns were raised that the bill could stifle innovation and deter AI developers from investing in large models or sharing open-source software.