Calif. Gov. Vetoes AI Safety Bill Aimed at Big Tech Players

Published: 23/11/2024   Category: security




Critics viewed the bill as seeking protections against unrealistic doomsday fears, but most stakeholders agree that oversight is needed in the GenAI space.



California Gov. Gavin Newsom (D) has vetoed SB-1047, a bill that would have imposed what some perceived as overly broad — and unrealistic — restrictions on developers of advanced artificial intelligence (AI) models.
In doing so, Newsom likely disappointed many others — including leading AI researchers, the Center for AI Safety (CAIS), and the Screen Actors Guild — who perceived the bill as establishing much-needed safety and privacy guardrails around AI model development and use.
"While well-intentioned, SB-1047 does not take into account whether an AI system is deployed in high-risk environments, or involves critical decision-making or the use of sensitive data," Newsom wrote. "Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."
Newsom's veto announcement referenced 17 other AI-related bills he signed over the past month governing the use and deployment of generative AI (GenAI) tools in the state, a category that includes chatbots such as ChatGPT, Microsoft Copilot, Google Gemini, and others.
"We have a responsibility to protect Californians from the potentially catastrophic risks of GenAI deployment," he acknowledged. But he made clear that SB-1047 was not the vehicle for those protections. "We will thoughtfully — and swiftly — work toward a solution that is adaptable to this fast-moving technology and harnesses its potential to advance the public good."
There are numerous other proposals at the state level seeking similar control over AI development, amid concerns about other countries overtaking the US on the AI front.
California state senators Scott Wiener, Richard Roth, Susan Rubio, and Henry Stern proposed SB-1047 as a measure that would impose some oversight over companies like OpenAI, Meta, and Google, which are all pouring hundreds of millions of dollars into developing AI technologies.
At the core of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act are stipulations that would have required companies that develop large language models (LLMs) — which can cost more than $100 million to develop — to ensure their technologies enable no "critical harm." The bill defined critical harm as incidents involving the use of AI technologies to create or use chemical, biological, nuclear, and other weapons of mass destruction, or those causing mass casualties, mass damage, death, bodily injury, and other harm.
To enable that, SB-1047 would have required covered entities to comply with specific administrative, technical, and physical controls to prevent unauthorized access to their models, misuse of their models, or unsafe modifications to their models by others. The bill included a particularly controversial clause that would have required the OpenAIs, Googles, and Metas of the world to implement nuclear-like failsafe capabilities to enact a full shutdown of their LLMs in certain circumstances.
The bill won broad bipartisan support and easily passed California's state Assembly and Senate earlier this year, heading to Newsom's desk for signing in August. At the time, Wiener cited the support of leading AI researchers such as Geoffrey Hinton (a former AI researcher at Google) and professor Yoshua Bengio, as well as entities such as CAIS.
Even Elon Musk, whose own xAI company would have been subject to SB-1047, came out in support of the bill in a post on X, saying Newsom should probably pass the bill given the potential existential risks of runaway AI, which he and others have been flagging for many months.
Others, however, perceived the bill as based on unproven doomsday scenarios about the potential for AI to wreak havoc on society.
In an open letter, a coalition that included the Bay Area Council, Chamber of Progress, TechFreedom, and the Silicon Valley Leadership Group called the bill fundamentally flawed.
The group claimed that the harms SB-1047 sought to protect against were completely theoretical, with no basis in fact, and that the latest independent academic research concludes large language models like ChatGPT cannot learn independently or acquire new skills, meaning they pose no existential threat to humanity. The coalition also took issue with the fact that the bill would hold developers of large AI models responsible for what others do with their products.
Arlo Gilbert, CEO of data-privacy firm Osano, is among those who view Newsom's decision to veto the bill as a sound one.
"I support the governor's decision," Gilbert says. "While I'm a great proponent for AI regulation, the proposed SB-1047 is not the right vehicle to get us there."
"As Newsom has identified, there are gaps between policy and technology, and the balance between doing the right thing and supporting innovation is one that merits a cautious approach," he says. From a privacy and security perspective, small startups or smaller companies that would have been exempt from this rule can actually present a greater risk of harm due to their relative access to resources to protect, monitor, and disgorge data from their systems, Gilbert notes.
In an emailed statement, Melissa Ruzzi, director of artificial intelligence at AppOmni, identified SB-1047 as raising issues that need attention now: "We all know AI is very new and there are challenges in writing laws around it. We cannot expect the first laws to be flawless and perfect — this will most likely be an iterative process, but we have to start somewhere."
She acknowledged that some of the biggest players in the AI space, such as Anthropic and Google, have put a big focus on ensuring their technologies do no harm. "But to make sure all players will follow the rules, laws are needed," she said. "This removes the uncertainty and fear from end users about AI being used in an application."
