
The California Legislature is back to work for 2024, and artificial intelligence (AI) regulation sits near the top of its agenda. AI regulation has also drawn growing attention at the national level in recent months; we recently published this summary of the October 2023 Biden Executive Order. California, for its part, has prioritized regulation of AI applications.

California, and specifically the San Francisco Bay Area, is leading the AI boom in the United States. In a California State Executive Order on responsible AI policies and regulation issued last year, Governor Newsom noted, “California has established itself as the world leader in GenAI innovation with 35 of the world’s top 50 AI companies and a quarter of all AI patents, conference papers, and companies globally.” Axios also points out that San Francisco has the highest number of new GenAI job postings, and the city boasts 20 of the best-funded AI companies (more than the rest of America combined).

Given the prominence of AI companies in the state and the pace at which the technology is advancing and being adopted, should California bear a heightened responsibility for regulation? Several pieces of AI legislation are set to be introduced in the legislature this year, alongside other regulatory initiatives at the state level.

Below is a look at some of the AI regulatory measures expected to be introduced by California lawmakers.


Algorithmic Discrimination

First, Assemblymember Rebecca Bauer-Kahan, who chairs the Assembly Privacy and Consumer Protection Committee, plans to reintroduce a bill targeting discrimination in AI. The bill would prohibit companies from using AI algorithms that discriminate against people and would require companies developing such algorithms to evaluate them and disclose any potential discriminatory risks.

Safety and Transparency

State Sen. Scott Wiener plans to introduce broader legislation aimed at industry-wide safety and transparency standards. He has said that SB 294 would target the most significant risks to public safety and security, including AI-generated bioweapons, cyberattacks, and misinformation campaigns. The bill is not yet finalized, but it represents an effort to regulate AI in the state more broadly.


Protections for Actors and Artists

In a more targeted approach, Assemblymember Ash Kalra is looking to enact further AI protections for actors and artists, an issue that figured prominently in negotiations during last year’s actors’ strike. The bill would limit studios’ ability to use AI to replicate an actor’s or artist’s work and would address vague contract language that might otherwise permit such practices.


Automated Decision-Making Technology

The California Privacy Protection Agency (CPPA) is also moving to regulate automated decision-making technology (ADMT) in the state. Proposed regulations introduced in November 2023 signal the agency’s intent to significantly regulate automated tools that leverage AI and facial recognition.

The proposed rules would require companies to notify consumers that they use this technology and would allow California residents to opt out of having their personal data used in ADMT. Employers would also be required to notify job applicants when a decision regarding their employment was based on ADMT.

Under its current leadership, California is demonstrating that it is keen to lead the way in regulating AI applications at the state level. The state was among the first to enact state-level privacy and data protection regulations, with other states soon following. Stay tuned for updates as these efforts move through the legislature and state agencies.

Author Louis Lehot
