A California Effect for Artificial Intelligence

I just finished writing a 50-page document exploring a few ways that the State of California could regulate AI with the goal of producing a de facto California Effect. You can read the whole thing as a Google doc here, as a PDF here, or as a webpage here,[1] or you can read the summary and a few key takeaways below. I’m also including some thoughts on my theory of impact and on opportunities for future research.

I built on work by Anu Bradford, as well as a recent GovAI paper by Charlotte Siegmann and Markus Anderljung. This project was mentored by Cullen O’Keefe. I did this research through an existential risk summer research fellowship at the University of Chicago — thank you, Zack Rudolph and Isabella Duan, for organizing it!

Abstract

The California Effect occurs when companies adhere to California regulations even outside California’s borders because of a combination of California’s large market, its capacity to successfully regulate, its preference for stringent standards, and the difficulty of dividing the regulatory target or moving beyond California’s jurisdiction. In this paper, I look into three ways in which California could regulate artificial intelligence and ask whether each would produce a de facto California Effect. I find it likely (~80%) that regulating training data through data privacy would produce a California Effect. I find it unlikely (~20%) that regulation based on the number of floating-point operations needed to train a model would produce a California Effect. Finally, I find it likely (~80%) that risk-based regulation like that proposed by the European Union would produce a California Effect.
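To make the second of those three approaches concrete: training compute for large models is commonly estimated with the rule of thumb C ≈ 6·N·D, where N is the model’s parameter count and D is the number of training tokens it sees. The short Python sketch below is purely illustrative (the threshold value and function names are hypothetical, not drawn from my paper or from any actual proposal), but it shows the kind of bright-line test a FLOP-based rule implies.

```python
# Illustrative sketch of a FLOP-threshold test. Uses the standard
# C ~= 6 * N * D approximation for transformer training compute,
# where N = parameter count and D = training tokens.
# HYPOTHETICAL_THRESHOLD_FLOPS is an invented placeholder, not a real cutoff.

HYPOTHETICAL_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training compute: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

def is_covered_model(n_params: float, n_tokens: float) -> bool:
    """Would a training run of this size fall under the hypothetical rule?"""
    return estimated_training_flops(n_params, n_tokens) >= HYPOTHETICAL_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 1.4T tokens
print(f"{estimated_training_flops(70e9, 1.4e12):.2e}")  # ~5.88e+23 FLOPs
print(is_covered_model(70e9, 1.4e12))                   # False under this placeholder
```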

If this seems interesting, please give the full paper a look. There’s a more-detailed 1.5-page executive summary, and then (of course) the document itself.

Key Takeaways

  1. The California Effect is a powerful force multiplier that lets you have federal-level impact for the low(er) price[2] of state-level effort.[3]

  2. There are ways to regulate AI which I argue would produce a California Effect.

  3. State government in general and California’s government specifically are undervalued by EAs. I believe that EAs interested in politics, structural change, regulation, animal welfare, preventing pandemics, etc., could in some cases have bigger and/or more immediate impacts at the state level than at the federal level.

  4. There are still plenty of opportunities for further research.

Theory of Impact

My hope — and ultimate theory of impact — is that this paper will help policymakers make better-informed decisions about future AI regulations, and that it encourages those who believe in regulating artificial intelligence to pay more attention to the State of California. At the very least, I hope that people with a broader reach in the AI governance space than mine will read and even build on this work, becoming more aware of the California Effect and of the disproportionate impact it can have in the race to keep artificial intelligence safe.

Opportunities for further research

Before I list my own thoughts, I will direct readers to the list of further research opportunities that Charlotte Siegmann and Markus Anderljung collected in an announcement for their report on the potential Brussels Effect of the EU AI Act. I want to highlight their fourth and sixth bullet points, which I think describe especially valuable directions (the latter even more so):

  • “Empirical work tracking the extent to which there is likely to be a Brussels Effect. Most of the research on regulatory diffusion focuses on cases where diffusion has already happened. It seems interesting to instead look for leading indicators of regulatory diffusion. For example, you could analyze relevant parliamentary records or conduct interviews, to gain insight into the potential global influence of the EU AI Act, the EU, and legal terms and framings of AI regulation first introduced in the EU discussion leading up to the EU AI Act. [...]

  • “Work on what good AI regulation looks like from a TAI/​AGI perspective seems particularly valuable. Questions include: What systems should be regulated? Should general-purpose systems be a target of regulation? Should regulatory burdens scale with the amount of compute used to train a system? What requirements should be imposed on high-risk systems? Are there AI systems that should be given fiduciary duties?”

Interested readers should also peruse the Centre for Governance of AI’s research agenda, which is far more exhaustive than this list could ever be.

With other people’s suggestions out of the way, I think there’s a dearth of research into the impact state governments can have, in artificial intelligence governance but especially in other cause areas. State and local governments account for a bit less than half of all government spending in the US, yet they can be far more accessible than the federal government, which accounts for the rest. Especially in the context of AI governance, I would love to see more research into which state-level interventions are possible, anywhere from research funding/grants to tax breaks.

Interestingly, the California Privacy Rights Act gives the California Privacy Protection Agency the right to “Issu[e] regulations governing access and opt-out rights with respect to businesses’ use of automated decision-making technology, including profiling and requiring businesses’ response to access requests to include meaningful information about the logic involved in such decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer.” However, the agency’s proposed regulations do not seem to mention AI or automated decision-making. Though the CPPA is no longer accepting comments on its proposed regulations, it could be useful to look into what it would take to get the agency to address AI more fully.

In the same vein, it could be useful to look at past instances in which other states’ regulatory authorities have attempted to regulate online commerce. Were they successful? What would those earlier attempts mean for future attempts to regulate AI? Though I touched upon the California Privacy Protection Agency, it may be that such an agency isn’t the right entity to create and enforce these regulations. Which other agencies, e.g. consumer protection, could effectively regulate AI? This could be worth looking into at the federal level, too.

It could also be a good idea to require registration for training runs, data collection, or even model creation as a whole. Research into prior attempts to require licenses for the creation and use of new technologies (e.g. transportation, research technologies, weapons) could therefore be useful.

  1. ^

    Given that the formatting doesn’t translate perfectly from the Google doc, the webpage is probably best for those of you who like to download webpages and run them through text-to-speech software.

  2. ^

    Granted, California is both the most populous state (~40 million residents) and home to the most regulation-happy legislature, so “state-level effort” is far from zero. It’s still far less than federal-level effort, though.

  3. ^

    This could be a double-edged sword if the California Effect ends up amplifying poorly-crafted or harmful regulation, though. Bad regulation in general, and bad AI regulation in particular, can do far more harm than good, and the potential for regulatory diffusion makes this issue all the more pressing.