That sounds like really interesting work. Would love to learn more about it.
“but also because a disproportionate amount of cutting-edge AI work (Google, Meta, OpenAI, etc) is happening in California.” Do you have a take on the mechanism by which this leads to CA regulation being more important? I ask because I expect most regulation in the next few years to focus on what AI systems can be used in what jurisdictions, rather than what kinds of systems can be produced. Is the idea that you could start putting in place regulation that applies to systems being produced in CA? Or that CA regulation is particularly likely to affect the norms of frontier AI companies because they’re more likely to be aware of the regulation?
Just as a caveat, this is me speculating and isn’t really what I’ve been looking into (my past few months have been more “would it produce regulatory diffusion if CA did this?”). With that said, the location in which a product is produced doesn’t really affect whether regulating that product produces regulatory diffusion—Anu Bradford’s criteria are market size, regulatory capacity, stringent standards, inelastic targets, and non-divisibility of production. I haven’t seriously looked into it, but I think that, even if all US AI research magically switched to, say, New York, none of those five factors would change for CA (though I do think any CA regulation merely targeting “systems being produced in CA” would be ineffective for a similar reason—with remote work becoming more and more acceptable, and given that, maybe aside from OpenAI, all of these companies have myriad offices outside CA, AI production would be too elastic). In this hypothetical, though, CA still has a huge consumer market (both in terms of individuals and corporations; more than 10% of the 2021 Fortune 500 is based in CA), it still has more regulatory capacity and stricter regulations than any other US state, and I think that certain components of AI production (e.g. massive datasets, the models themselves) are inelastic and non-divisible enough that CA regulation could still produce regulatory diffusion.
As for why the presence of AI innovation in California makes potential California AI regulation more important, I imagine it being closer to your second suggestion, that “CA regulation is particularly likely to affect the norms of frontier AI companies,” though I don’t necessarily think awareness is the right vehicle for that change. After all, my intuition is that any company within an order of magnitude or two of Google or Meta has somebody on staff whose job it is to stay abreast of regulation that affects them. I’m far from certain about it, but if I had to put it in words, I’d say that CA regulation could affect the norms of the field more broadly because of California’s unique position at the center of technology and innovation.
To use American stereotypes as analogies, CA enacting AI regulations would feel to me like West Virginia suddenly enacting landmark coal regulation, or Iowa suddenly doing the same with corn. It seems much bigger than New Jersey regulating coal or Maine regulating corn, and it seems to me that WV wouldn’t regulate coal unless it was especially important to do so. (This is a flawed analogy, though, since coal/corn is bigger for WV/IA than AI is for CA.)
Either way, if California, the state that most likely stands to reap the greatest share of AI profits, home to Berkeley and Stanford and the most AI innovation in the US (maybe in the world? don’t quote me on that), were to regulate AI, it would send an unmistakable signal about just how important they think that regulation is.
Do you think that makes sense?
I suspect that it wouldn’t be that hard to train models at datacenters outside of CA (my guess is this is already done to a decent extent today: only 1/12 of Google’s US datacenters are in CA, according to Wikipedia). Models therefore seem like a pretty elastic regulatory target.
Data as a regulatory target is interesting, in particular if it transfers ownership or power over the data to data subjects in the relevant jurisdiction. That might e.g. make it possible for CA citizens to lodge complaints about potentially risky models being trained on data they’ve produced. I think the whole domain of data as a potential lever for AI governance is worthy of more attention. Would be keen to see someone delve into it.
I like the thought that CA regulating AI might be seen as a particularly credible signal that AI regulation makes sense, and that it might therefore be more likely to produce a de jure effect. I don’t know how seriously to take this mechanism, though. E.g., to what extent is it overshadowed by CA being heavily Democratic? The most promising way to figure this out in more detail seems to me to be talking to legislators in other states and looking at the extent to which previous CA AI-relevant regulation or policy narratives have seen any diffusion. Data privacy and facial recognition stand out as the most promising to look into, but maybe there’s also stuff w.r.t. autonomous vehicles.
Yeah, I’m really bullish on data privacy being an effective hook for realistic AI regulation, especially in CA. I think that, if done right, it could be the best option for producing a CA effect for AI. That’ll be a section of my report :)
Funnily enough, I’m talking to state legislators from NY and IL next week (each for a different reason, both completely unrelated to my project). I’ll bring this up.
Thanks!
Great! Looking forward to seeing it!