At the very least, in my view, the picture has shifted in an EU-favoring direction over the last year (despite lots of progress in US AI policy). That should prompt a re-evaluation of the conventional wisdom, as I understand it, that the US has enough leverage over AI development that policy careers in DC are more impactful even for Europeans.
Interesting! I don’t quite understand what updated you. To me it looks like there is less leverage in the EU, not more, given that the EU AI Act is mostly determined at this stage. Meanwhile, the approach the US takes to AI regulation remains uncertain, which suggests many more opportunities for impact.
The text of the Act is mostly determined, but it delegates tons of very important detail to standard-setting organizations and implementation bodies at the member-state level.
And your update is that this process will be more globally impactful than you initially expected? Would be curious to learn why.
The shape of my updates has been something like:
Q2 2023: Woah, looks like the AI Act might have a lot more stuff aimed at the future AI systems I’m most worried about than I thought! Making that go well now seems a lot more important than it did when it looked like it would mostly be focused on pre-foundation model AI. I hope this passes!
Q3 2023: As I learn more about this, it seems like a lot of the value will come from the implementation process. Depending on how the standard-setting orgs and member states operationalize it, the same text in the Act could wind up either specifically requiring things that meaningfully reduce the risks, or just imposing costs at many points in the process without aiming at the most important parts. But for any of that to happen, the Act needs to pass without the general-purpose AI provisions being removed.
November 2023: Oh no, France and Germany want to take out the stuff I was excited about in Q2. Maybe this will not be very impactful after all.
December 2023: Oh good, it seems like they’ve actually figured out a way to concentrate the costs France and Germany were worried about on the very most dangerous AIs. This looks like it will wind up closer to what I was hoping for pre-November, and it is now highly likely to pass!