This is a very interesting read. I have some feedback (mostly neutral) to help further shape these ideas, mostly just to prompt some interesting extra directions of thought. It’s not a list of criticisms, but rather me spitballing ideas on top of your foundations:
A product that carries large negative externalities
You mention aerospace and nuclear as good demonstrations of regulation, and to an extent I agree, but a potential weakness here is that the strength of regulation in those fields comes a lot from the fact that those markets are monopsonies or close to it. That is, there are very few builders of these systems, and those builders have very few (or singular) customers, as well as access to top-tier legal talent. AI development is much more diverse, and I think this makes regulation harder. Not saying your idea on this element is bad (it’s very good), but it’s something to bear in mind. This would be an interesting governance category to split into subcategories.
Innovation policy
This is another good idea, and the food for thought here relates closely to the above, again via the nuclear mention. One thing I’d look into is positive influence on procurement: make it more rewarding for an organisation to buy (or build) safer AI than the financial reward for not doing so. Policing in England and Wales is undergoing a subtle shift like this right now, and it has been very impactful.
A national security risk
Obviously it’s hard to get detailed in an overview post, but WMDs are regulated in a specific way which doesn’t necessarily map well onto NatSec more broadly. There’s some great research right now on how NatSec-related algorithms and transparency threats are beginning to be regulated, including some recent trials of regulations.
Preventing competitive dynamics
Not much to say here, as this is outside my area of expertise. I’ll leave it for others.
As an instrument of great power conflict
This was an interesting point. One thing I’d highlight: though most of my work is in AI regulation, I’ve done a fair amount of space regulation too, and a thing to bear in mind is that space law has aged horribly and is stagnant. One of the main issues is that it was written when there were three space powers (mainly the US and USSR, with the UK as a US-aligned third), and the regulation was drafted on the assumption of a major technological bottleneck to reaching space and the ability of two nations to ‘police’ it all. This is more true of the Outer Space Treaty than of some of the treaties that followed and built on it. Obviously the modern day bears very little resemblance to that era, which makes things more difficult, e.g. private entities exploring space and modern technology allowing autonomy. It’s worth thinking about how we would avoid the same fate when we don’t know what the world will look like in 10, 25, or 50 years.
Improving consumer welfare
This was a great point. Not much further direction to add here, other than suggesting a look at how some laws have been successfully future-proofed against AI changes, versus how others have been less so.
Political economy
I’ve done a bunch of work in this area, but haven’t really got anything to add beyond what you’ve put.
Military Technology
One of the major bottlenecks with this category is that most new projects stay behind closed doors for decades. EA currently lacks much of a MilTech presence, which is a missed opportunity IMO.
All in all, this is an interesting way to group AI governance areas, but I think some additional attention could be paid to how markets behave in each area and how that affects regulation. Perhaps an extra category for monopsonies and large-market suppliers, at opposite ends of a spectrum?