Doing longtermist AI policy work feels a little like aiming heavy artillery with a blindfold on. We can’t see our target, we’ve no idea how hard to push the barrel in any one direction, we don’t know how long the fuse is, we can’t stop the cannonball once it’s in motion, and we could do some serious damage if we get things wrong.
Longtermist legal work seems particularly susceptible to the Cannonball Problem, for a few reasons:
Changes to hard law are difficult to reverse—legislatures rarely revisit an issue more than a couple of times a decade, and the judiciary moves even more slowly.
At the same time, legal measures which once looked good can quickly become ineffectual due to shifts in underlying political, social or economic circumstances.
Taken together, this means that bad laws have a long time to do a lot of harm, so we need to be careful when putting new rules on the books.
This is worsened by the fact that we don’t know what ideal longtermist governance looks like. In a world of transformative AI, it’s hard to tell if the rule of law will mean very much at all. If sovereign states aren’t powerful enough to act as leviathans, it’s hard to see why influential actors wouldn’t just revert to power politics.
Underlying all of this are huge, unanswered questions in political philosophy about where we want to end up. A lack of knowledge about our final destination makes it harder to come up with ways to get there.
I think this goes some way to explaining why longtermist lawyers only have a few concrete policy asks right now despite admirable efforts from LPP, GovAI and others.
I agree. This work seems highly impactful, but with a high level of uncertainty. The normal way of reducing uncertainty is to run small trials—in the business world, the concept is known as "Fire Bullets, Then Cannonballs." But (as someone with zero technical competence in AI) I suspect that small trials might simply not be feasible here.