The Cannonball Problem:
Doing longtermist AI policy work feels a little like aiming heavy artillery with a blindfold on. We can't see our target, we've no idea how hard to push the barrel in any one direction, we don't know how long the fuse is, we can't stop the cannonball once it's in motion, and we could do some serious damage if we get things wrong.
Longtermist legal work seems particularly susceptible to the Cannonball Problem, for a few reasons:
Changes to hard law are difficult to reverse: legislatures rarely consider an issue more than a couple of times a decade, and the judiciary takes even longer.
At the same time, legal measures which once looked good can quickly become ineffectual due to shifts in underlying political, social or economic circumstances.
Taken together, this means that bad laws have a long time to do a lot of harm, so we need to be careful when putting new rules on the books.
This is worsened by the fact that we don't know what ideal longtermist governance looks like. In a world of transformative AI, it's hard to tell if the rule of law will mean very much at all. If sovereign states aren't powerful enough to act as leviathans, it's hard to see why influential actors wouldn't just revert to power politics.
Underlying all of this are huge, unanswered questions in political philosophy about where we want to end up. A lack of knowledge about our final destination makes it harder to come up with ways to get there.
I think this goes some way to explaining why longtermist lawyers only have a few concrete policy asks right now despite admirable efforts from LPP, GovAI and others.
I agree. It seems like a highly impactful thing, with a high level of uncertainty. The normal way of reducing uncertainty is to run small trials. My understanding of this concept from the business world is the idea of Fire Bullets, Then Cannonballs. But (as someone with zero technical competence in AI) I suspect that small trials might simply not be feasible.
Focusing more on data governance:
GovAI now has a full-time researcher working on compute governance. Chinchilla's Wild Implications suggests that access to data might also be a crucial leverage point for AI development. However, from what I can tell, there are no EAs working full time on how data protection regulations might help slow or direct AI progress. This seems like a pretty big gap in the field.
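For context, here's a minimal back-of-the-envelope sketch of why data can bind. It leans on two rough rules of thumb from the scaling-law literature (not anything in this post): training compute C ≈ 6·N·D, and a compute-optimal ratio of roughly 20 training tokens per parameter. The constants are approximations, so treat the outputs as illustrative only.

```python
# Rough, illustrative sketch of why data looks like a leverage point under
# Chinchilla-style scaling. Constants are approximations from the
# scaling-law literature, not exact figures.

def chinchilla_optimal(train_flops: float) -> tuple[float, float]:
    """Return (params, tokens) that roughly minimise loss for a compute budget.

    Assumes C ~= 6 * N * D and a compute-optimal ratio of D ~= 20 * N
    (about 20 training tokens per parameter), so N = sqrt(C / 120).
    """
    params = (train_flops / (6 * 20)) ** 0.5
    tokens = 20 * params
    return params, tokens

if __name__ == "__main__":
    for flops in (1e23, 1e24, 1e25):
        n, d = chinchilla_optimal(flops)
        print(f"{flops:.0e} FLOPs -> ~{n:.1e} params, ~{d:.1e} tokens")
```

On these (rough) numbers, a 10^25 FLOP run already wants several trillion tokens, which is why access to large, high-quality datasets could plausibly act as a governance lever alongside compute.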
What's going on here? I can see two possible answers:
Folks have suggested that compute is relatively easy to govern (e.g.). Someone might have looked into this and decided that data is just too hard to control, and we're better off putting our time into compute.
Someone might already be working on this and I just haven't heard of it.
If anyone has an answer to this I'd love to know!
NB: One reason this might be tractable is that lots of non-EA folks are working on data protection already, and we could leverage their expertise.
No Plans for Misaligned AI:
This talk by Jade Leung got me thinking: I've never seen a plan for what we do if AGI turns out misaligned.
The default assumption seems to be something like "well, there's no point planning for that, because we'll all be powerless and screwed". This seems mistaken to me. It's not clear that we'll be so powerless that we have absolutely no ability to encourage a trajectory change, particularly in a slow takeoff scenario. Given that most people weight alleviating suffering more highly than promoting pleasure, this is especially valuable work in expectation, as it might help us change outcomes from a "very, very bad" world to a "slightly negative" one. This also seems pretty tractable: I'd expect ~10hrs of thinking about this could produce a very barebones playbook.
Why isn't this being done? I think there are a few reasons:
Like suffering-focused ethics, it's depressing.
It seems particularly speculative: most of the "humanity becomes disempowered by AGI" scenarios look pretty sci-fi, so serious academics don't want to consider it.
People assume, mistakenly IMO, that we're just totally screwed if AI is misaligned.
Looking for an accountability buddy:
I'm working on some EA-relevant research right now, but I'm finding it hard to stay motivated, so I'm looking for an accountability buddy.
My thought is that we could set aside ~4hrs a week where we commit to call and work on our respective projects, though I'm happy to be flexible on the amount of time.
If you're interested, please reach out in the comments or DM me.