I really liked this post, thanks for writing it! I’m much more sympathetic to ideal governance now.
Two fairly off-the-cuff reactions:
First, I would guess that some (most?) of the appeal of utopias is conditional on a lack of scarcity? I’m not sure how to interpret Karnofsky’s results here: he notes that his freedom-focused utopia, which made no mention of happiness, wealth or fulfilment, was still the third most popular utopia amongst respondents. However, the other top five utopias highlight a lack of scarcity, and even the freedom-focused utopia arguably implies a lack of scarcity (“If you aren’t interfering with someone else’s life, you can do whatever you want”). Naively, I’d guess that at least some people value freedom so highly only if they can do what they like with that freedom.
I think this is a problem for creating utopias that respect pluralism—it’s still possible, but I think it’s harder than it appears. I would expect people to have strong opinions about resource allocation, which makes it hard to get to post-scarcity for everyone, which makes it hard to reach a utopia that many people like. The counterargument here is that if we had resource abundance, people would be much happier sharing resources, and this would make getting to a post-scarcity state for everyone much easier, but I’m a little sceptical about this. (Billionaires are quite close to resource abundance, and most of them seem quite keen to hold onto their personal wealth/don’t seem very interested in redistributing their resources?)
It might still be motivating to construct pluralistic utopias which assume post-scarcity for everyone, even if this condition is unlikely to be met in practice, but I’m less confident that utopias which require a fairly unlikely condition will be action-guiding.
Second, I agree that AI ideal governance theories are useful for action. But shouldn’t we care more about just how useful for action they are? I’m not sure how valuable it is to work on AI ideal governance vs. working on e.g. more applied AI policy, and it seems like you need a stronger claim than “AI ideal governance is useful” to motivate working on it. (Probably >0 people should work on ideal governance? But without a compelling argument that it’s more valuable on the margin than other AI governance work, I’m not sure how many people should work on it.)
Thank you! A few quick thoughts on some great points:
Other than Karnofsky’s piece, I didn’t find much empirical research trying to understand why people find utopias appealing, but I share the intuition that a lack of scarcity/great wealth is often a plausible reason. There’s an interesting open empirical question about how that relates to people’s views about freedom, and a related normative one about our intuitions regarding the importance of freedom more generally.
Agree that it’s very hard to construct pluralism-respecting utopias, and even harder to work out ways to get there. In the post, my main aim was to question the idea that the former is impossible.
Definitely think the question of how useful AI ideal governance theories can be is the next step of the discussion, after establishing the ways in which they can be helpful. I don’t have many abstract thoughts on how many people should be working in this sub-field (this may depend on which problem is being tackled at a given time) - mainly I wanted to establish that, e.g., people interested in policy shouldn’t see it as a totally irrelevant approach.