I’m not sure if you intend to do a separate post on it, so I’ll include this feedback here. You argue that:
Conditional on successfully preventing an extinction-level catastrophe, you should expect Flourishing to be (perhaps much) lower than otherwise, because a world that needs saving is more likely to be uncoordinated, poorly directed, or vulnerable in the long run
This seems quite unclear to me. In the supplement you describe one reason it might be false (uncertainty about future algorithmic efficiency). But it seems to me there is a much bigger one: competition.
In general, competition of various kinds seems like it has been one of the most positive forces for human development—competition between individuals for excellence, between scientists for innovation, between companies for cost-effectively meeting consumer wants, and between countries. Historically ‘uncoordinated’ competition has often had much better results than coordination! But it also drives AI risk. A world with a single leviathan seems like it would have higher chances of survival, but also lower flourishing.
In general, competition of various kinds seems like it has been one of the most positive forces for human development—competition between individuals for excellence, between scientists for innovation, between companies for cost-effectively meeting consumer wants, and between countries. Historically ‘uncoordinated’ competition has often had much better results than coordination!
I agree with the historical claim (with caveats below), but I think how that historical success ports over to future expected success is at best very murky.
A few comments on why:
Taking a firm moral stance, it’s not at all obvious that the non-centralised society today really is that great. It all depends on: (i) how animals are treated (including wild animals), if we’re just looking at the here and now; (ii) how existential risk is handled, if we’re also considering the long term. (Not claiming centralised societies would have been better, but they might have been, and even small chances of changes on how e.g. animals are treated would make a big difference.)
And there are particular reasons why liberal institutions worked that don’t apply to the post-AGI world:
There are radically diminishing returns to any person or group having greater resources. So it makes sense to divide up resources quite generally, which means that there are enormous gains from trade.
There are huge gains to be had from scientific and economic developments. So a society that achieves scientific and economic development basically ends up being the best society. Liberal democracy turns out to be great for that. But that won’t be a differentiator among societies in the future.
Different people have very different aims and values, and there are huge gains to preventing conflict between different groups. But (i) future beings can be designed; (ii) I suspect that conflict can be avoided even without liberal democracy.
I’m especially concerned that we have a lot of risk-averse intuitions that don’t port over to the case of cosmic ethics.
For example, when thinking about whether an autocracy would be good or bad, the natural thought is: “man, that could go really badly wrong, if the wrong person is in charge.” But, if cosmic value is linear in resources, then that’s not a great argument against autocracy-outside-of-our-solar-system; some real chance of a near-best outcome is better than a guarantee of an ok future.
And this argument gets stronger if the cosmic value of the future is the product of how well things go on many dimensions; if so, then you want success or failure on each of those dimensions to be correlated, which you get if there’s a single decision-maker.
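To see why the expected-value point can come apart from risk-averse intuitions, here is a minimal numerical sketch; all of the probabilities and payoffs below are made-up illustrative assumptions of mine, not figures from the essay:

```python
# Illustrative numbers only (my assumptions, not the essay's).

# 1. If cosmic value is roughly linear in resources, a small chance of a
#    near-best outcome can beat a guaranteed merely-ok outcome in expectation.
p_near_best = 0.10    # assumed chance the risky option goes near-optimally
v_near_best = 100.0   # assumed value of a near-best outcome
v_ok = 5.0            # assumed value of a guaranteed ok outcome

print(p_near_best * v_near_best > v_ok)   # True: 10 > 5

# 2. If total value is the *product* of success on many dimensions,
#    correlated success (e.g. a single decision-maker) beats independent success.
n_dims = 5
p_success = 0.5   # assumed chance of getting any one dimension right

ev_independent = p_success ** n_dims   # every dimension must succeed separately: ~0.03
ev_correlated = p_success              # one decision settles all dimensions at once: 0.5

print(ev_correlated > ev_independent)  # True
```

The sketch obviously leans on the linearity and multiplicativity assumptions; if value is bounded or strongly concave in resources, the risk-averse intuition comes back.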
(I feel really torn on centralisation vs decentralisation, and to be clear I strongly share the pro-liberalism, pro-decentralisation prior.)
I don’t intend to do a separate post on this argument, but I’d love more discussion of it, as it’s a bit of a throwaway argument in the series, but potentially Big if true.
I came here to make a similar comment: a lot of my p(doom) hinges on things like “how hard is alignment” and “how likely is a software intelligence explosion,” which seem to be largely orthogonal to questions of how likely we are to get flourishing. (And maybe even run contrary to it, as you point out.)
Fair, but bear in mind that we’re conditioning on your action successfully reducing x-catastrophe. So you know that you’re not in the world where alignment is impossibly difficult.
Instead, you’re in a world where it was possible to make a difference on p(doom) (because you in fact made the difference), but where that p(doom) reduction hadn’t already happened without you. I think that’s pretty likely to be a pretty messed-up world, because in the non-messed-up world the p(doom) reduction happens anyway and your action doesn’t make a difference.
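To make the conditioning step explicit, here is a quick Bayes sketch; the numbers are purely illustrative assumptions of mine, not anything from the series:

```python
# Illustrative numbers only: conditioning on "my action was pivotal"
# shifts probability toward messed-up worlds.

p_messed_up = 0.3   # assumed prior that the world is uncoordinated / messed up
p_fine = 1 - p_messed_up

p_pivotal_given_fine = 0.01        # in fine worlds, doom mostly gets handled without you
p_pivotal_given_messed_up = 0.10   # in messed-up worlds, your action is more often decisive

# Bayes' rule: P(messed up | your action was pivotal)
p_pivotal = (p_messed_up * p_pivotal_given_messed_up
             + p_fine * p_pivotal_given_fine)
posterior = p_messed_up * p_pivotal_given_messed_up / p_pivotal

print(round(posterior, 2))   # ~0.81, up from a prior of 0.3
```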
“Historically ‘uncoordinated’ competition has often had much better results than coordination!” This is so vague and abstract that it’s very hard to falsify, and I’d also note that it doesn’t actually rule out that there have been more cases where coordination got better results than competition. Phrasing at this level of vagueness and abstraction about something this highly politicized strikes me as ideological in the bad way.
I’d also say that I wouldn’t describe the successes of free market capitalism as success of competition but not coordination. Sure, they involve competition between firms, but they also involve a huge amount of coordination (as well as competition) within firms, and partly depend on a background of stable, rule-of-law governance that also involves coordination.
I feel like you are reacting to my comment in isolation, rather than as a response to a specific thing Will wrote. My comment is already significantly more concrete and less abstract than Will’s on the same topic.
When Will says ‘uncoordinated’, he clearly doesn’t mean ‘the OpenAI product team is not good at using Slack’; he means ‘competition between large groups’. Will’s key point is that marginally-saved worlds will not be very good; I am saying that the features that lead to danger here cause good things elsewhere, so marginally-saved worlds might be very good. One of these features is competition-between-relevant-units. The ontological question of what the unit of competition is doesn’t seem particularly relevant to this; neither Will nor I are disputing the importance of coordination within firms or individuals.
Thanks for sharing!
Fair point.