So that makes it sound like we might want to aim for good post-human/transhuman scenarios (if aiming for the good versions specifically is relatively tractable), or for good scenarios in which something non-human is very much in control (like developing a friendly agential AI).
I’m not sure that follows. I mainly think that the meaning of the question “Will the future be democratic?” becomes much less clear when applied to fully/radically post-human futures. But I don’t see an obvious reason to think that such futures would be ‘politically better’ than futures that are more recognizably human. So, at least at the moment, I’m not inclined to treat this as a major reason to push for a more or less post-human future.
That sounds to me like a 4-in-5 chance of something that might well itself be an existential catastrophe (global authoritarianism that lasts indefinitely long), or might substantially increase the chances of some other existential catastrophe (e.g., because it’s harder to have a long reflection and so bad values get locked in). … But maybe you don’t see [this possibility] as necessarily that concerning? E.g., maybe you think that something like mild or genuinely enlightened and benevolent authoritarianism accounts for a substantial part of the likelihood of authoritarianism?
On the implications of my prediction for future people:
I definitely think of my prediction as, at least, bad news for future people. I’m a little unsure exactly how bad the news is, though.
Democratic governments are currently, on average, much better for the people who live under them. It’s not always possible to be totally sure of causation, but massacres, famines, serious suppressions of liberties, etc., have clearly been much more common under dictatorial governments than under democratic ones. There are also pretty basic reasons to expect democracies to typically be better for the people under them: there’s a stronger link between government decisions and people’s preferences. I expect this logic to hold even if many of the specific ways in which dictatorships are on average worse than democracies (like higher famine risk) become less relevant in the future.
At the same time, I’m not sure we should be imagining a dystopia. Most people alive today live under dictatorial governments, and, for most of these people, daily life doesn’t feel like a boot on the face. The average person in Hanoi, for example, doesn’t think of themselves as living in the midst of catastrophe. Growing prosperity and some forms of technological progress are also reasons to expect quality of life to go up over time, even if the political situation deteriorates.
So I just want to clarify that, even though I’m predicting a counterfactually worse outcome, I’m not necessarily predicting a dystopia for most people, or a scenario in which most people’s lives are net negative. A dystopian future is conceivable, but doesn’t necessarily follow from a lack of democracy.
On the implications of my prediction for “value lock-in,” more broadly:
I think the main benefit of democracy, in this case, is that we should probably expect a wider range of values to be taken into account when important decisions with long-lasting consequences are made. Inclusiveness and pluralism of course don’t always imply morally better outcomes. But moral uncertainty considerations probably push in the direction of greater inclusivity/pluralism being good in expectation. From some perspectives, it’s also inherently morally valuable for important decisions to be made in inclusive/pluralistic ways. Finally, I expect the average dictator to have worse values than the average non-dictator.
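To gesture at why moral uncertainty might favor pluralism, here is a minimal expected-choiceworthiness sketch (my own toy formalization using the standard maximize-expected-choiceworthiness framing, not anything from the original discussion):

\[
\mathbb{E}[\mathrm{CW}(a)] = \sum_{i=1}^{n} p_i \, \mathrm{CW}_i(a)
\]

where \(p_i\) is one’s credence in value system \(v_i\) and \(\mathrm{CW}_i(a)\) is how choiceworthy action \(a\) is by that system’s lights. A lock-in decision by a single actor tends to be an all-or-nothing bet on one \(v_i\), whereas a more pluralistic process tends toward compromise actions that avoid the worst case under each moderately probable \(v_i\); such a compromise can score higher in expectation even if it’s optimal under none of the individual systems.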
I actually haven’t thought very hard about the implications of dictatorship and democracy for value lock-in, though. I think I also probably have a bit of a reflexive bias toward democracy here.
I think the main benefit of democracy, in this case, is that we should probably expect a wider range of values to be taken into account when important decisions with long-lasting consequences are made. Inclusiveness and pluralism of course don’t always imply morally better outcomes. But moral uncertainty considerations probably push in the direction of greater inclusivity/pluralism being good in expectation.
It sounds like you mainly have in mind something akin to preference aggregation. It seems to me that a similarly important benefit might be that democracies are likely more conducive to a free exchange of ideas/perspectives and to people converging on more accurate ideas/perspectives over time. (I have in mind something like the marketplace of ideas concept. I should note that I’m very unsure how strong those effects are, and how contingent they are on various features of the present world which we should expect to change in future.)
Did you mean for your comment to imply that idea as well? In any case, do you broadly agree with that idea?
Interesting, thanks! I think those points broadly make sense to me.
So I just want to clarify that, even though I’m predicting a counterfactually worse outcome, I’m not necessarily predicting a dystopia for most people, or a scenario in which most people’s lives are net negative. A dystopian future is conceivable, but doesn’t necessarily follow from a lack of democracy.
I think this is a good point, but I also think that:
- The use of the term “dystopia” without clarification is probably not ideal.
- A future that’s basically like current-day Hanoi everywhere forever is very plausibly an existential catastrophe (given Bostrom/Ord’s definitions and some plausible moral and empirical views).
  - (This is a very different claim from “Hanoi is supremely awful by present-day standards”, or even “I’d hate to live in Hanoi myself”.)
- In my previous comment, I intended for things like “current-day Hanoi everywhere forever” to be potentially included among the failure modes I’m concerned about.
To expand on those claims a bit:
When I use the term “dystopia”, I tend to essentially have in mind what Ord (2020) calls “unrecoverable dystopia”, which is one of his three types of existential catastrophe, along with extinction and unrecoverable collapse. And he defines an existential catastrophe in turn as “the destruction of humanity’s longterm potential.” So I think the simplest description of what I mean by “unrecoverable dystopia” would be “a scenario in which civilization will continue to exist, but it is now guaranteed that the vast majority of the value that previously was attainable will never be attained”.[1]
(See also Venn diagrams of existential, global, and suffering catastrophes and Clarifying existential risks and existential catastrophes.)
So this wouldn’t require that the average sentient being has a net-negative life, as long as something far better could’ve happened but is now guaranteed not to happen. And it more clearly wouldn’t require that the average person has a net-negative life, nor that the average person perceives themselves to be in a “catastrophe” or “dystopia”.
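As a rough way of making the “unfulfilled potential” reading precise (the notation here is my own, not Ord’s):

\[
\text{existential catastrophe at } t \iff V^{*}(t^{+}) \leq \epsilon \cdot V^{*}(t^{-}), \qquad \epsilon \ll 1
\]

where \(V^{*}(t)\) is the best value still attainable from time \(t\) onward. Realized average welfare doesn’t appear in the condition at all: a future of reasonably comfortable lives can satisfy it, so long as vastly better futures were attainable beforehand and have been permanently foreclosed.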
Obviously, a world in which the average person or sentient being has a net-negative life would be even worse than a world that’s an “unrecoverable dystopia” simply due to “unfulfilled potential”, and so I think your clarification of what you’re saying is useful. But I already wasn’t necessarily thinking of a world with average net-negative lives (though I failed to clarify this).
[1] That said, Ord’s own description of what he means by “unrecoverable dystopia” seems misleading: he describes it as a type of existential catastrophe in which “civilization [is] intact, but locked into a terrible form, with little or no value”. I assume he means “terrible” and “little or no” relative to the incredibly excellent future he considers attainable. But it’d be very easy for someone to interpret his description as meaning the term applies only to futures that are very net-negative.
I also think “dystopia” might not be an ideal term for what Ord and I want to be referring to, both because it invites confusion and because it might sound silly/sci-fi/weird.