Interesting, thanks! I think those points broadly make sense to me.
So I just want to clarify that, even though I’m predicting a counterfactually worse outcome, I’m not necessarily predicting a dystopia for most people, or a scenario in which most people’s lives are net negative. A dystopian future is conceivable, but doesn’t necessarily follow from a lack of democracy.
I think this is a good point, but I also think that:
The use of the term “dystopia” without clarification is probably not ideal
A future that’s basically like the current-day Hanoi everywhere forever is very plausibly an existential catastrophe (given Bostrom/Ord’s definitions and some plausible moral and empirical views)
(This is a very different claim from “Hanoi is supremely awful by present-day standards”, or even “I’d hate to live in Hanoi myself”)
In my previous comment, I intended for things like “current-day Hanoi everywhere forever” to potentially be included among the failure modes I’m concerned about
To expand on those claims a bit:
When I use the term “dystopia”, I tend to essentially have in mind what Ord (2020) calls “unrecoverable dystopia”, which is one of his three types of existential catastrophe, along with extinction and unrecoverable collapse. And he in turn defines an existential catastrophe as “the destruction of humanity’s longterm potential.” So I think the simplest description of what I mean by “unrecoverable dystopia” would be “a scenario in which civilization will continue to exist, but it is now guaranteed that the vast majority of the value that previously was attainable will never be attained”.[1]
So this wouldn’t require that the average sentient being has a net-negative life, as long as something far better could’ve happened but is now guaranteed not to happen. And it more clearly wouldn’t require that the average person has a net-negative life, nor that the average person perceives themselves to be in a “catastrophe” or “dystopia”.
Obviously, a world in which the average person or sentient being has a net-negative life would be even worse than a world that’s an “unrecoverable dystopia” simply due to “unfulfilled potential”, and so I think your clarification of what you’re saying is useful. But I already wasn’t necessarily thinking of a world with average net-negative lives (though I failed to clarify this).
[1] That said, Ord’s own description of what he means by “unrecoverable dystopia” seems misleading: he describes it as a type of existential catastrophe in which “civilization [is] intact, but locked into a terrible form, with little or no value”. I assume he means “terrible” and “little or no” when compared against an incredibly excellent future that he considers attainable. But it’d be very easy for someone to interpret his description as meaning the term applies only to futures that are very net-negative.
I also think “dystopia” might not be an ideal term for what Ord and I want to refer to, both because it invites confusion and because it might sound silly/sci-fi/weird.
(See also Venn diagrams of existential, global, and suffering catastrophes and Clarifying existential risks and existential catastrophes.)