psyched about this post, want to jot down a quick nit-pick
S-Risk can occur any time from now until the end of the race, and represents, for example, a totalitarian government seizing control of the world to such an extent that human flourishing is permanently curtailed, but the development of AI is not (so S-Risk can occur before AI is invented).
I don’t think curtailing human flourishing constitutes s-risk; the suffering-focused community doesn’t generally draw equivalences between opportunity cost and more immediate or obvious disvalue. When the s-risk community talks about malevolent actors (see CLR), they mean the association between totalitarianism and the willingness/ability to literally torture at scale, whereas other theorists (outside the suffering-focused community) may worry about a flavor of totalitarianism where everyone has a reasonable quality of life but can’t steer or exit.
One citation for the idea that opportunity costs (say, all progress except spacefaring continues) and literally everyone literally dying are morally similar is The Precipice. We can (polarizingly!) talk about “existential risk” not equalling “extinction risk” yet coming out roughly equal under some value function. This is one way of thinking about totalitarianism in the longtermist community.
Political freedoms and the valence of day-to-day experience aren’t necessarily the same thing.
Thank you, this is a really interesting comment which clarifies a confusion I had when writing the essay!