Treating “good future” and “irreversibly messed up future” as exhaustive seems clearly incorrect to me.
Consider for instance the risk of an AI-stabilized personalist dictatorship, in which literally all political power is concentrated in a single immortal human being.[1] Clearly things are not going great at this point. But whether they’re irreversibly bad hinges on a lot of questions about human psychology—about the psychology of one particular human, in fact—that we don’t have answers to.
There’s some evidence that Big Five Agreeableness increases slowly with age. Would that trend hold up over thousands of years?
How long-term are long-term memories (augmented to whatever degree human mental architecture permits)?
Are value differences between humans really insurmountable or merely very very very hard to resolve? Maybe spending ten thousand years with the classics really would cultivate virtue.
Are normal human minds even stable in the very long run? Maybe we all wirehead ourselves eventually, given the chance.
So it seems to me that if we’re not ruling out permanent dystopia, we shouldn’t rule out “merely” very long-lived dystopia either.
This is clearly not a “good future”, in the sense that the right response to “100% chance of a good future” is to rush towards it as fast as possible, and the right response to “10% chance of utopia till the stars go cold, 90% chance of spending a thousand years beneath Cyber-Caligula’s sandals followed by rolling the dice again”[2] is to slow down and see if you can improve the odds a bit. But it doesn’t belong in the “irreversibly messed up” bin either: even after Cyber-Caligula takes over, the long-run future is still almost certainly utopian.
[1] Personally, I think this is far less likely than AI-stabilized oligarchy (which, if not exactly a good future, is at least much less likely to go off into rotating-golden-statue-land), but my impression is that it’s the prototypical “irreversible dystopia” for most people.
[2] Obviously our situation is much worse than this.