Red team: Is existential security likely, assuming that we avoid existential catastrophe for a century or two?
Some reasons I doubt that existential security is the default outcome we should expect:
Even a superintelligent, aligned AI might be flawed and eventually fail catastrophically
The vulnerable world hypothesis: some future technology may make civilization easy to destroy by default
Society is fairly unstable, so the institutions needed to maintain safety over the long run may not persist
Unregulated expansion throughout the galaxy may reduce extinction risk, but it may increase s-risks and may not be desirable