Bostrom’s Vulnerable World Hypothesis paper seems to suggest that existential security (xsec) isn’t going to happen: that we need a dual of Yudkowsky’s Moore’s Law of Mad Science, something that raises our vigilance at every timestep to keep pace with the falling minimum IQ it takes to destroy the world. A lifestyle of such constant vigilance seems leagues away from the goals futurists tend to get excited about, like long reflections, spacefaring, or a comprehensive assault on suffering itself. Is xsec (in the sense of freedom from extinction that is reliable and permanent enough to permit us to pursue common futurist goals) the kind of thing you would actually expect to see if you lived to the year 3000 or 30000, or do you think the world would be in a state of constant vigilance (fear, paranoia) as the bargain for staying alive? What are the most compelling reasons to think that a strong form of xsec, one that doesn’t depend on some positive rate of ever-heightening vigilance in perpetuity, is worth thinking about at all?