I think it’s also worth mentioning that hazard remediation, in and of itself, is far too limited without superintelligent AI. Natural hazards like entropy and astrophysical threats like supernovae and gamma-ray bursts would kill off non-superintelligent life several orders of magnitude sooner than they would superintelligent life. It might not be worth mentioning, though. Nate evidently disagrees:
There basically aren’t any natural threats that threaten all humans, once we’ve spread a bit through space. “Entropy” isn’t really a threat, except as a stand-in for “we might not use our resources efficiently, resulting in waste”. (Or I guess “we might not do due diligence in trying to discover novel physics that might grant us unlimited negentropy”.)
I’ll note in passing that the view I’m presenting here reflects a super low degree of cynicism relative to the surrounding memetic environment. I think the surrounding memetic environment says “humans left unstomped tend to create dystopias and/or kill themselves”, whereas I’m like, “nah, you’d need somebody else to kill us; absent that, we’d probably do fine”. (I am not a generic cynic!)