One point that hasn’t been mentioned: GCRs may be many, many orders of magnitude more likely than extinctions. For example, it’s not hard to imagine a super-deadly virus that kills 50% of the world’s population, but a virus that manages to kill literally everyone, including people hiding out in bunkers, remote villages, and Antarctica, doesn’t make much sense: if it were that lethal, it would probably burn out before reaching everyone.
The relevant comparison in this context is not with human extinction but with an existential catastrophe. A virus that killed everyone except humans in extremely remote locations might well destroy humanity’s long-term potential. It is not plausible—at least not for the reasons provided—that “GCRs may be many, many orders of magnitude more likely than” existential catastrophes, on reasonable interpretations of “many, many”.
(Separately, the catastrophe may involve a process that intelligently optimizes for human extinction, driven by either human or non-human agents, so I also think that the claim as stated is false.)
A virus that killed everyone except humans in extremely remote locations might well destroy humanity’s long-term potential
How?
I see it delaying things while the population recovers, but it’s not like humans will suddenly become unable to learn to read. Why would humanity not simply pick itself up and recover?
Two straightforward ways (more have been discussed in the relevant literature) are by making humanity more vulnerable to other threats and by pushing humanity back behind the Great Filter (about whose location we should be pretty uncertain).
This is very vague. What other threats? It seems like a virus wiping out most of humanity would decrease the likelihood of other threats. It would put an end to climate change, reduce the motivation for nuclear attacks and the ability to maintain a nuclear arsenal, reduce the likelihood of people developing AGI, etc.
Humanity’s chances of realizing its potential are substantially lower when there are only a few thousand humans around, because the species will remain vulnerable for a considerable time before it fully recovers. The relevant question is not whether the most severe current risks will be as serious in this scenario, because (1) other risks will then be much more pressing and (2) what matters is not the risk survivors of such a catastrophe face at any given time, but the cumulative risk to which the species is exposed until it bounces back.
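To make the cumulative-risk point concrete, here is a toy calculation; the per-year risk and the recovery time are assumed numbers chosen purely for illustration, not estimates from the original discussion:

```latex
% Toy illustration of cumulative risk during a long recovery.
% Assumptions (hypothetical): p = annual probability of a follow-on
% existential catastrophe while humanity is still recovering,
% T = number of years until full recovery.
P(\text{catastrophe during recovery}) = 1 - (1 - p)^{T}
% Example with p = 0.001 and T = 1000 years:
1 - (1 - 0.001)^{1000} \approx 1 - 0.368 \approx 0.63
```

Even a seemingly small annual risk can compound into a large cumulative risk if the recovery period is long, which is why the per-year risk faced by survivors at any given moment is not the relevant quantity.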