Luisa Rodriguez’s analysis suggests that even if 99.99% of humanity were wiped out, leaving just 800,000 survivors, the probability of human extinction rises above 1 in 5000 only when they are clustered in about 80 groups and each group has at least a 90% chance of dying out, or they are clustered in about 800 groups and each group has a 99% chance of dying out.[33]
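For what it's worth, the arithmetic behind those two thresholds can be checked directly under the (strong) simplifying assumption that each surviving group dies out independently, so extinction requires every group to fail:

```python
# Probability that ALL groups die out, assuming each group fails
# independently with probability p_die. This is a simplification:
# positively correlated failures (shared climate shocks, disease, etc.)
# would push the true extinction probability higher.
def extinction_prob(n_groups: int, p_die: float) -> float:
    return p_die ** n_groups

# 80 groups, each with a 90% chance of dying out
p_80 = extinction_prob(80, 0.90)    # ~2.2e-4, just above 1/5000

# 800 groups, each with a 99% chance of dying out
p_800 = extinction_prob(800, 0.99)  # ~3.2e-4, also above 1/5000

print(p_80, p_800, 1 / 5000)
```

Both cases land just above the 1-in-5000 threshold mentioned in the passage, consistent with the claim that these are roughly the parameter combinations where the probability first exceeds it.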
This misrepresents Luisa’s claims in a way that I think is important, and also takes them out of context in a way I think is important.
Her post doesn’t claim that the chance of extinction under the conditions you mention is as low as you say, since there could also be “fairly indirect paths” to extinction.
Her post says “In this post, I explore the probability that if various kinds of catastrophe caused civilizational collapse, this collapse would fairly directly lead to human extinction”, and “this piece focuses on whether civilizational collapse would lead more or less directly to human extinction without additional catalyzing conditions, like a second catastrophe (either soon or long after the first) or centuries of economic stagnation that eventually end in human extinction.”
I also personally think the less direct paths should be seen as more worrying than the fairly direct paths (though I think reasonable people can disagree here)
Unfortunately, I think Luisa isn’t very clear about precisely what she’s including in scope vs. not, i.e. what “fairly directly” means, as I expressed in this comment.
Her post and your statement are just about extinction risk, but your post’s title indicates your post is about existential risk. I think this makes your statement misleading (though this isn’t a way it’s misrepresenting Luisa), since you don’t also say something there about non-extinction existential risk.
I also personally think we should be similarly or more worried about the sort of scenario you mention there leading to something like unrecoverable dystopia than to extinction (though I think reasonable people can disagree on this)
(Incidentally, I was worried precisely this sort of misrepresentation and taking out of context would occur when I first read her post.)
(Also, as expressed in my other comment, overall I really appreciate this post! This comment is just a criticism of one specific part of the post, and I don’t think it should radically change your high level conclusions. Though I do personally think it’s worth editing to fix this, since I think that passage could leave people with faulty views on an important question.)
Do you mean non-extinction existential risks? I can think of non-x-risk scenarios that involve humans going extinct, but those are extremely noncentral.
Whoops! Yeah, that was just a typo. Now fixed.
Thanks, this is a great comment! I’m going to edit the main post to reflect some of this.
Does (1) a second catastrophe and (2) failure of civilization to recover exhaust the possibilities for “indirect paths”? I’ve thought about this less than the other points in my main post, but I think I disagree that these are as worrying as the direct path. I think it’s possible they’re on the same order of magnitude as, but less likely in expectation than, the direct pathway from war to existential risk via extinction.
First, catastrophes in general are just very unlikely, and I think the ‘period of vulnerability’ following a war would probably be surprisingly short (on the order of 100 years rather than thousands). Post-WWII recovery in Europe took place over the course of a few years. The US funded some of this recovery via the Marshall Plan, but the investment wasn’t that big (probably <5% of national income).[1] There’s also a paper that found no difference in economic development, just 27 years after the Vietnam War, between areas that were heavily bombed by the US and areas that weren’t.[2]
A war 10-30 times more severe than WWII would obviously take longer to recover from, but I still think we’re talking about decades or centuries rather than millennia for civilization to stabilize somewhere (albeit at a much diminished population).
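A rough back-of-the-envelope check (with made-up figures for illustration, not estimates from the post): even if only a small fraction of today's population survived, historical growth rates put full demographic recovery on the scale of centuries rather than millennia:

```python
import math

# Hypothetical figures, chosen purely for illustration
survivors = 100e6    # assumed post-catastrophe population
target = 8e9         # roughly today's population
growth_rate = 0.01   # 1%/year, near modern historical growth rates

# Exponential growth: target = survivors * e^(r*t)
#   =>  t = ln(target / survivors) / r
years = math.log(target / survivors) / growth_rate
print(f"{years:.0f} years")  # ~438 years: centuries, not millennia
```

Of course, growth rates after a collapse could be far lower (or negative for a while), so this is only a sketch of the orders of magnitude involved.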
Second, I find it hard to think of specific reasons why we would expect long-term civilizational stagnation. I think a catastrophic war could wipe out most of the world population, but still leave several million people alive. New Zealand alone has 5M people, for example. Humanity has previously survived much smaller population bottlenecks. Conditional on there being survivors, it also seems likely to me that they survive in at least several different places (various islands and isolated parts of the world, for example). That gives us multiple chances for some population to get it together and restart economic growth, population growth, and scientific advancement.
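The “multiple chances” point can be made precise under a strong independence assumption: if there are k isolated surviving populations, each with some (assumed) probability p of eventually rekindling growth, the chance that at least one succeeds is 1 − (1 − p)^k:

```python
# Chance that at least one of k populations recovers, assuming each
# recovers independently with probability p. Independence is optimistic
# if failures share causes (climate, disease, resource depletion), so
# treat these as illustrative numbers only.
def any_recovers(k: int, p: float) -> float:
    return 1 - (1 - p) ** k

print(any_recovers(1, 0.5))   # a single refuge: 0.5
print(any_recovers(10, 0.5))  # ten refuges: ~0.999
```

Even pessimistic per-population odds give a high overall recovery chance once there are several independent refuges, which is the intuition behind the sentence above.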
I’d be interested to hear more about why you think the “less direct paths should be seen as more worrying than the fairly direct paths”.
“The Marshall Plan’s accounting reflects that aid accounted for about 3% of the combined national income of the recipient countries between 1948 and 1951” (from Wikipedia; I haven’t chased down the original source, so caveat emptor)
“U.S. bombing does not have a robust negative impact on poverty rates, consumption levels, infrastructure, literacy or population density through 2002. This finding suggests that local recovery from war damage can be rapid under certain conditions, although further work is needed to establish the generality of the finding in other settings.” (Miguel & Roland, abstract, https://eml.berkeley.edu/~groland/pubs/vietnam-bombs_19oct05.pdf)
That said, I think that, personally, my main reasons for concern about such events were in any case not that they might fairly directly lead to extinction.
Rather, it was that such events might:
Trigger other bad events (e.g., further conflict, development and/or deployment of dangerous technologies) that ultimately lead to extinction
Meaning any scenario in which humanity survives and regains industrial civilization, but with substantially less good outcomes than could’ve been achieved. One of many ways this could occur is negative changes in values.
(I think my views on this are pretty similar to those Beckstead expresses here)
I think this post has updated me towards somewhat less concern about such events causing extinction by triggering other bad events
This is partly because you provide some arguments that conflict in the aftermath wouldn’t be extreme or would be survived
(That said, I’m not sure how convincing I found those particular parts—I might expand on this in another comment—and I’m a bit confused about why WMDs were mentioned in Case 2 but not Case 1 or Case 3.)
But it hasn’t caused a major update regarding the other two of those pathways
Which is fair enough: one post can’t cover everything, and you explicitly noted that you’re setting those matters aside for follow-up posts
Relatedly, I’m excited to read those follow-up posts!
[written quickly, sorry]
One indication of my views is this comment I made on Luisa’s post (emphasis added):
I think “[the period before recovery might be only] on the order of 100 years” offers little protection if we think we’re living at an especially “hingey” time; a lot could happen in this specific coming 100 years, and the state society is in when those key events happen could be a really big deal.
Also, I agree that society simply remaining small or technologically stagnant indefinitely seems very unlikely. But I’m more worried about either:
“Big Deal Events” happening during the “shaken up” period (all very roughly speaking, of course!) and thus being handled worse, or
Failure to recover on some other dimensions of civilization, e.g. political and moral progress
See also https://forum.effectivealtruism.org/posts/qY5q2QTG44avBbNKn/modelling-the-odds-of-recovery-from-civilizational-collapse
Background thought: I think the potential value of the future is probably ridiculously huge, and there are probably many plausible futures where humanity survives for millions of years and advances technologically past the current frontiers and nothing seems obviously horrific, but we still fall massively short of how much good we could’ve achieved. E.g., we choose to stay on earth or in the solar system forever, we spread to other solar systems but still through far less of the universe than we could’ve, we never switch to more efficient digital minds, we never switch to something close to the best kind of digital minds having the best kind of lives/experience/societies, we cause unrecognised/not-cared-about large-scale suffering of nonhuman animals or some types of digital beings, …
So I think we might need to chart a careful course through the future, not just avoiding the super obvious pitfalls. And for various fuzzy reasons, I tentatively think we’re notably less likely to chart the right course following a huge but not-immediately-existential catastrophe than if we avoid such catastrophes, though I’m not very confident about that.
Thanks, this is really helpful. I think a hidden assumption in my head was that the hingey time is put on hold while civilization recovers, but now I see that that’s pretty questionable.
I also share your feeling that, for fuzzy reasons, a world with ‘lesser catastrophes’ is significantly worse in the long term than a world without them. I’m still trying to bring those reasons into focus, though, and think this could be a really interesting direction for future research.
Regarding “long-term stagnation”: this suggests you may be thinking of the current epoch of history as showcasing the inevitable. Yet stagnation in this sense was the norm for the 200,000+ years that modern Homo sapiens has existed on Earth. Hence, there is a real question whether this period represents a continued given, a blip, the last hurrah before the end, or perhaps the start of a much more complex trajectory of history, one perhaps involving multiple periods of rapid technological flourishing followed by periods of stagnation or even decline, in various patterns and varying geographically as well.
One thing to note about history and culture is that there are no inherent drivers toward “greater complexity”; indeed, from an anthropological point of view one can question just what that would mean. In this regard it is much like biological evolution outside the human realm. In both biology and anthropology, there is, and should be, strong skepticism toward any claim of teleology or a linear narrative.
That said, I would still maintain that there is a distinction between long-term stagnation and extinction, even if the former is definitely not something one should rule out: in the latter case, there is absolutely no recovery. While it’s possible another intelligent toolmaking species could evolve, the gradual heating of the Sun over the coming geological future suggests that we could be Earth’s only shot. It’s like the difference between life imprisonment and the death penalty. The former is not fun at all, but there’s a reason there’s so much resistance to the latter, and it’s that key point of irreversibility.