Luisa Rodriguez's analysis suggests that even if 99.99% of humanity were wiped out, leaving just 800,000 survivors, the probability of human extinction rises above 1 in 5000 only when they are clustered in about 80 groups and each group has at least a 90% chance of dying out, or they are clustered in about 800 groups and each group has a 99% chance of dying out.[33]
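For reference, the arithmetic behind figures like these can be sketched under the simplifying assumption that each surviving group dies out independently with the same probability. This is only an illustrative sketch, not Luisa's actual model:

```python
# Illustrative sketch only: assumes each surviving group dies out
# independently with the same probability (not Luisa's actual model).

def p_all_groups_die(num_groups: int, p_group_dies: float) -> float:
    """Probability that every one of `num_groups` independent groups dies out."""
    return p_group_dies ** num_groups

for groups, p in [(80, 0.90), (800, 0.99)]:
    prob = p_all_groups_die(groups, p)
    print(f"{groups} groups, {p:.0%} chance each dies out: "
          f"extinction probability ~ {prob:.1e} (roughly 1 in {round(1 / prob):,})")

# Output: ~2.2e-04 (about 1 in 4,600) and ~3.2e-04 (about 1 in 3,100),
# both slightly above 1 in 5,000 (2.0e-04), consistent with the figures quoted above.
```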
This misrepresents Luisa's claims in a way that I think is important, and also takes them out of context in a way I think is important.
Her post doesn't claim that the chance of extinction under the conditions you mention is as low as you say, since there could also be "fairly indirect paths" to extinction.
Her post says "In this post, I explore the probability that if various kinds of catastrophe caused civilizational collapse, this collapse would fairly directly lead to human extinction", and "this piece focuses on whether civilizational collapse would lead more or less directly to human extinction without additional catalyzing conditions, like a second catastrophe (either soon or long after the first) or centuries of economic stagnation that eventually end in human extinction."
I also personally think the less direct paths should be seen as more worrying than the fairly direct paths (though I think reasonable people can disagree here).
Unfortunately, I think Luisa isn't very clear about precisely what she's including in scope vs. not, i.e. what "fairly directly" means, as I expressed in this comment.
Her post and your statement are just about extinction risk, but your post's title indicates your post is about existential risk. I think this makes your statement misleading (though this isn't a way it's misrepresenting Luisa), since you don't also say something there about non-extinction existential risk.
I also personally think we should be similarly or more worried about the sort of scenario you mention there leading to something like unrecoverable dystopia than to extinction (though I think reasonable people can disagree on this).
(Incidentally, I was worried that precisely this sort of misrepresentation and taking out of context would occur when I first read her post.)
(Also, as expressed in my other comment, overall I really appreciate this post! This comment is just a criticism of one specific part of the post, and I don't think it should radically change your high-level conclusions. Though I do personally think it's worth editing to fix this, since I think that passage could leave people with faulty views on an important question.)
Do you mean non-extinction existential risks? I can think of non-x-risk scenarios that involve humans going extinct, but those are extremely noncentral.
Whoops! Yeah, that was just a typo. Now fixed.
Thanks, this is a great comment! I'm going to edit the main post to reflect some of this.
Do (1) a second catastrophe and (2) failure for civilization to recover exhaust the possibilities for "indirect paths"? I've thought about this less than the other points in my main post, but I think I disagree that these are as worrying as the direct path. I think it's possible they're of the same order of magnitude as, but less likely in expectation than, the direct pathway from war to existential risk via extinction.
First, catastrophes in general are just very unlikely, and I think the "period of vulnerability" following a war would probably be surprisingly short (on the order of 100 years rather than thousands). Post-WWII recovery in Europe took place over the course of a few years. The US funded some of this recovery via the Marshall Plan, but the investment wasn't that big (probably <5% of national income).[1] There's also a paper that found that, just 27 years after the Vietnam War, there was no difference in economic development between areas that were heavily bombed by the US and areas that weren't.[2]
A war 10-30 times more severe than WWII would obviously take longer to recover from, but I still think we're talking about decades or centuries rather than millennia for civilization to stabilize somewhere (albeit at a much diminished population).
Second, I find it hard to think of specific reasons why we would expect long-term civilizational stagnation. I think a catastrophic war could wipe out most of the world population, but still leave several million people alive. New Zealand alone has 5M people, for example. Humanity has previously survived much smaller population bottlenecks. Conditional on there being survivors, it also seems likely to me that they survive in at least several different places (various islands and isolated parts of the world, for example). That gives us multiple chances for some population to get it together and restart economic growth, population growth, and scientific advancement.
I'd be interested to hear more about why you think the "less direct paths should be seen as more worrying than the fairly direct paths".
"The Marshall Plan's accounting reflects that aid accounted for about 3% of the combined national income of the recipient countries between 1948 and 1951" (from Wikipedia; I haven't chased down the original source, so caveat emptor)
"U.S. bombing does not have a robust negative impact on poverty rates, consumption levels, infrastructure, literacy or population density through 2002. This finding suggests that local recovery from war damage can be rapid under certain conditions, although further work is needed to establish the generality of the finding in other settings." (Miguel & Roland, abstract, https://eml.berkeley.edu/~groland/pubs/vietnam-bombs_19oct05.pdf)
[written quickly, sorry]
One indication of my views is this comment I made on Luisa's post (emphasis added):
That said, I think that, personally, my main reasons for concern about such events were in any case not that they might fairly directly lead to extinction.
Rather, it was that such events might:
Trigger other bad events (e.g., further conflict, development and/or deployment of dangerous technologies) that ultimately lead to extinction
Lead to unrecoverable collapse, or
Lead to something like an unrecoverable dystopia, meaning any scenario in which humanity survives and regains industrial civilization, but with substantially less good outcomes than could've been achieved. One of many ways this could occur is negative changes in values.
(I think my views on this are pretty similar to those Beckstead expresses here)
I think this post has updated me towards somewhat less concern about such events causing extinction by triggering other bad events.
This is partly because you provide some arguments that conflict in the aftermath wouldn't be extreme or would be survived.
(That said, I'm not sure how convincing I found those particular parts - I might expand on this in another comment - and I'm a bit confused about why WMDs were mentioned in Case 2 but not Case 1 or Case 3.)
But it hasn't caused a major update regarding the other two of those pathways.
Which is fair enough: one post can't cover everything, and you explicitly noted that you're setting those matters aside for followup posts.
Relatedly, I'm excited to read those followup posts!
I think "[the period before recovery might be only] on the order of 100 years" offers little protection if we think we're living at an especially "hingey" time; a lot could happen in this specific coming 100 years, and the state society is in when those key events happen could be a really big deal.
Also, I agree that society simply remaining small or technologically stagnant or whatever indefinitely seems very unlikely. But I'm more worried about either:
"Big Deal Events" happening during the "shaken up" period (all very roughly speaking, of course!) and thus being handled worse, or
Failure to recover on some other dimensions of civilization, e.g. political and moral progress.
See also https://forum.effectivealtruism.org/posts/qY5q2QTG44avBbNKn/modelling-the-odds-of-recovery-from-civilizational-collapse
Background thought: I think the potential value of the future is probably ridiculously huge, and there are probably many plausible futures where humanity survives for millions of years and advances technologically past the current frontiers and nothing seems obviously horrific, but we still fall massively short of how much good we could've achieved. E.g., we choose to stay on Earth or in the solar system forever, we spread to other solar systems but still through far less of the universe than we could've, we never switch to more efficient digital minds, we never switch to something close to the best kind of digital minds having the best kind of lives/experience/societies, we cause unrecognised/not-cared-about large-scale suffering of nonhuman animals or some types of digital beings, ...
So I think we might need to chart a careful course through the future, not just avoiding the super obvious pitfalls. And for various fuzzy reasons, I tentatively think we're notably less likely to chart the right course following a huge but not-immediately-existential catastrophe than if we avoid such catastrophes, though I'm not very confident about that.
Thanks, this is really helpful. I think a hidden assumption in my head was that the hingey time is put on hold while civilization recovers, but now I see that that's pretty questionable.
I also share your feeling that, for fuzzy reasons, a world with "lesser catastrophes" is significantly worse in the long term than a world without them. I'm still trying to bring those reasons into focus, though, and think this could be a really interesting direction for future research.
Regarding the "long-term stagnation" point: to me this suggests you're thinking of the current epoch of history as showcasing the inevitable. Yet stagnation in this sense was the norm for the 200,000+ years that modern Homo sapiens has existed on Earth. Hence, there is a real question whether this period represents a continued given, a blip, the last hurrah before the end, or perhaps the start of a much more complex trajectory of history, one perhaps involving multiple periods of rapid technological flourishing and then periods of stagnation or even decline, in various patterns and ways, and varying geographically as well.
One thing to note about history or culture is that there are no inherent drivers toward "greater complexity"; indeed, from an anthropological point of view, one can question just what that means. It is, in this regard, much like biological evolution outside the human realm. In both biology and anthropology, there is and should be strong skepticism toward any claim of teleology or a linear narrative.
That said, I would still maintain that there is a key distinction between long-term stagnation and extinction, even if the former is definitely not something one should rule out: in the latter case, there is absolutely no recovery. While it's possible another intelligent toolmaking species could evolve, the future of geological history is potentially much more predictable than human history, and the gradual heating of the Sun suggests that we could be Earth's only shot. It's like the difference between life imprisonment and the death penalty: the former is not fun at all, but there's a reason there's so much resistance to the latter, and it's that key point of irreversibility.