Feedback that I sent to Jeffrey Ladish about his application:
Excerpts from the application
I would like to spend five months conducting a feasibility analysis for a new project that has the potential to be built into an organization. The goal of the project would be to increase civilizational resilience to collapse in the event of a major catastrophe—that is, to preserve essential knowledge, skills, and social technology necessary for functional human civilization.
The concrete results of this work would include an argument for whether or not a project aimed at rebuilding after collapse would be feasible, and at what scale.
Several scholars and EAs have investigated this question before, so I plan to build off existing work to avoid reinventing the wheel. In particular, [Beckstead 2014](https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328714001888-main.pdf) investigates whether bunkers or shelters might help civilization recover from a major catastrophe. He enumerates many scenarios in which shelters would *not* be helpful, but concludes with two scenarios worthy of deeper analysis: “global food crisis” and “social collapse”. I plan to focus on “social collapse”, noting that a global food crisis may also lead to social collapse.
I expect my feasibility investigation to cover the following questions:
- Impact: what would it take for such a project to actually impact the far future?
- Tractability: what (if any) scope and scale of project might be both feasible *and* useful?
- Neglectedness: what similar projects already exist?
Example questions:
Impact:
- How fragile is the global supply chain? For example, how might humans lose the ability to manufacture semiconductors?
- What old manufacturing technologies and skills (agricultural insights? steam engine-powered factories?) would be most essential to rebuilding key capacities?
- What social structures would facilitate both survival through major catastrophes and coordination through rebuilding efforts?
Neglectedness:
- What efforts exist to preserve knowledge into the future (seed banks, book archives)? Human lives (private & public bunkers, civil defense efforts)?
Tractability:
- What funding might be available for projects aimed at civilizational resilience?
- Are there skilled people who would commit to working on such a project? Would people be willing to relocate to a remote location if needed?
- What are the benefits of starting a nonprofit vs. other project structures?
I believe the best way to measure the impact of this research will be to solicit personal feedback on the quality of the feasibility argument I produce. I would like to present my findings to Anders Sandberg, Carl Shulman, Nick Beckstead, & other experts.
If I can present a case for a civilizational resilience project which those experts find compelling, I would hope to launch a project with that goal. Conversely, if I can present a strong case that such a project would not be effective, my work could deter others from pursuing an ineffective project.
My thoughts
I feel broadly confused about the value of working on improving recovery from civilizational collapse, but overall feel more hesitant than enthusiastic. I have so far not heard of a civilizational collapse scenario that seems likely to me and in which there are concrete precautions we could take to increase the likelihood of recovery.
Since I first read your application, I have had multiple in-person conversations with both you and Finan Adamson, who used to work at ALLFED, and you both have much better models of the considerations around civilizational collapse than I do. This has helped me understand your models a lot better, but has so far not updated me much towards civilizational collapse being both likely and tractable to prepare for. However, I have updated upwards on the value of looking into this cause area in more depth and writing up the considerations around it, since I think there is enough uncertainty and potential value in this domain that getting more clarity would be worth quite a bit.
I think at the moment, I would not be that enthusiastic about someone building a whole organization around efforts to improve recovery chances from civilizational collapse, but do think that there is potentially a lot of value in individual researchers making a better case for that kind of work and mapping out the problem space more.
I think my biggest cruxes in this space are something like the following:
- Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?
- Can we build any reasonable models about what our bottlenecks will be for recovery after a significant global catastrophe? (This is likely dependent on an analysis of what specific catastrophes are most likely and what state they leave humanity in)
- Are there major risks that have a chance to wipe out more than 90% of the population, but not all of it? My models of biorisk suggest it’s quite hard to get to 90% mortality, and I think most nuclear winter scenarios would also reduce food production by less than 90%.
- Are there non-population-level dependent ways in which modern civilization is fragile that might cause widespread collapse and the end of scientific progress? If so, are there any ways to prepare for them?
- Are there strong reasons to expect the existential risk profile of a recovered civilization to be significantly better than for our current civilization? (E.g. maybe a bad experience with nuclear weapons would make the world much more aware of the dangers of technology)
I think answering any mixture of these affirmatively could convince me that it is worth investing significantly more resources into this, and that it might make sense to divert resources from catastrophic (and existential) risk prevention to working on improved recovery from catastrophic events, which I think is the tradeoff I am facing with my recommendations.
I do think that a serious investigation into the question of recovery from catastrophic events is an important part of something like “covering all the bases” in efforts to improve the long-term future. However, the field is currently still resource-constrained enough that I don’t think that alone is sufficient for me to recommend funding it.
Overall, I think I am more positive on making a grant like this than when I first read the application, though not necessarily that much more. I have however updated positively on you in particular, and think that if we want someone to perform and write up research in this space, you are a decent candidate for it. This was partially a result of talking to you, reading some of your unpublished writing, and having some people I trust vouch for you, though I still haven’t really investigated this whole area enough to be confident that the kind of research you are planning to do is really what is needed.
I want to give a brief update on this topic. I spent a couple of months researching civilizational collapse scenarios and came to some tentative conclusions. At some point I may write a longer post on this, but I think some of my other upcoming posts will address some of my reasoning here.
My conclusions after investigating potential collapse scenarios:
1) There are a number of plausible (>1% probability) scenarios in the next hundred years that would result in a “civilizational collapse”, where an unprecedented number of people die and key technologies are (temporarily) lost.
2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years.
3) The highest leverage point for intervention in a potential post-collapse environment would be at the state level. Individuals, even wealthy individuals, lack infrastructure and human resources at the scale necessary to rebuild effectively. There are some decent mitigations possible in the space of information archival, such as seed banks and internet archives, but these are far less likely to have long-term impacts compared to state efforts.
Based on these conclusions, I decided to focus my efforts on other global risk analysis areas, because I felt I didn’t have the relevant skills or resources to embark on a state-level project. If I did have those skills & resources, I believe (low to medium confidence) it would be a worthwhile project, and if I found a person or group who did possess those skills and resources, I would strongly consider offering my assistance.
1) There are a number of plausible (>1% probability) scenarios in the next hundred years that would result in a “civilizational collapse”, where an unprecedented number of people die and key technologies are (temporarily) lost.
Are you saying here that you believe the scenarios add up to a greater than 1% probability of collapse in the next hundred years, or that you believe there are multiple scenarios that each have greater than 1% probability?
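The distinction matters for the headline number. As a purely illustrative calculation (the scenario count and probabilities here are made up, and independence between scenarios is assumed):

```latex
% Illustrative only: n = 5 roughly independent collapse scenarios,
% each with probability p_i = 1% over the next hundred years.
P(\text{at least one collapse}) = 1 - \prod_{i=1}^{n} (1 - p_i) = 1 - (0.99)^{5} \approx 4.9\%
```

So several scenarios that each clear the 1% bar would imply a noticeably higher aggregate probability than a single combined 1% estimate.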
Some quick answers to your questions based on my current beliefs:
Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?
I think the answer in the short term is no, if “completely collapses” means something like “is unable to get back to at least 1950s-level technology in 500 years”. I think there are a number of things that could reduce humanity’s “technological carrying capacity”. I’m currently working on explicating some of these factors, but some examples would be drastic climate change, long-lived radionuclides, and an increase in persistent pathogens.
Can we build any reasonable models about what our bottlenecks will be for recovery after a significant global catastrophe? (This is likely dependent on an analysis of what specific catastrophes are most likely and what state they leave humanity in)
I think we can. I’m not sure we can get very confident about exactly which potential bottlenecks will prove most significant, but I think we can narrow the search space and put forth some good hypotheses, both by reasoning from the best reference class examples we have and by thinking through the economics of potential scenarios.
Are there major risks that have a chance to wipe out more than 90% of the population, but not all of it? My models of biorisk suggest it’s quite hard to get to 90% mortality, and I think most nuclear winter scenarios would also reduce food production by less than 90%.
I’m not sure about this one. I can think of some scenarios that would wipe out 90%+ of the population, but none of them seem very likely. Engineered pandemics seem like one candidate (I agree with Denkenberger here), and the worst-case nuclear winter scenarios might also do it, though I haven’t read the nuclear winter papers in a while, and there have been several new papers and comments in the last year, including real disagreement in the field (yay, finally!).
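To give a rough sense of why 90% is such a high bar, here is a back-of-the-envelope decomposition (illustrative numbers, not estimates from any specific model):

```latex
% Overall mortality decomposed as (share of population infected) x (infection fatality rate):
\text{overall mortality} \approx \text{attack rate} \times \text{IFR}
% Reaching 90% overall mortality requires both factors to be extreme, e.g.
0.95 \times 0.95 \approx 0.90
```

No historical pandemic has come close to combining an attack rate and an infection fatality rate at those levels, which is part of why I only see this happening in engineered or worst-case scenarios.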
Are there non-population-level dependent ways in which modern civilization is fragile that might cause widespread collapse and the end of scientific progress? If so, are there any ways to prepare for them?
Population seems like one important variable in our technological carrying capacity, but I expect some of the others are just as important. One that I mentioned in my other post, and which I think is a huge one, is state planning & coordination capacity. I think post-WWII Germany and Japan illustrate this quite well. However, I don’t have a very good sense of what might cause most states to fail without also destroying a large part of the population at the same time. What I’m saying is that the population factor might not be the most important one in those scenarios.
Are there strong reasons to expect the existential risk profile of a recovered civilization to be significantly better than for our current civilization? (E.g. maybe a bad experience with nuclear weapons would make the world much more aware of the dangers of technology)
I’m very uncertain about this. I do think there is a good case for interventions aimed at improving the existential risk profile of a post-disaster civilization being competitive with interventions aimed at improving the existential risk profile of our current civilization. The gist is that there is far less competition for the former interventions. Of course, given the huge uncertainties about both the circumstances of global catastrophes and the potential intervention points, it’s hard to say whether it would be possible to actually alter the post-disaster civilization’s profile at all. However, it’s also hard to say whether we can alter the current civilization’s profile at all, and it’s not obvious to me that this latter task is easier.
You say no to “Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?” and say “2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years.”
I feel like I’d understand what you mean much better if you were up for giving some probabilities here, even if there’s a range or they’re imprecise or unstable. There’s a really big range within “likely” and I’d like some sense of where you are on that range.
It is very helpful to see your reasoning and cruxes. I reply to the ALLFED-related issues above, but I thought I would reply to the pandemic issue here. Here is one mechanism that could result in greater than 90% mortality from a pandemic: multiple diseases at the same time, i.e. a multipandemic.
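As a purely illustrative calculation of how this could happen (assuming the diseases kill independently, which ignores interactions that could push mortality either way):

```latex
% Three simultaneous pandemics, each killing m_j = 55% of the population,
% with independent mortality:
1 - \prod_{j=1}^{3} (1 - m_j) = 1 - (0.45)^{3} \approx 91\%
```

So several diseases that are individually well short of 90% mortality could still push combined mortality above that threshold.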