Sheltering humanity against x-risk: report from the SHELTER weekend
Greetings everyone! I was one of the participants in the SHELTER weekend (4-8 Aug 22) organized by members of the EA community. I wrote a report on the outcomes of the discussion and intended to publish it here, although I once again needed some prodding (thanks :)) to finally make that happen. Thanks to everyone involved, all the commenters on the previous versions, and the community for the support that made all this possible!
The document is 47 pages with its appendix, so instead of flooding this forum, here’s the link to the Google document: anyone should be able to comment on it this way. I licensed my contributions with a CC-BY-NC license, so feel free to share this as widely as possible. If there are other outlets where you think this should be published, let me know, I’m very open to suggestions.
EDIT: In brief, the key takeaways are here (thanks Cillian for reminding me of this oversight):
KEY TAKEAWAYS
There is no escape hatch for humanity, nor for the rich. Shelters that can reliably protect even a small group of humans against catastrophes that would otherwise make humanity extinct are probably infeasible due to multiple technical, psychological, social, political, and economic issues. Constructing “escape hatches” for the few, particularly for the rich and the powerful, would probably increase the net catastrophic and existential risk, as any benefits gained would almost certainly be offset by incentive hazards and further erosion of the perception that we are all in this together.
Self-sufficient space colonies that could protect against existential risks require technologies and skills that, if they existed, could be used more cheaply and reliably to create self-sustaining shelters on Earth. This will likely remain the case in the foreseeable future.
Even if a small group manages to survive a planetary catastrophe, it is plausible only in some scenarios that their descendants could repair the damage caused by whatever catastrophe the global society failed to prevent, and reconstitute technological civilization.
Therefore, to save civilization, one needs to save society. The best lifeboat is the ship; the best shelter is a functional society. Increasing the resilience of societies and their capability for cooperative action would increase humanity’s resilience against events that could cascade into existential risks while having obvious benefits in less dire circumstances as well.
Even though popular discussion about shelters tends to revolve around bunkers and stockpiles, the importance of organizational efforts, e.g. maintenance, training, and preparedness, cannot be overstated. No amount of material preparations or technology will help in a crisis if they do not work due to lack of maintenance, or if humans do not know how to use them. On the other hand, organizations that train to respond to disruptions can improvise even if they lack materials.
The solutions to the shelter problem are not primarily technological. As far as I’m aware, no one has been able to identify any foreseeable technologies that would offer substantial improvements in societal resilience or otherwise provide a significant reduction in existential risk, although research into resilience-enhancing, “resilient by default” and “gracefully failing” technologies and practices should probably receive more funding than it currently does. However, even here the primary problem is not technical but economic: more resilient technologies and practices often exist already but they tend to be more expensive to buy or to use.
Longer-term research programs could nevertheless develop cost-effective ways to increase resilience against catastrophes and permit easier or faster recovery from a disaster. One obvious partner would be research into self-sustaining ecosystems for space colonization. A demonstration facility for the long-term feasibility of a closed-loop life support system would also double as a shelter, even if the small scale of such habitats and their likely reliance on the “iceberg” of external technical support raise serious questions about the contribution they could make to existential risk reduction.
The natural, accidental, or deliberate release of a dangerous pathogen (or pathogens) is widely seen as the threat with the most potential to precipitate an existential risk, although one should remember that the ongoing COVID-19 pandemic may bias this conclusion. A particularly worrisome prospect is the simultaneous, deliberate release of two or more pathogens, which could greatly confound efforts to detect and contain the outbreak.
The SHELTER meeting participants seemed to broadly agree that, with some exceptions, any single causal factor is unlikely to cause the extinction of humanity and is probably not sufficient to cause a catastrophic event. Instead, most existential risks and many catastrophic risks would probably be the result of several interacting mechanisms that e.g. prevent a timely response to a risk that in theory should be manageable. Breakdown of the societal capability to act is thus a major risk multiplier. Single-cause risks that threaten human extinction, such as a nearly omniscient AI god, are probably risks that shelters cannot realistically protect against.
Existing efforts in disaster management, particularly in countries with already robust civil defense/disaster response capability (Finland, Sweden, Switzerland etc.) could probably be augmented by relatively low-cost means to reduce the likelihood of major catastrophe(s) a) cascading to existential risks and/or b) leading to serious, irrecoverable loss of accumulated knowledge. Empirical validation of proposed means for improving resilience and the probability of recovery is necessary.
Two of the shelter strategies that seemed to gather the most support are a) hardening existing facilities identified as crucially important for reducing the likelihood of disasters cascading into catastrophes or existential risks, e.g. biomedical research and manufacturing facilities, and b) maintaining or even increasing the geographical and cultural diversity of humanity by supporting or even creating relatively isolated communities and helping them increase their resilience against biological threats in particular.
Maintaining human geographical and cultural diversity by supporting relatively isolated communities would be a no-regrets strategy that would increase resilience and provide tangible benefits to typically underserved communities today.
Any strategy that is adopted must gain buy-in from the people who are involved. Gaining acceptance from the people is particularly important when supporting isolated communities, most of whom have very good reasons to be extremely wary of outsiders trying to “help” them. A humble bottom-up approach that is guided by what the people themselves want and need is practically mandatory.
I’m also interested in further collaboration on this and other x-risk topics, so feel free to reach out directly. My e-mail can be found in the report.
As this is also my first post after a long time lurking around the EA community, a brief introduction may be in order. I’m Janne from Finland. I’m currently a postdoc researcher at the Lappeenranta University of Technology and am supposed to be working on a report detailing how the experiences of wartime industrial mobilization could help accelerate the response to climate and other crises.
In practice, much of my time right now is spent writing a treatise on societal power, its linkages to thermodynamic systems, and the implications this has for what kinds of societies can be sustainable over the very long term. (TL;DR: in my assessment, very unequal societies, i.e. those where power is distributed very unequally, are unlikely to survive the development of the technologies required for long-term survival, space colonization included.)
Doing what I can to increase the probability of “intelligent” life’s survival over the very long term has been my guiding passion since 2002. In the past, I’ve worked on e.g. sustainable design, energy systems research, climate and general sustainability issues, and technological substitution of scarce resources.
I also have some understanding of security policy and military matters, and some contacts in Finnish expert and political circles. As I briefly note in the report, the Finnish “comprehensive security” policy could offer some examples of how to “harden” communities and increase resilience against risks that could otherwise spiral into catastrophic or even existential ones. Finland’s system of preparedness and shelters could probably also be improved at a reasonable cost to increase the probability of e.g. knowledge retention and thus help reduce existential risks. If you have ideas about that, please let me know, and I’ll see if I can help you reach the right places.
Again, thanks to everyone involved, I hope the report is of some use to someone!
Suggestion: consider including a brief summary of the report in this forum post (e.g. the “key takeaways” section).
I’ve copied it below for ease of reference.
Thank you—should’ve considered that myself.
Janne—thanks very much for posting this, and welcome (as an active participant) to the EA Forum!
At first glance, the report looks very useful for EA folks to read, regarding strategies for increasing civilizational resilience and reducing X risk.
I wonder if Rob Wiblin might be interested in interviewing you about it for an 80,000 Hours podcast?