It’s worth noting that ensuring recovery after a near-extinction event is less robust under moral uncertainty, and less cooperative given disagreements on population-ethical views, than just “prevent our still-functioning civilization from going extinct.” In particular, the latter (preventing extinction for a still-functioning civilization) is really good not just on a totalist view of aggregative consequentialism, but also for all existing people who don’t want to die, don’t want their relatives or friends or loved ones to die, and want civilization to go on so that their personal contributions continue to matter. All of that gets disrupted in a near-extinction collapse.
(There’s also an effect from considerations of “Which worlds get saved?” where, in a post-collapse scenario, you’ve updated that humans just aren’t very good at getting their shit together. All else equal, you should be less optimistic about our ability to pull off good things in the long-run future compared to in a world where we didn’t bring about a self-imposed civilizational collapse / near-extinction event.)
Therefore, one thing that would make the type of intervention you’re proposing more robust is to also focus on improving the quality of the future conditional on successful rebuilding. That is, if you have information or resources that would help a second-stage civilization do better than it otherwise would (at preventing particularly bad future outcomes), that would make the intervention more robustly positive.
There’s an argument to be made that extinction is rather unlikely in general, even with the massive population decreases you’re describing, and that rebuilding from a “higher base” is likely to lead to a wiser or otherwise morally better civilization than rebuilding from a lower base. (For instance, perhaps because more structures from the previous civilization are preserved, which makes it easier to “learn lessons” and to have an inspiring narrative about what mistakes to avoid.) That said, these things are hard to predict.[1]
Firstly, we can tell probable-sounding just-so stories where slower rebuilding leads to better outcomes. Secondly, there isn’t necessarily even a straightforward relationship between things like “civilizational wisdom” or “civilization’s ability to coordinate” and averting some of the worst possible outcomes of earth-originating space colonization (“s-risks”). In particular, sometimes it’s better to fail at some high-risk endeavor in a very stupid way rather than in a way that is “almost right.” It’s not obvious where on that spectrum a civilization would end up if you just make it a bit wiser and better-coordinated. You could argue that “being wiser is always better” because wisdom means people will want to pause, reflect, and make use of option value when they’re faced with an invention that has some chance of turning out to be a Pandora’s box. However, the ability to pause and reflect in turn requires being above a certain threshold on things like wisdom and ability to coordinate – otherwise there may be no “option value” in practice. (When it comes to evaluating whether a given intervention is robust, it concerns me that EAs have historically applied the “option value argument” without caveats to our present civilization, which seems quite distinctly below that threshold the way things are going – though one may hope that we’ll somehow be able to change that trajectory, which would give the basis for a more nuanced option-value argument.)
Paragraph 1:
Yeah, saving humanity from [near] extinction is my Plan A.
Paragraphs 2+3+4:
I don’t know how to change humanity’s direction.
Do you think this disqualifies the project?
Probably not, especially not in the sense that anyone wanting to implement a low-effort version of this project should feel discouraged. (“Low-effort versions” of this would mostly help make life for people in post-apocalyptic scenarios less scary and more easily survivable, which seems obviously valuable. Beyond that, insofar as you manage to preserve information, that seems likely positive despite the caveats I mentioned!)
Still, before people start high-effort versions of the idea that go more in the direction of “civilization re-starter kits” (like vast stores of items for building self-sufficient communities) or super bunkers, I’d personally like to see a more in-depth evaluation of these concerns.
For what it’s worth, improving the quality of a newly rebuilt civilization seems more important than making sure rebuilding happens at all, even according to the total view in population ethics (that’s my guess at least – though it depends on how totalists would view futures controlled by non-aligned AI). So investigating whether there are ways to focus especially on the wisdom and coordination abilities of a new civilization seems important from that perspective as well.