One issue I feel the EA community has badly neglected is the probability, given various (including relatively modest) civilisational backslide scenarios, of us still being able to develop (and *actually* developing) the economies of scale needed to become an interstellar species.
To give a single example, a runaway Kessler effect could make putting anything in orbit basically impossible unless governments overcome the global tragedy of the commons and mount an extremely expensive mission to remove enough debris to regain effective orbital access—in a world where we’ve lost satellite technology and everything that depends on it.
The EA community so far seems to have treated 'humanity doesn't go extinct' in scenarios like this as equivalent to 'humanity reaches its interstellar potential', which seems very dangerous to me. Intuitively, it feels like there's at least a 1% chance that we would never solve such a problem in practice, even if civilisation lasted for millennia afterwards. If so, we should treat the scenario as (at least) 1/100th of an existential catastrophe, and a discount of a couple of orders of magnitude doesn't seem like that big a deal, especially if there are many more such scenarios than there are extinction-causing ones.
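For concreteness, here's a minimal sketch of the kind of arithmetic I have in mind, where every number is a made-up placeholder rather than an estimate:

```python
# Toy model: treat a non-extinction GCR as a partial existential catastrophe,
# weighted by the chance we never recover our interstellar potential afterwards.
# All probabilities below are placeholders, not estimates.

def existential_equivalent_risk(p_scenario, p_never_recover):
    """Expected loss of interstellar potential from one GCR scenario,
    expressed as a fraction of a full existential catastrophe."""
    return p_scenario * p_never_recover

# Hypothetical Kessler-style scenario: 1% chance it happens, 1% chance we
# never regain effective orbital access conditional on it happening.
print(existential_equivalent_risk(0.01, 0.01))  # 0.0001 "x-risk equivalents"

# Summed over many such backslide scenarios, the total can start to rival a
# single extinction-level risk despite the per-scenario discount.
scenarios = [(0.01, 0.01), (0.02, 0.05), (0.005, 0.2)]  # (p_scenario, p_never_recover)
print(sum(existential_equivalent_risk(p, q) for p, q in scenarios))
```

The point of the toy model is only that the per-scenario discount can be swamped by how many such scenarios there are.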
Do you have any thoughts on how to model this question in a generalisable way, such that it could give a heuristic for non-literal-extinction GCRs? Or do you think one would need to research specific GCRs to answer it for each of them?