Thanks for the report.

If I were to add one thing to this report, it would probably be a comparison of increasing the likelihood of space settlement vs increasing the likelihood of extremely resilient and self-sustaining disaster shelters (e.g. shelters that could be self-sustaining for decades or possibly centuries). You note the similarities in “Design of disaster shelters”, but don’t compare these as possible interventions (as far as I can tell).
My naive (mostly uninformed) guess would have been that very good disaster shelters are wildly cheaper and easier (prior to radical technological change like superhuman AI or nanotech) while offering most of the same benefits.
(I put a low probability on commercially viable and self-sustaining space colonies prior to some other radical change in the technical landscape, but perhaps I’m missing some story for economic viability. For instance, I think the probability of these sorts of space colonies in the next 60 years is low, absent some other radical technical advancement like AI or nanotech happening first, in which case the value added is more complex.)
Hey Ryan, I think your scepticism is a widely held view among EAs, but IMO it overlooks some crucial factors in addition to the considerations Christopher mentions:
Focusing just on cost seems like a huge oversimplification. If Musk or someone can set up a space economy that gradually gives people real incentives to fly there and back, it’s not that hard to imagine the free market effectively covering this cost many times over. That doesn’t have to mean something like ‘people on Earth pay to transport materials from the surface of Mars to the surface of Earth’. If you have a bunch of individuals living on the surface for whatever reason (initially scientific research, say), they could make a living in any number of ways such that some of the value is exportable to Earth (low-gravity research or industry, xenobiology, web development, art, or whatever), and you can bet they’d be spending a very high proportion of that on urgently reducing the number of ways that their environment could kill them.
Feedback loops: we know that Earth has a terrible record of keeping important-to-isolate areas actually isolated. You could spend the GDP of a mid-sized nation on creating a network of theoretically isolated bunkers, but if one sleepy resident calls out for pizza, leaves a vent off or whatever, the entire thing could be compromised, at least to biological hazards. Space settlements, on the other hand, force isolation in a way bunkers don’t. Obviously mistakes can still be fatal, but unlike on Earth, settlers will get a fast and hard-to-ignore feedback loop if they’re doing something existentially risky. On an offworld colony, if you make a serious mistake, a bunch of people will probably die very soon afterwards. In an Earth bunker, if you make a serious mistake, everyone in the bunker might die at some indefinite point later, which is exactly what it was supposed to safeguard against.
Relatedly, there’s a lack of any serious proposal to build such disaster shelters on Earth. This is maybe just down to human motivation, or the ‘forcing function’ people like Robert Zubrin have described: put people on a remote frontier where their lives are in danger every day, and they’re going to be a lot more productive than people who vaguely think their project might matter someday.
Also relatedly, in-atmosphere biodefence seems hopeless. We seem quite close to it being terrifyingly easy to create extremely viral and extremely lethal pandemics and nowhere near being able to regulate the ecosystem to the degree that would defend against them.
Long-term value: a bunker system is ultimately way more limited as a backup mechanism. At its apotheosis, the residents might be able to help humanity leapfrog some of the way towards the modern era (which may not actually bypass that much of the existential risk, per my argument here and the subsequent post I’m still working on in that series). Whereas the apotheosis of offworld colonisation is basically our end goal—colonising the Virgo supercluster. And it would only take a century or two at most—and possibly only a few decades on optimistic timelines—to far surpass the best defence a bunker system could provide.
‘Self-sustainingness’ is more of a spectrum than a hard line. If some 100% lethal pandemic or other such event killed all humans on Earth, an offworld colony would still be able to repopulate it if they had the capacity for a couple of trips back and the wherewithal to outlast the catastrophe. It seems plausible that an offworld colony could start providing meaningful backup (without yet being fully self-sustaining) by 2050-2060.
If you also think AI timelines might be substantially longer than EAs typically think, or that AI could be less ‘extinction event’ and more ‘global catastrophe on the order of taking down the internet’ (which seems plausible to me; there are a lot of selection effects in this community for short-timeline doomers, and evidence of some quite strong groupthink), then it starts to look like a reasonable area for further consideration, especially given how little serious scrutiny it’s had among EAs to date.
Thanks for the comment.

I’d like to stress that it seems at least plausible to me that encouraging space colonization could be a worthwhile cause area. (I’m uncertain what is sufficiently neglected here, but it seems plausible that there are some key neglected areas.) I’d also like to stress that I haven’t thought about the area very much.
Accordingly, feel free to not engage with the rest of my comment.
A sufficient crux for me would be thinking that it’s doable to substantially affect the probability of commercially viable long-term space colonization[1] within the next 40 years. My main concern here would be that commercially viable long-term space colonization within 40 years is quite unlikely by default and thus hard to boost the absolute probability by much (it’s way down on the logistic success curve). My second biggest concern would be that this doesn’t really seem very neglected. (Though perhaps resources are sufficiently inefficiently allocated that there is a neglected subfield?)
To emphasize, this is a sufficient crux, but not the only sufficient crux.
I have >25% on AI taking longer than 30 years, so this isn’t that much of a discount on this view (but the potential for working on AI being better might be a substantial discount).
That is, space colonization happening prior to some other radical technological change. As in, encouraging space colonization after transformative AI or nanotech doesn’t seem that important for various reasons.
main concern here would be that commercially viable long-term space colonization within 40 years is quite unlikely by default and thus hard to boost the absolute probability by much
I agree this is an important question—I would like to see more research effort into it (if nothing else, that question seems neglected).
That said, I am strongly averse to EAs using ‘not neglected’ as evidence that some area isn’t worth supporting. There’s so much context to it, and it’s not even well defined (not neglected relative to overall amount of effort required, or in absolute terms?). At best it gives a first guess at tractability, and so is a heuristic for prioritising prioritisation projects; but for a movement that has been around for over a decade and sunk millions of work hours into cause prioritisation, it really should be an obsolete heuristic.
That said, I am strongly averse to EAs using ‘not neglected’ as evidence that some area isn’t worth supporting.
I strongly disagree that ‘not neglected’ isn’t good evidence. This is evidence depending only on pretty weak assumptions (returns which diminish faster than seeing more people working on a thing is evidence for the thing being good). For research-ish areas, I think you should probably have something like a log returns prior in which case this makes sense.
I’m somewhat sympathetic to “maybe someone should just do the serious prioritization research”, in which case we don’t need rough heuristics except for prioritising prioritisation projects. ([Low confidence] But in practice, I think high-level spot checking is necessary and often better than actual reports for longtermism IMO. For this, heuristics are pretty important. It’s just pretty easy to have a lot of information/understanding which allows for beating carefully done prioritization research in the longtermist space in many cases. Research can be informative, but often not for the bottom line; instead it helps answer various questions about fundamentals.)

See also Most problems fall within a 100x tractability range.
For the definition of ITN that I use (but maybe people use others?), tractability is basically separable from neglectedness:
Recall the importance-tractability-neglectedness (ITN) framework for estimating cost-effectiveness:
Importance = utility gained / % of problem solved
Tractability = % of problem solved / % increase in resources
Neglectedness = % increase in resources / extra $
(Quote from the same post I linked above.)
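Multiplying the three factors, the intermediate terms cancel and you’re left with utility gained per extra dollar, which is the point of defining them this way (my restatement of the quoted definitions, not wording from the post):

$$
\underbrace{\frac{\text{utility gained}}{\text{\% of problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra \$}}}_{\text{neglectedness}}
\;=\;
\frac{\text{utility gained}}{\text{extra \$}}
$$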
it’s not even well defined (not neglected relative to overall amount of effort required, or in absolute terms)
In absolute terms ideally (as in the definition above), though this might be somewhat poorly defined still.
At the end of the day, we just want to compute expected value with respect to various actions, but I still think that “are a bunch of people already trying to solve the problem, and are they approaching it in a reasonable way?” is a pretty good heuristic. (Clearly some research-ish fields end up having pretty linear returns, and we can usually predict this with some other simple heuristics.)
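To make the log-returns prior concrete, here’s a minimal numeric sketch (the constant and the field sizes are made-up illustrations, not estimates for any real field):

```python
import math

def value(n: float, k: float = 1.0) -> float:
    """Total value produced by n units of resources under a log-returns model."""
    return k * math.log(n)

# Marginal value of one extra unit of resources at different field sizes.
# Under this prior, a field with 10x the resources has roughly 10x lower
# marginal returns, so seeing lots of existing effort is (weak) evidence
# against the value of adding more at the margin.
for n in [10, 100, 1_000, 10_000]:
    marginal = value(n + 1) - value(n)
    print(f"n = {n:>6}: marginal value of one more unit ≈ {marginal:.5f}")
```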
I don’t have time to respond to this in as much depth as I’d like, but maybe it’s worth a few cursory remarks, since much of what you’ve said touches on my frustrations with the concept.
returns which diminish faster than seeing more people working on a thing is evidence for the thing being good
I’m not sure how to parse this.
pretty weak assumptions… For research-ish areas, I think you should probably have something like a log returns prior in which case this makes sense.
This seems like an extremely strong assumption to me, especially since it’s basically the assertion I’m contesting:
‘Research-ish’ is extremely underdefined and in practice not normally what we’re discussing. E.g. the matter at hand: space colonisation is a confluence of a huge number of areas, including public opinion, funding, engineering, political will, etc. In fact, there’s very little academic research that needs to be done to make it happen. I think this is normally the case when EAs dismiss something as ‘not neglected’ (cf. also climate change).
For technology-ish and public-opinion-ish areas (i.e. almost any practical issue), I think you should have something much more like an S-curve prior, which requires you to make more assumptions than a log returns prior, and to do more empirical research to justify them. S-curves also have a much more complicated relationship with expected marginal value.
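To illustrate the difference (a toy sketch; the midpoint and steepness numbers are arbitrary assumptions, not estimates for any real field): under log returns the marginal value of extra resources always falls as investment grows, whereas under a logistic curve it can be low both very early and very late, peaking somewhere in the middle.

```python
import math

def log_returns(x: float, k: float = 1.0) -> float:
    """Cumulative progress under a log-returns model of resources x."""
    return k * math.log(x)

def s_curve(x: float, midpoint: float = 100.0, steepness: float = 0.05) -> float:
    """Cumulative progress under a logistic (S-curve) model of resources x."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

# Marginal value of one extra unit of resources at different investment levels.
# Log returns: marginal value falls monotonically as the field grows.
# S-curve: marginal value is small both very early and very late, and peaks
# around the midpoint, so the existing level of effort tells you much less
# on its own about the value of the next marginal contributor.
for x in [10, 50, 100, 150, 300]:
    d_log = log_returns(x + 1) - log_returns(x)
    d_s = s_curve(x + 1) - s_curve(x)
    print(f"x = {x:>3}: log marginal ≈ {d_log:.4f}, S-curve marginal ≈ {d_s:.4f}")
```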
Some real world examples of problems and ambiguities with the concept that showcase these issues:
Climate change is often dismissed as ‘not neglected’, but at least some proportion of the work that many metrics would count as ‘climate change related’ is very ineffective or literally zero-sum (such as opposing vs advocating nuclear energy).
The EA movement began because Toby/Holden/Elie found a bunch of underutilised health-relevant research that was being ignored by a huge base of donors. If that work hadn’t been done, GWWC and GiveWell would have been much less compelling sales pitches, and the movement would have been able to boast of far fewer lives saved.
People often describe AI safety research as ‘neglected’, but this uses what strikes me as a very tribal definition of AI safety: most tech workers spend their lives trying to ‘align’ computer programs with the goal they have in mind, but EAs tend to dismiss this as irrelevant, implying that AI doom is highly likely if left to such people. I see no good reason for such a strong take, and if you give even some weight to the safety-relevance of such work, then AI safety becomes a far less neglected field, with ~25 million people (times whatever weighting you give them) working on it.
A line from the recent EA Netherlands newsletter: ‘We have the core of a social movement in the Netherlands, but we don’t have much by way of a research field or a set of organisations putting said research into practice. This means that, while we have many people in the community who want to use their time and their talents to do good, we lack the organisations that can provide the work. There are opportunities abroad, but not enough, and the fact they are abroad limits their capacity to absorb talent located in the Netherlands.’ Another way of looking at this would be ‘EA field building and organisational development in the Netherlands is hugely neglected, and therefore an extremely high-leverage opportunity for doing such things’. Yet they seem to have (IMO sensibly) taken the opposite view.
It’s just pretty easy to have a lot of information/understanding which allows for beating carefully done prioritization research in the longtermist space in many cases.
This seems like a very hard claim to justify, given both how conceptually difficult it is to measure outputs/value gains in the longtermist space and how few organisations in it are producing anything meaningfully measurable at all. I think many assumptions the space makes, going right back to the idea that longtermist work is better from a longtermist perspective than ‘short-termism’, are justified on flimsy, self-supporting heuristics.
For the definition of ITN that I use (but maybe people use others?),
I don’t believe that, in practice, (m)any people use that version. In order to get a neatly cancellable equation, it quietly replaces ‘tractability’ (a relatively intuitive concept that people can often do useful work on) with something like the elasticity of tractability, which I’ve never heard anyone opine on, directly or indirectly, in any other context.
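To spell out what I mean by ‘elasticity of tractability’ (my own notation, not the post’s): the quoted definition measures absolute progress per proportional increase in resources,

$$
\text{Tractability} \;=\; \frac{\text{\% of problem solved}}{\text{\% increase in resources}}
\;\approx\; \frac{\partial\,(\text{fraction of problem solved})}{\partial \ln(\text{resources})},
$$

which is the kind of quantity almost no one has direct intuitions about.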
Hi Ryan, thanks a lot for taking the time and leaving your thoughts, I appreciate it!
I agree that extremely resilient and self-sustaining disaster shelters might offer similar benefits from an X-risk perspective.
I didn’t go too deep into that topic because I feel there aren’t the same incentives (and excitement) around building shelters on Earth compared to starting settlements away from Earth. At least, I am unaware of designated X-risk shelters being built specifically to save humanity in the event of a catastrophe (exceptions might be the Svalbard Global Seed Vault, some military nuclear bunkers, or maybe nuclear submarines).

On the other hand, there are multiple efforts currently aimed at starting a space settlement, and the concept is much more embedded in popular awareness.

I think there is an inherent attractiveness to spreading “outwards” as opposed to going “inwards”. To put it romantically: there seems to be more potential for human development in the reaches of space than below the Earth’s surface.
HOWEVER, I totally agree with you that it would certainly be smart to prepare (x-risk) shelters on Earth for many reasons! Are you aware of any projects currently pursuing this?
Are you aware of any projects currently pursuing this?
People have certainly talked about this on the forum, but maybe people currently think it’s somewhat more cost-effective to work on other projects, given the current x-risk reduction funding?