As far as I can tell, there is zero serious basis for going to other planets in order to save humanity; it’s an idea that stays alive merely because of science fiction fantasies and publicity statements from Elon Musk and the like. I’ve yet to see a likely catastrophic scenario where a human space colony would be useful that could not be much more easily protected against with infrastructure on Earth.
-Can it help prevent x-risk events? Nope, there’s nothing it can do for us except tourism and moon rocks.
-Is it good for keeping people safe against x-risks? Nope. In what scenario does having a lunar colony efficiently make humanity more resilient? If there’s an asteroid, go somewhere safe on Earth. If there’s cascading global warming, move to the Yukon. If there’s a nuclear war, go to a fallout shelter. If there’s a pandemic, build a biosphere.
-Can it bring people back to Earth after an extended period of isolation? Nope, the Moon has none of the resources required for sustaining a spacefaring civilization, except sunlight and water. Whatever resources you have will degrade with inefficiencies and damage. Your only hope is to just wait for however many years or millennia it takes for Earth to become habitable again and then jump back in a prepackaged spacecraft. But, as noted above, it’s vastly easier to just do this in a shelter on Earth.
-It’s physically impossible to terraform the Moon with conceivable technology, as it has month-long days, and far too little gravity to sustain an atmosphere.
-"But don’t we need to leave the planet EVENTUALLY?" Maybe, but if we have multiple centuries or millennia, then we should wait for better general technology and AI to make space travel easy, instead of funneling piles of money into it now.
I really fail to see the logic behind "Earth might become slightly less habitable in the future, so we need to go to an extremely isolated, totally barren wasteland that is absolutely inhospitable to all carbon-based life in order to survive." Whatever happens to Earth, it’s still not going to have 200-degree temperature swings, a totally sterile geology, cancerous space radiation, unhealthily minimal gravity, and a multibillion-dollar week-long commute.
I’m in favor of questioning the logic of people like Musk, because I think the mindset needed to be a successful entrepreneur is significantly different from the mindset needed to improve the far future in a way that minimizes the chance of backfire. I’m also not that optimistic about colonizing Mars as a cause area. But I think you are being overly pessimistic here:
The Great Filter is arguably the central fact of our existence. Either we represent an absurd stroke of luck, perhaps the only chance the universe will ever have to know itself, or we face virtually certain doom in the future. (Disregarding the simulation hypothesis and similar. Maybe dark matter is computronium and we are in a nature preserve. Does anyone know of other ways to break the Great Filter’s assumptions?)
Working on AI safety won’t plausibly help with the Great Filter. AI itself isn’t the filter. And if the filter is late, AI research won’t save us: a late filter implies that AI is hard. (Restated: an uncolonized galaxy suggests superintelligence has never been developed, which means civilizations fail before developing superintelligence. So if the filter is ahead, it will come before superintelligence. More thoughts of mine.)
So what could help if there’s a filter in front of us? The filter is likely non-obvious, because every species before us failed to get through. This decreases the promise of guarding against specific obvious scenarios like asteroid/global warming/nuclear war/pandemic. I have not looked into the less obvious scenarios, but a planetary colony could be useful for some, such as the conversion of regular matter into strange matter as described in this post. (Should the Great Filter caution us against performing innocuous-seeming physics experiments? Perhaps there is a trap in physics that makes up the filter. Physics experiments to facilitate space exploration could be especially deadly—see my technology tree point.)
Colonizing planets before it would be reasonable to do so looks like a decent project to me in the world where AI is hard and the filter is some random thing we can’t anticipate. These are both strong possibilities.
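For what it’s worth, the sheer strength the filter must have falls out of Drake-style arithmetic. A back-of-envelope sketch (the star count and the "we would see them by now" assumption are my own round numbers, not claims from this thread):

```python
# Rough bound on the Great Filter's total strength, assuming:
#   - ~1e11 stars in the Milky Way (round number)
#   - any expansionist civilization in the galaxy would be visible to us by now
#   - we have observed zero such civilizations
n_stars = 1e11

# Seeing no expansionist civilization around any of ~1e11 stars suggests
# the per-star probability of producing one is at most about 1/n_stars.
p_star_to_expansion = 1.0 / n_stars

print(f"P(star -> expansionist civilization) <~ {p_star_to_expansion:.0e}")
```

Whatever the filter is, it has to suppress this probability by roughly eleven orders of magnitude; the argument above is about where along a civilization’s development that suppression sits.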
Random trippy thought: Species probably vary on psychological measures like willingness to investigate and act on non-obvious filter candidates. If we’re bad at creative thinking relative to a hypothetical alien species, then betting on spotting the filter creatively is probably a failing strategy for us. If we think the filter is ahead of us, we should go into novel-writing mode and ask: what might be unique about our situation as a species that could somehow allow us to squeak past the filter? Then we can try to play to that strength.
We could study animal behavior to answer questions of this sort. A quick example: Bonobos are quite intelligent, and they may be much more benevolent than humans. If the filter is late, bonobo-type civilizations have probably been filtered many times. This suggests that working to make humanity more bonobo-like and cooperative will not help with a late filter. (On the other hand, I think I read that humans have an unusual lack of genetic diversity relative to the average species, due to a relatively recent near-extinction event… so perhaps this adds up to a significant advantage in intraspecies cooperation overall?)
BTW, there’s more discussion of this thread on Facebook and Arbital.
If the filter is ahead of us, then it’s likely to be the sort of thing which civilizations don’t ordinarily protect against. Humans seem to really like the idea of going to space. It’s a common extension of basic drives for civilizations to expand, explore, conquer, discover, etc.
This decreases the promise of guarding against specific obvious scenarios like asteroid/global warming/nuclear war/pandemic.
Civilizations can be predictably bad at responding to such scenarios, which are often coordination problems or other kinds of dilemmas, so I think it’s still very likely that such scenarios are filters.
I have not looked into the less obvious scenarios, but a planetary colony could be useful for some, such as the conversion of regular matter into strange matter as described in this post. (Should the Great Filter caution us against performing innocuous-seeming physics experiments? Perhaps there is a trap in physics that makes up the filter.)
Civilizations seem to have strong drives to explore other planets anyway. So even if these kinds of possibilities are really neglected, I think it’s unlikely that they are filters, unless they only occur to pre-expansion civilizations, in which case our plans for colonizing other planets can’t be implemented soon enough to remove the risk.
It’s hubris to think that you need to have modeled the risk for it to be able to kill you. Must also invest in heuristic robustness measures.
Yes, but going to another planet is so useless against known x-risks that it doesn’t even work as a heuristic. Allocating government funding to almost any other area would be just as good along general civilization-robustness lines.
It’s bad if evaluated in the reference class of “things that work for known x-risks”. But heuristics should be applied at various levels of abstraction. It looks great on robustness, resilience, and redundancy grounds—i.e. in the reference class of “things that stop things from dying”. Or if you look at all of human civilization in the reference class of species, or in the reference class of civilizations.
When not looking at specific risks, I still don’t see how it works well on generic robustness/resiliency/redundancy grounds compared to other things. Better healthcare, more education, less military conflict… tons of things seem to be equally good if not better along those lines when it comes to improving the overall strength of the human race.
They may be good for improving the overall strength of the human race, but to say that this improves robustness and resiliency is a non sequitur.
The idea (see e.g. here, here just to take my top two google results) is to work on modularity, backups, decentralization, adaptivity, et cetera. Things like healthcare and education are centralized and don’t adapt.
I know you said “I don’t see how...” but in order to see how, probably the best thing is to read around the topic, and likewise for other puzzled readers.
These are sufficiently generic criteria that all kinds of systems can improve them. Healthcare, for instance: build more advanced healthcare centers in more areas of the world. This gives any segment of the population more redundancy and resiliency in performing healthcare-related functions. Same goes for education: provide more educational programs so that they are redundant and resilient to anything that happens to other educational programs, and provide varied methods of education. If you take an old-fashioned geopolitical look at the world, then sure, being on another planet makes you really robust; but if we’re protecting against unknown unknowns, then you can’t assume that far-away-and-in-space is a more valuable direction to go in, out of all the other directions you could go for improving resilience and redundancy.
Making healthcare centers more advanced would prima facie reduce the resiliency of healthcare systems by making them more complex and brittle. One would have to argue for more specific changes.
You don’t need to resort to a geopolitical stance to want to be on another planet. Physical separation and duplication is useful for redundancy of basically everything. Any reasonable reference class makes this look good.
For the last two layers of nested comments you have not actually addressed my arguments, which can be seen if you look carefully over them, nor have you given any impression of really engaging seriously with the issue, so this is my final comment for the thread.
Making healthcare centers more advanced would prima facie reduce the resiliency of healthcare systems by making them more complex and brittle.
I said to build more healthcare centers. The more healthcare centers you have, the more redundancy you have. If you add very advanced healthcare centers or very basic ones without removing existing ones, then you have the option of providing more and different types of healthcare. This provides adaptiveness and redundancy in the form of different types of healthcare provision. If you add more healthcare professionals, you are achieving redundancy and adaptiveness by adding new talent and new ways of thinking to the field. And so on.
The whole redundancy-adaptiveness-etc stance is perfectly useful when you have some idea of what the risk actually is. If you really want to protect against “unknown unknowns” then you have no reason to think that the problem with humanity is going to be that we’re all on the same planet, as opposed to the problem being that we don’t have enough hospitals or didn’t learn how to cure cancer or something of the sort.
You don’t need to resort to a geopolitical stance to want to be on another planet. Physical separation and duplication is useful for redundancy of basically everything.
A colony on another planet is not some sort of parallel civilization that can support and replace the critical functions of the Earth-based one the way a backup electric power generator does. You can’t use facilities and resources on Mars or the Moon to prop up a failing system on Earth, or vice versa, without extremely high costs and time delays. The combined Earth-plus-space-colony civilization, within the current technological horizon, isn’t an integrated, resilient, adaptive system where the strengths of one area can rapidly support the other. Even if the extraterrestrial colony were self-sustaining, there would essentially be two independent systems with their own possible failure modes, which is worse than systems which can flexibly support each other.
Physical separation can be taken in a bunch of different ways. Maybe the next x-risk will be best mitigated by minimizing the number of people who are within a five-meter radius of one another. Maybe the next x-risk will be best mitigated by increasing the number of people who are more than 25 meters beneath the surface of the planet. Maybe it will be mitigated by evening out the distribution of people across the planet, to be less concentrated in cities and more distributed across the countryside or oceans.
In any case, protecting Earth’s civilization has a much higher payoff than protecting a small civilization on an extraterrestrial body.
For the last two layers of nested comments you have not actually addressed my arguments, which can be seen if you look carefully over them, nor have you given any impression of really engaging seriously with the issue, so this is my final comment for the thread.
Hmm, well that’s puzzling to me, because it looks like I answered them pretty directly.
Nope, the Moon has none of the resources required for sustaining a spacefaring civilization, except sunlight and water.
Well, this might be a bit of an overstatement—we don’t really have a good idea of what’s up there. There is good evidence for titanium, and there may be platinum-group metals up there. Who knows what else?
The Moon, Mars, or colonies inside hollowed-out asteroids certainly don’t make sense as x-risk mitigation in the near or medium term, but at some point they’re going to be necessary.
Is it good for keeping people safe against x-risks? Nope. In what scenario does having a lunar colony efficiently make humanity more resilient? If there’s an asteroid, go somewhere safe on Earth...
There are no known Earth-crossing minor planets large enough that a shelter on the other side of the world would be destroyed. All of them are approximately the size of the dinosaur-killer asteroid or smaller. We’ve surveyed most of the large ones, and there are no foreseeable impact risks from them.
Large asteroids are easier to detect from a long distance. A very large asteroid would have to come in on some unknown, unexpected orbit to have escaped detection. That probably means a comet-like orbit, which for a large asteroid is vanishingly unusual.
I really don’t know how big it would have to be to destroy a solid underground or underwater structure. Maybe around the size of the Vredefort asteroid, if not larger. But we haven’t had such an impact since the end of the Late Heavy Bombardment roughly 3.8 billion years ago, when these objects were cleared from Earth’s orbital neighborhood.
The big threat is from comets, because we have not tracked the vast majority of them. There is evidence of periodicity in bombardment that correlates with perturbations of the Oort Cloud (see the book Global Catastrophic Risks). Burned-out comets can be very dark, and we would have little warning.
Around 100 km diameter would boil the oceans. It is possible that a bunker in Antarctica that can handle hundreds of atmospheres of pressure (due to the oceans being above us in vapor form) could work. But it would have to last for something like 1000 years. Or we would have to stay on Mars for 1000 years.
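The "boil the oceans" threshold can be sanity-checked with back-of-envelope physics. A sketch, where the impactor density, the velocity, and the ocean heat figures are my own assumed round values:

```python
import math

# Assumed parameters (round numbers, not from the thread):
diameter_m = 100e3          # 100 km impactor
density = 1000.0            # kg/m^3, icy comet
velocity = 50e3             # m/s, typical of a long-period comet

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius**3
kinetic_energy = 0.5 * mass * velocity**2            # ~6.5e26 J

# Energy to heat the oceans from ~15 C to boiling and then vaporize them:
ocean_mass = 1.4e21                                  # kg
c_water, latent = 4186.0, 2.26e6                     # J/(kg K), J/kg
boil_energy = ocean_mass * (c_water * 85 + latent)   # ~3.7e27 J

print(f"impact energy:            {kinetic_energy:.1e} J")
print(f"full ocean vaporization:  {boil_energy:.1e} J")
print(f"ratio:                    {kinetic_energy / boil_energy:.2f}")
```

Under these assumptions, a 100 km comet carries roughly a fifth of the energy needed to fully vaporize the oceans, which puts the claim in the right order of magnitude, though complete vaporization probably takes a somewhat larger body.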
Nope, the Moon has none of the resources required for sustaining a spacefaring civilization, except sunlight and water. Whatever resources you have will degrade with inefficiencies and damage. Your only hope is to just wait for however many years or millennia it takes for Earth to become habitable again and then jump back in a prepackaged spacecraft. But, as noted above, it’s vastly easier to just do this in a shelter on Earth.
You are forgetting the rocks, including the metals and so forth that we know to be present there (and on the asteroids, which are an even more serious target). Lunar dirt and rock is about 10% aluminum, just like Earth dirt and rock, and just like stony asteroids. Oxygen is the most abundant element in them, followed by silicon. Iron is also present in small (but inexpensively, magnetically collectible) amounts in lunar regolith, due to impacts from metallic asteroids.
The problem with Earth is that as long as we stay here, we tend to develop only technologies optimized for this environment—which is small, crowded, and vulnerable. If you develop technologies for the Moon, that same approach will tend to work almost anywhere in the universe. You wouldn’t stay Moon-only for long.
We do have approaches that could be used, but they aren’t mature because we don’t have a need, thanks to plentiful water and water-based geology. For example, we have long known that you can convert any substance to plasma by raising its temperature to around 10,000 K, and the dissociated ions can be separated by mass-to-charge ratio (a la mass spectrometry). Efficiency in such a system would be tricky, but isn’t necessarily insoluble (it might require very large scale, for example). Energy efficiency itself is also somewhat less relevant given the abundance of sunlight.
The big issue with dragging our feet on space is more to do with astronomical waste than x-risk in my opinion. Every day we wait to build the first self replicating robotic space factory is another huge loss in terms of expected utility from economic growth. The chance of an asteroid impact probably isn’t high enough to matter by comparison with the missed gains from translating even a fraction of 1% of the solar output into meaningful economic activity.
I’m not sure expanding into space necessarily (in the “all else equal” sense) reduces x-risk, since space warfare has the capacity to be pretty brutal (e.g., kinetic impact weapons), and the increased computational resources granted by a mature self-replicating space industrial capacity could lead to earlier brute-forcing of AGI. It’s probably important to control who has access to space for it to actually reduce x-risk (just like any other form of great power, really). You would certainly eliminate some x-risks entirely, though: natural asteroid impact, a virus that wipes out humanity, global warming caused by reliance on carbon-based fuels, a nearby supernova, etc.
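The scale of that "fraction of 1%" is easy to underestimate. A round-number comparison (the world power consumption figure is my own assumption, not from the thread):

```python
# Round-number comparison of "1% of solar output" with today's economy.
solar_luminosity = 3.8e26   # W, total power output of the Sun
captured_fraction = 0.01    # the "1%" mentioned above
world_power_use = 2e13      # W, rough current human primary power consumption

captured = solar_luminosity * captured_fraction
multiple = captured / world_power_use

print(f"1% of solar output: {captured:.1e} W")
print(f"= {multiple:.0e} times current civilization's power use")
```

Even one percent of solar output is roughly eleven orders of magnitude beyond everything human civilization runs on today, which is what makes the astronomical-waste framing bite.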
The big issue with dragging our feet on space is more to do with astronomical waste than x-risk in my opinion.
In that case you should invest directly in base technologies. The private sector will find the most profitable uses for them, and usually there are more profitable applications for technology than space. Everyone loves to talk about all the new technologies which came out of the U.S. space program, but imagine how much more we would have gotten had we invested the same amount of money directly into medical technology, material science, and orange-flavored powdered drink mix.
Every day we wait to build the first self replicating robotic space factory is another huge loss in terms of expected utility from economic growth.
The technologies required for that are beyond our current abilities. We can’t even do self-replication on Earth. We may as well start with the fundamental nanoengineering and artificial intelligence domains. We don’t know how space tech and missions will evolve, so if we try to build applied technology for current missions, much of the effort will be poorly targeted and less useful. It’s already clear that more serious basic problems in materials science, AI, and other domains must be overcome for space exploration to provide positive returns, and those are the fields which both the private sector and the government are less interested in supporting (due to long time horizons and risky profits for the private sector, and lack of politically sellable ‘results’ for the government).
In that case you should invest directly in base technologies. The private sector will find the most profitable uses for them, and usually there are more profitable applications for technology than space. Everyone loves to talk about all the new technologies which came out of the U.S. space program, but imagine how much more we would have gotten had we invested the same amount of money directly into medical technology, material science, and orange-flavored powdered drink mix.
I’m with you on the spinoffs argument; however, we’re concerned with technologies of specific usefulness in space, to tap space resources. What is the profitable application of a zero-gravity refinery for turning heterogeneous rocks into aluminum? Assume the process is (at small scale) around 5% as energy-efficient as electrolyzing bauxite, and requires a high vacuum. Chances are such a thing could be worth something in a world without cheaper ways to get aluminum, assuming you could work around the gravity difference. Not so much in a world with abundant bauxite, gravity, and an atmosphere. So there is little incentive to develop in that direction unless you are actually planning to use it in space, where it would be highly useful (because aluminum is so useful in the service of energy collection in space that 5% energy efficiency actually wouldn’t slow growth by much).
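To put that 5% figure in context, here is a rough sketch. The Hall-Héroult energy figure is from memory and the collector size is arbitrary, so treat the numbers as illustrative:

```python
# Assumed figures (from memory / arbitrary, not from the thread):
kwh_per_kg_terrestrial = 14          # Hall-Heroult electrolysis, kWh per kg Al
relative_efficiency = 0.05           # the 5% figure above
solar_flux = 1361.0                  # W/m^2 at 1 AU
array_area = 1000.0                  # m^2, arbitrary collector size

# Energy cost per kg of aluminum in space, at 5% of terrestrial efficiency:
j_per_kg_space = kwh_per_kg_terrestrial * 3.6e6 / relative_efficiency  # ~1e9 J/kg

# Throughput of a modest solar collector running continuously:
power = solar_flux * array_area                                        # ~1.4 MW
kg_per_day = power * 86400 / j_per_kg_space

print(f"energy cost in space: {j_per_kg_space:.1e} J/kg")
print(f"output of a {array_area:.0f} m^2 array: {kg_per_day:.0f} kg Al/day")
```

With continuous free sunlight, even a 20x energy penalty still yields useful throughput from a modest collector, which is the commenter’s point about efficiency mattering less in space.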
The technologies required for that are beyond our current abilities. We can’t even do self-replication on Earth. We may as well start with the fundamental nanoengineering and artificial intelligence domains. We don’t know how space tech and missions will evolve, so if we try to build applied technology for current missions, much of the effort will be poorly targeted and less useful. It’s already clear that more serious basic problems in materials science, AI, and other domains must be overcome for space exploration to provide positive returns, and those are the fields which both the private sector and the government are less interested in supporting (due to long time horizons and risky profits for the private sector, and lack of politically sellable ‘results’ for the government).
We do already facilitate the replication of machinery, with the aid of human labor. An orbital factory wouldn’t add much signal delay relative to the latency of the human nervous system, so the minimal requirement for a fully self-replicating space swarm seems to be telerobotics that mimics the human hand well enough to perform maintenance and assembly tasks. No new advances in nanoengineering or artificial intelligence are needed. However, until such a system replicated itself enough to return results, it would be a monetary sink rather than a source of profit. It would become profitable at some point, because the cheap on-site energy, ready-made vacuum, zero gravity, absence of atmosphere and weather, reduction of rent and crowding issues thanks to 3D construction, and enhanced transport logistics between factories (again thanks to vacuum and zero gravity) would all contribute to making it more efficient.
As far as I can tell there is zero serious basis for going to other planets in order to save humanity and it’s an idea which stays alive merely because of science fiction fantasies and publicity statements from Elon Musk and the like. I’ve yet to see a likely catastrophic scenario where having a human space colony would be useful that would not be much more easily protected against with infrastructure on Earth.
-Can it help prevent x-risk events? Nope, there’s nothing it can do for us except tourism and moon rocks.
-Is it good for keeping people safe against x-risks? Nope. In what scenario does having a lunar colony efficiently make humanity more resilient? If there’s an asteroid, go somewhere safe on Earth. If there’s cascading global warming, move to the Yukon. If there’s a nuclear war, go to a fallout shelter. If there’s a pandemic, build a biosphere.
-Can it bring people back to Earth after an extended period of isolation? Nope, the Moon has none of the resources required for sustaining a spacefaring civilization, except sunlight and water. Whatever resources you have will degrade with inefficiencies and damage. Your only hope is to just wait for however many years or millennia it takes for Earth to become habitable again and then jump back in a prepackaged spacecraft. But, as noted above, it’s vastly easier to just do this in a shelter on Earth.
-It’s physically impossible to terraform the Moon with conceivable technology, as it has month-long days, and far too little gravity to sustain an atmosphere.
-”But don’t we need to leave the planet EVENTUALLY?” Maybe, but if we have multiple centuries or millennia then you should wait for better general technology and AI to be developed to make space travel easy, instead of funneling piles of money into it now.
I really fail to see the logic behind “Earth might become slightly less habitable in the future, so we need to go to an extremely isolated, totally barren wasteland that is absolutely inhospitable to all carbon-based life in order to survive.” Whatever happens to Earth, it’s still not going to have 200 degree temperature swings, a totally sterile geology, cancerous space radiation, unhealthy minimal gravity and a multibillion dollar week-long commute.
I’m in favor of questioning the logic of people like Musk, because I think the mindset needed to be a successful entrepreneur is significantly different than the mindset needed to improve the far future in a way that minimizes the chance of backfire. I’m also not that optimistic about colonizing Mars as a cause area. But I think you are being overly pessimistic here:
The Great Filter is arguably the central fact of our existence. Either we represent an absurd stroke of luck, perhaps the only chance the universe will ever have to know itself, or we face virtually certain doom in the future. (Disregarding the simulation hypothesis and similar. Maybe dark matter is computronium and we are in a nature preserve. Does anyone know of other ways to break the Great Filter’s assumptions?)
Working on AI safety won’t plausibly help with the Great Filter. AI itself isn’t the filter. And if the filter is late, AI research won’t save us: a late filter implies that AI is hard. (Restated: an uncolonized galaxy suggests superintelligence has never been developed, which means civilizations fail before developing superintelligence. So if the filter is ahead, it will come before superintelligence. More thoughts of mine.)
So what could help if there’s a filter in front of us? The filter is likely non-obvious, because every species before us failed to get through. This decreases the promise of guarding against specific obvious scenarios like asteroid/global warming/nuclear war/pandemic. I have not looked into the less obvious scenarios, but a planetary colony could be useful for some, such as the conversion of regular matter into strange matter as described in this post. (Should the Great Filter caution us against performing innocuous-seeming physics experiments? Perhaps there is a trap in physics that makes up the filter. Physics experiments to facilitate space exploration could be especially deadly—see my technology tree point.)
Colonizing planets before it would be reasonable to do so looks like a decent project to me in the world where AI is hard and the filter is some random thing we can’t anticipate. These are both strong possibilities.
Random trippy thought: Species probably vary on psychological measures like willingness to investigate and act on non-obvious filter candidates. If we think we’re bad at creative thinking relative to a hypothetical alien species, it’s probably a failing strategy for us. If we think the filter is ahead of us, we should go into novel-writing mode and think: what might be unique about our situation as a species that will somehow allow us to squeak past the filter? Then we can try to play to that strength.
We could study animal behavior to answer questions of this sort. A quick example: Bonobos are quite intelligent, and they may be much more benevolent than humans. If the filter is late, bonobo-type civilizations have probably been filtered many times. This suggests that working to make humanity more bonobo-like and cooperative will not help with a late filter. (On the other hand, I think I read that humans have an unusual lack of genetic diversity relative to the average species, due a relatively recent near-extinction event… so perhaps this adds up to a significant advantage in intraspecies cooperation overall?)
BTW, there’s more discussion of this thread on Facebook and Arbital.
If the filter is ahead of us, then it’s likely to be the sort of thing which civilizations don’t ordinarily protect against. Humans seem to really like the idea of going to space. It’s a common extension of basic drives for civilizations to expand, explore, conquer, discover, etc.
Civilizations can be predictably bad at responding to such scenarios, which are often coordination problems or other kinds of dilemmas, so I think it’s still very likely that such scenarios are filters.
Civilizations seem to have strong drives to explore other planets anyway. So even if these kinds of possibilities are really neglected, I think it’s unlikely that they are filters, unless they only occur to pre-expansion civilizations, in which case our plans for colonizing other planets can’t be implemented soon enough to remove the risk.
It’s hubris to think that you need to have modeled the risk for it to be able to kill you. Must also invest in heuristic robustness measures.
Yes but going to another planet is so useless to known x-risks that it doesn’t even work as a heuristic. Allocating government funding towards any other area would be just as good along general civilization-robustness lines.
It’s bad if evaluated in the reference class of “things that work for known x-risks”. But heuristics should be used from various levels of abstractions. It looks great on robustness, resilience, redundancy grounds—i.e. in the reference class of “things that stop things from dying”. Or if you look at all of human civilization in the reference class of species, or in the reference class of civilizations.
When not looking at specific risks, I still don’t see how it works well in generic robustness/resiliency/redundancy grounds compared to other things. Better healthcare, more education, less military conflict… tons of things seem to be equally good if not better along those lines, when it comes to improving the overall strength of the human race.
They may be good for improving the overall strength of the human race but to say that improves the robustness and resiliency is a non sequitur.
The idea (see e.g. here, here just to take my top two google results) is to work on modularity, back-ups, and decentralized, adaptivity, et cetera. Things like healthcare and education are centralized and don’t adapt.
I know you said “I don’t see how...” but in order to see how, probably the best thing is to read around the topic, and likewise for other puzzled readers.
These are sufficiently generic criteria that all kinds of systems can improve them. Healthcare, for instance: build more advanced healthcare centers in more areas of the world. This will give any segment of the population more redundancy and resiliency when performing healthcare related functions. Same goes with education: provide more educational programs so that they are redundant and resilient to anything that happens to other educational programs and provide varied methods of education. If you take an old-fashioned geopolitical look at the world then sure it seems like being on another planet makes you really robust, but if we’re protecting against Unknown Unknowns then you can’t assume that far-away-and-in-space is a more valuable direction to go in, out of all the other directions that you can go for improving resilience and redundancy.
Making healthcare centers more advanced would prima facie reduce the resiliency of healthcare systems by making them more complex and brittle. One would have to argue for more specific changes.
You don’t need to resort to a geopolitical stance to want to be on another planet. Physical separation and duplication is useful for redundancy of basically everything. Any reasonable reference class makes this look good.
For the last two layers of nested comments you have not actually addressed my arguments, as you can see if you look carefully over them, nor have you given the impression of engaging seriously with the issue, so this is my final comment for the thread.
I said to build more healthcare centers. The more healthcare centers you have, the more redundancy you have. If you add very advanced healthcare centers or very basic ones without removing existing ones, then you have the option of providing more and different types of healthcare. This provides adaptiveness and redundancy in the form of different types of healthcare provision. If you add more healthcare professionals, you are achieving redundancy and adaptiveness by adding new talent and new ways of thinking to the field. And so on.
The whole redundancy-adaptiveness-etc stance is perfectly useful when you have some idea of what the risk actually is. If you really want to protect against “unknown unknowns” then you have no reason to think that the problem with humanity is going to be that we’re all on the same planet, as opposed to the problem being that we don’t have enough hospitals or didn’t learn how to cure cancer or something of the sort.
A colony on another planet is not some sort of parallel civilization that can support and replace the critical functions of the Earth-based one the way a backup generator does. You can’t use facilities and resources on Mars or the Moon to prop up a failing system on Earth, or vice versa, without extremely high costs and time delays. The combined Earth-plus-space-colony civilization within the current technological horizon isn’t an integrated, resilient, adaptive system where the strengths of one area can rapidly support the other. Even if the extraterrestrial colony were self-sustaining, there would essentially be two independent systems with their own possible failure modes, which is worse than having systems that can flexibly support each other.
Physical separation can be taken in a bunch of different ways. Maybe the next x-risk will be best mitigated by minimizing the number of people who are within a five meter radius of another. Maybe the next x-risk will be best mitigated by increasing the number of people who are more than 25 meters beneath the surface of the planet. Maybe it will be mitigated by evening the distribution of people across the planet, to be less focused in cities and more distributed across the countryside or oceans.
In any case, protecting Earth’s civilization has a much higher payoff than protecting a small civilization on an extraterrestrial body.
Hmm, well that’s puzzling to me, because it looks like I answered them pretty directly.
Well, this might be a bit of an overstatement: we don’t really have a good idea of what’s up there. There is good evidence for titanium, and there may be platinum-group metals. Who knows what else?
The Moon, Mars, or colonies inside hollowed-out asteroids certainly don’t make sense as x-risk mitigation in the near or medium term, but at some point they’re going to be necessary.
What if it’s a big asteroid?
There are no known Earth-crossing minor planets large enough that a shelter on the other side of the world would be destroyed; all of them are approximately the size of the dinosaur-killer asteroid or smaller. We’ve surveyed the large ones, and there are no foreseeable impact risks from them.
Large asteroids are easier to detect from a long distance, so a very large asteroid would have to come in from some previously unknown, unexpected orbit to remain undetected. That probably means a comet-like orbit, which for a large asteroid is extremely unusual.
I really don’t know how big it would have to be to destroy a solid underground or underwater structure. Maybe around the size of the Vredefort impactor, if not larger. But we haven’t had an impact of that scale since Vredefort itself, around two billion years ago; most objects that large were cleared from Earth-crossing orbits by the end of the late heavy bombardment, over three billion years ago.
The big threat is from comets, because we have not tracked the vast majority of them. There is evidence of periodicity in bombardment that correlates with perturbations of the Oort Cloud of comets (see the book Global Catastrophic Risks). Burned-out comets can be very dark, and we would have little warning.
If it’s so big no bunkers work, how long would we have to wait on Mars before coming back?
Around 100 km in diameter would boil the oceans. It is possible that a bunker in Antarctica that could handle hundreds of atmospheres of pressure (from the oceans sitting above it in vapor form) would work. But it would have to last for something like 1000 years, or we would have to stay on Mars for 1000 years.
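The 100 km figure can be sanity-checked with round numbers. Everything below is a back-of-envelope assumption on my part (rocky density ~3000 kg/m³, a 20 km/s impact, ocean mass ~1.4e21 kg), not a claim from the thread:

```python
import math

# Hedged back-of-envelope: kinetic energy of a 100 km rocky impactor
# versus the energy needed to heat and vaporize Earth's oceans.
radius_m = 50e3                      # 100 km diameter
density = 3000                       # kg/m^3, typical rocky asteroid (assumed)
velocity = 20e3                      # m/s, typical impact speed (assumed)

mass = density * (4 / 3) * math.pi * radius_m ** 3
kinetic_energy = 0.5 * mass * velocity ** 2      # joules

ocean_mass = 1.4e21                  # kg, total ocean water
# Heat ~15 C water to 100 C, then supply the latent heat of vaporization.
energy_per_kg = 85 * 4186 + 2.26e6   # J/kg
ocean_vaporization = ocean_mass * energy_per_kg

print(f"impact energy:      {kinetic_energy:.1e} J")
print(f"ocean vaporization: {ocean_vaporization:.1e} J")
print(f"fraction of oceans: {kinetic_energy / ocean_vaporization:.2f}")
```

With these numbers the impact delivers a few times 10^26 J, enough to vaporize roughly a tenth of the oceans even if all the energy went into the water, so "100 km boils the oceans" is best read as "sterilizes the surface and boils off the upper ocean" rather than boiling them dry.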
You are forgetting the rocks, including metals and so forth, that we know to be present there (and on the asteroids, which are an even more serious target). Lunar dirt and rock is about 10% aluminum, just like Earth dirt and rock, and just like stony asteroids. Oxygen is the most abundant element in them, followed by silicon. Iron is also present in small (but inexpensively, magnetically collectible) amounts in lunar regolith due to meteorite impacts from metallic asteroids.
The problem with Earth is that as long as we stay here, we tend to develop only technologies optimized for this environment, which is small, crowded, and vulnerable. If you develop technologies for the Moon, that same approach will tend to work almost anywhere in the universe. You wouldn’t stay Moon-only for long.
We do have approaches that could be used, but they aren’t mature because we don’t have a need, thanks to plentiful water and water-based geology. For example, we have long known that you can convert any substance to plasma by raising its temperature to 10,000 K, and the dissociated ions can be separated by mass-to-charge ratio (a la mass spectrometry). Efficiency in such a system would be tricky, but it isn’t necessarily insoluble (it might require very large scale, for example). Energy efficiency itself is also somewhat less relevant given the abundance of sunlight.
The big issue with dragging our feet on space is more to do with astronomical waste than x-risk, in my opinion. Every day we wait to build the first self-replicating robotic space factory is another huge loss in expected utility from economic growth. The chance of an asteroid impact probably isn’t high enough to rate by comparison to the missed gains of even a fraction of 1% of the solar output translated into meaningful economic activity.
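To put "a fraction of 1% of the solar output" in scale (round numbers of my own, not from the comment: solar luminosity ~3.8e26 W, world primary energy use ~18 TW):

```python
# Rough scale comparison: a fraction of total solar output versus
# current world primary energy consumption. All figures are round
# assumptions for illustration.
solar_luminosity = 3.8e26        # W, total solar output
captured_fraction = 0.001        # "a fraction of 1%": here, 0.1%
captured_power = solar_luminosity * captured_fraction

world_power = 1.8e13             # W, ~18 TW world primary energy use

ratio = captured_power / world_power
print(f"0.1% of solar output: {captured_power:.1e} W")
print(f"ratio to current world energy use: {ratio:.1e}")
```

Even a tenth of a percent of solar output is some ten billion times our present energy budget, which is the core of the astronomical-waste argument.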
I’m not sure expanding into space necessarily (in the “all else equal” sense) reduces x-risk, since space warfare has the capacity to be pretty brutal (impact weapons, e.g.), and the increased computational resources granted by a mature self-replicating space industrial capacity could lead to earlier brute-forcing of AGI. It’s probably important to control who has access to space for it to actually reduce x-risk (just like any other form of great power, really). You would certainly eliminate some x-risks entirely, though (natural asteroid impact, a virus that wipes out humanity, global warming caused by reliance on carbon-based fuels, a nearby supernova, etc.).
In that case you should invest directly in base technologies. The private sector will find the most profitable uses for them, and usually there are more profitable applications for technology than space. Everyone loves to talk about all the new technologies which came out of the U.S. space program, but imagine how much more we would have gotten had we invested the same amount of money directly into medical technology, material science, and orange-flavored powdered drink mix.
The technologies required for that are various things which are beyond our current abilities. We can’t even do self replication on Earth. We may as well start with the fundamental nanoengineering and artificial intelligence domains. We don’t know how space tech and missions will evolve, so if we try to make applied technology for current missions then much of the effort will be poorly targeted and less useful. It’s already clear that more serious basic problems in materials science, AI and other domains must be overcome for space exploration to provide positive returns, and those are the fields which both the private sector and the government are less interested in supporting (due to long term horizons and riskiness of profits for the private sector, and lack of politically sellable ‘results’ for the government).
I’m with you on the spinoffs argument; however, we’re concerned with technologies specifically useful in space for tapping space resources. What is the profitable application of a zero-gravity refinery for turning heterogeneous rocks into aluminum? Assume the process is (at small scale) around 5% as energy efficient as electrolyzing bauxite and requires a high vacuum. Chances are such a thing could be worth something in a world without cheaper ways to get aluminum, assuming you could work around the gravity difference. Not so much in a world with abundant bauxite, gravity, and an atmosphere. So there is little incentive to develop in that direction unless you are actually planning to use it in space, where it would be highly useful (because aluminum is so useful in the service of energy collection in space that 5% energy efficiency actually wouldn’t slow growth by much).
We actually already replicate machinery, with the aid of human labor. An orbital factory wouldn’t have much signal delay relative to the human nervous system, so the minimal requirement for a fully self-replicating space swarm seems to be telerobotics good enough to mimic the human hand well enough to perform maintenance and assembly tasks. No new advances in nanoengineering or artificial intelligence are needed. However, until such a system had replicated itself enough to return results, it would be a monetary sink rather than a source of profit. It would become profitable at some point, because the cheap on-site energy, ready-made vacuum, zero gravity, absence of atmosphere and weather, reduction of rent and crowding issues due to 3D construction, enhanced transport logistics between factories due to vacuum and zero gravity, etc., would all contribute to making it more efficient.