I’m in favor of questioning the logic of people like Musk, because I think the mindset needed to be a successful entrepreneur is significantly different from the mindset needed to improve the far future in a way that minimizes the chance of backfire. I’m also not that optimistic about colonizing Mars as a cause area. But I think you are being overly pessimistic here:
The Great Filter is arguably the central fact of our existence. Either we represent an absurd stroke of luck, perhaps the only chance the universe will ever have to know itself, or we face virtually certain doom in the future. (Disregarding the simulation hypothesis and similar possibilities. Maybe dark matter is computronium and we are in a nature preserve. Does anyone know of other ways to break the Great Filter’s assumptions?)
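To make that dichotomy concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is a hypothetical placeholder I picked for illustration, not an estimate anyone in this thread has made.

```python
# Back-of-envelope sketch of the Great Filter dichotomy.
# Every number here is a hypothetical placeholder, not an estimate from this thread.

habitable_sites = 1e22      # rough placeholder for potentially habitable sites in our past light cone
p_reach_our_stage = 1e-10   # assumed chance a site produces a civilization at our stage (hypothetical)

civs_at_our_stage = habitable_sites * p_reach_our_stage
print(f"Expected civilizations at our stage: {civs_at_our_stage:.0e}")   # 1e+12

# An apparently empty sky means essentially none of them went on to visible expansion.
# So either p_reach_our_stage is astronomically smaller than assumed (we are the
# "absurd stroke of luck"), or the chance of surviving from our stage to expansion
# is roughly bounded by:
p_survive_bound = 1 / civs_at_our_stage
print(f"Implied per-civilization survival odds: {p_survive_bound:.0e}")  # 1e-12, i.e. near-certain doom
```

The point is just that the overall product has to come out tiny, so the tininess must live either behind us or ahead of us.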
Working on AI safety won’t plausibly help with the Great Filter. AI itself isn’t the filter. And if the filter is late, AI research won’t save us: a late filter implies that AI is hard. (Restated: an uncolonized galaxy suggests that superintelligence has never been developed, since a superintelligence would presumably have expanded visibly by now; that means civilizations reliably fail before developing superintelligence. So if the filter is ahead of us, it will strike before superintelligence arrives. More thoughts of mine.)
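The restated argument is essentially a Bayesian update on the empty sky, under the assumption that a superintelligence would go on to expand visibly. A toy version, with probabilities I made up purely for illustration, might look like this:

```python
# Toy Bayesian update behind "an uncolonized galaxy suggests AI is hard".
# All three probabilities are made up purely for illustration.

p_ai_easy = 0.5              # prior: superintelligence is easy enough that many civilizations build it
p_empty_if_easy = 0.01       # if AI is easy, some earlier civilization's AI should have visibly expanded
p_empty_if_hard = 0.9        # if AI is hard, an empty-looking sky is unsurprising

p_empty = p_ai_easy * p_empty_if_easy + (1 - p_ai_easy) * p_empty_if_hard
p_ai_easy_given_empty = p_ai_easy * p_empty_if_easy / p_empty
print(round(p_ai_easy_given_empty, 3))   # ~0.011: the empty sky is strong evidence that AI is hard
```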
So what could help if there’s a filter in front of us? The filter is likely non-obvious, because every species before us failed to get through. This decreases the promise of guarding against specific obvious scenarios like asteroid/global warming/nuclear war/pandemic. I have not looked into the less obvious scenarios, but a planetary colony could be useful for some, such as the conversion of regular matter into strange matter as described in this post. (Should the Great Filter caution us against performing innocuous-seeming physics experiments? Perhaps there is a trap in physics that makes up the filter. Physics experiments to facilitate space exploration could be especially deadly—see my technology tree point.)
Colonizing planets before it would otherwise be reasonable to do so looks like a decent project to me in the world where AI is hard and the filter is some random thing we can’t anticipate. Both of these are strong possibilities.
Random trippy thought: Species probably vary on psychological measures like willingness to investigate and act on non-obvious filter candidates. If we think we’re worse at this kind of creative thinking than the typical alien species, then relying on it is probably a losing strategy for us. If we think the filter is ahead of us, we should go into novel-writing mode and ask: what might be unique about our situation as a species that will somehow allow us to squeak past the filter? Then we can try to play to that strength.
We could study animal behavior to answer questions of this sort. A quick example: Bonobos are quite intelligent, and they may be much more benevolent than humans. If the filter is late, bonobo-type civilizations have probably been filtered many times, which suggests that working to make humanity more bonobo-like and cooperative will not help with a late filter. (On the other hand, I think I read that humans have an unusual lack of genetic diversity relative to the average species, due to a relatively recent near-extinction event… so perhaps that adds up to a significant advantage in intraspecies cooperation overall?)
BTW, there’s more discussion of this thread on Facebook and Arbital.
If the filter is ahead of us, then it’s likely to be the sort of thing which civilizations don’t ordinarily protect against. But humans seem to really like the idea of going to space; it’s a common extension of basic drives for civilizations to expand, explore, conquer, discover, etc.
This decreases the promise of guarding against specific obvious scenarios like asteroid/global warming/nuclear war/pandemic.
Civilizations can be predictably bad at responding to such scenarios, which are often coordination problems or other kinds of dilemmas, so I think it’s still very likely that such scenarios are filters.
I have not looked into the less obvious scenarios, but a planetary colony could be useful for some, such as the conversion of regular matter into strange matter as described in this post. (Should the Great Filter caution us against performing innocuous-seeming physics experiments? Perhaps there is a trap in physics that makes up the filter.)
Civilizations seem to have strong drives to explore other planets anyway. So even if these kinds of possibilities are genuinely neglected, I think it’s unlikely that they are filters, unless they can only strike pre-expansion civilizations, in which case our plans for colonizing other planets can’t be implemented soon enough to remove the risk.