This is a difficult one, and both my thoughts and my justifications (especially the few sources I cite) are very incomplete.
It seems to me for now that existential risk reduction is likely to be negative in expectation, as both human-controlled and AI-controlled futures could contain many orders of magnitude more suffering than the current world (and technological developments could also enable more intense suffering, whether in humans or in digital minds). The most salient ethical problems with the extinction of earth-originating intelligent life seem to be the likelihood of biological suffering continuing on earth for millions of years (though it’s not clear to me whether that suffering would be more or less intense without intelligent life on earth), and the possibility of space (and eventually earth) being colonized by aliens (though whether their values would be better or worse than ours remains an open question in my view).
Another point (which I’m not sure how to weigh in my considerations) is that certain extinction events could massively reduce suffering on earth, by preventing digital sentience or even by ending biological sentient life (this seems unlikely, and I’ve asked here how likely or unlikely EAs think it is).
However, I am very uncertain about the tractability of improving future outcomes, especially considering recent posts by researchers at the Center on Long-Term Risk, or this one by a former researcher there, highlighting how uncertain it is that we are well-placed to improve the future. Nonetheless, I think that efforts to improve the future, like the work of the Center for Reducing Suffering, the Center on Long-Term Risk, or the Sentience Institute, advocate for important values and could have some positive flow-through effects in the medium term (though I don’t necessarily think this robustly improves the longer-term future). I will note, however, that I am biased, since work related to the Center for Reducing Suffering was the primary reason I got into EA.
I am very open to changing my mind on this, but for now I’m under 50% agree because it seems to me that, in short:
Extinction risk reduction could very well have negative expected value.
Efforts to improve the value of futures where we survive might have some moderate positive effects in the short term.
Lots of uncertainties. I expect to have moved my cursor before the end of the week!