Since most of the responders here are defending x-risk reduction, I wanted to chime in and say that I think your argument is far from ludicrous; in fact, it is why I don't prioritize x-risk reduction, even as a total utilitarian.
The main reason it's difficult for me to get on board with pro-x-risk-reduction arguments is that they seem to rely heavily on projections about what might happen in the future, which are very prone to missing important considerations. For example, the claims that wild animal suffering (WAS) will be trivially easy to solve once we have an aligned AI, or that the future is more likely to be optimized for value than for disvalue, both seem overconfident and speculative (even if you can give some plausible-sounding arguments for them).
If I were more comfortable with projections about the far future, I'm still not sure I would end up favoring x-risk reduction. Take AI x-risk: it's possible that we end up with a truly aligned AI, or with a paperclip maximizer, but it's also possible that we end up with a powerful general AI whose values are not as badly misaligned as a paperclip maximizer's, but which are substantially shaped by the values of its creators. In that scenario, it seems crucially important to speed up the improvement of humanity's values.
I agree with Moses that I would much prefer a scenario where everything in our light cone is turned into paperclips to one where, e.g., humans are wiped out by some deadly pathogen but other life continues to exist here and elsewhere in the universe. This doesn't necessarily mean that I favor biorisk reduction over AI risk reduction, since AI risk reduction also has the favorable effect of making a remarkably good outcome (aligned AI) more likely. I don't know which one I'd favor more, all things considered.