[...] whether it would be good or bad for everyone to die
I'm sorry for not engaging with the rest of your comment (I'm not very knowledgeable on questions of cluelessness), but this is something I sometimes hear in X-risk discussion and I find it a bit confusing. Depending on which animals are sentient, it's likely that every few weeks, the vast majority of the world's individuals die prematurely, often in painful ways (being eaten alive or starving). To my understanding, the case EA makes against X-risk is not the badness of death for the individuals whose lives will be somewhat shortened, because it would not seem compelling in that case, especially when aiming to take into consideration the welfare/interests of most individuals on earth. I don't think this is a complex philosophical point or some extreme skepticism: I'm just superficially observing that the situation of "everyone dies prematurely"[1] seems to be very close to what we already have, so it doesn't seem that obvious that this is what makes X-risks intuitively bad.
(To be clear, I'm not saying "animals die so X-risk is good"; my point is simply that I don't agree that the fact that X-risks cause death is what makes them exceptionally bad, and (though I'm much less sure about that) to my understanding, that is not what initially motivated EAs to care about X-risks (as opposed to the possibility of creating a flourishing future, or other considerations I know less well).)
Note that I supposed that "prematurely" was implied when you said "good or bad for everyone to die". Of course, if we think that it's bad in general that individuals will die, no matter whether they die at a very young age or not, the case for X-risks being exceptionally bad seems weaker.
A very important consequence of everyone simultaneously dying would be that there would not be any future people. (I didn't mean to imply that what makes it bad is just the harm of death to the individuals directly affected, just that it would be bad for everyone to die.)
Yes, I agree with that! This is what I consider to be the core concern regarding X-risk. Therefore, instead of framing it as "whether it would be good or bad for everyone to die," the statement "whether it would be good or bad for no future people to come into existence" seems more accurate, as it addresses what is likely the crux of the issue. This latter framing makes it much more reasonable to hold some degree of agnosticism on the question. Moreover, I think everyone maintains some minor uncertainties about this: even those most convinced of the importance of reducing extinction risk often remind us of the possibility of "futures worse than extinction." This clarification isn't intended to draw any definitive conclusion, just to highlight that being agnostic on this specific question isn't as counter-intuitive as the initial statement in your top comment might have suggested (though, as Jim noted, the post wasn't specifically arguing that we should be agnostic on that point either).
I hope I didn't come across as excessively nitpicky. I was motivated to write by the impression that in X-risk discourse, there is sometimes (accidental) equivocation between the badness of our deaths and the badness of the non-existence of future beings. I sympathize with this: given the short timelines, I think many of us are concerned about X-risks for both reasons, and so it's understandable that both get discussed (and this isn't unique to X-risks, of course). I hope you have a nice day of existence, Richard Y. Chappell, I really appreciate your blog!
No worries at all (and best wishes to you too!).
One last clarification I'd want to add is just the distinction between uncertainty and cluelessness. There's immense uncertainty about the future: many different possibilities, varying in valence from very good to very bad. But appreciating that uncertainty is compatible with having (very) confident views about whether the continuation of humanity is good or bad in expectation, and thus not being utterly "clueless" about how the various prospects balance out.