[...] whether it would be good or bad for everyone to die
I'm sorry for not engaging with the rest of your comment (I'm not very knowledgeable on questions of cluelessness), but this is something I sometimes hear in X-risk discussion and I find it a bit confusing. Depending on which animals are sentient, it's likely that every few weeks, the vast majority of the world's individuals die prematurely, often in painful ways (being eaten alive or starving). To my understanding, the case EA makes against X-risk is not the badness of death for the individuals whose lives will be somewhat shortened, because that case would not seem compelling, especially when aiming to take into consideration the welfare / interests of most individuals on earth. I don't think this is a complex philosophical point or some extreme skepticism: I'm just superficially observing that the situation of "everyone dies prematurely"[1] seems to be very close to what we already have, so it doesn't seem that obvious that this is what makes X-risks intuitively bad.
(To be clear, I'm not saying "animals die, so X-risk is good". My point is simply that I don't agree that the fact that X-risks cause death is what makes them exceptionally bad, and (though I'm much less sure about this) to my understanding, that is not what initially motivated EAs to care about X-risks, as opposed to the possibility of creating a flourishing future, or other considerations I know less well.)
Note that I supposed "prematurely" was implied when you said "good or bad for everyone to die". Of course, if we think that it's bad in general that individuals will die, no matter whether they die at a very young age or not, the case for X-risks being exceptionally bad seems weaker.
A very important consequence of everyone simultaneously dying would be that there would not be any future people. (I didn't mean to imply that what makes it bad is just the harm of death to the individuals directly affected, just that it would be bad for everyone to die in that way.)
Yes, I agree with that! This is what I consider to be the core concern regarding X-risk. Therefore, instead of framing it as "whether it would be good or bad for everyone to die," the statement "whether it would be good or bad for no future people to come into existence" seems more accurate, as it addresses what is likely the crux of the issue. This latter framing makes it much more reasonable to hold some degree of agnosticism on the question. Moreover, I think everyone maintains some minor uncertainties about this: even those most convinced of the importance of reducing extinction risk often remind us of the possibility of "futures worse than extinction." This clarification isn't intended to draw any definitive conclusion, just to highlight that being agnostic on this specific question isn't as counter-intuitive as the initial statement in your top comment might have suggested (though, as Jim noted, the post wasn't specifically arguing that we should be agnostic on that point either).
I hope I didn't come across as excessively nitpicky. I was motivated to write by the impression that in X-risk discourse there is sometimes (accidental) equivocation between the badness of our deaths and the badness of the non-existence of future beings. I sympathize with this: given the short timelines, I think many of us are concerned about X-risks for both reasons, and so it's understandable that both get discussed (and this isn't unique to X-risks, of course). I hope you have a nice day of existence, Richard Y. Chappell; I really appreciate your blog!
No worries at all (and best wishes to you too!).
One last clarification I'd want to add is just the distinction between uncertainty and cluelessness. There's immense uncertainty about the future: many different possibilities, varying in valence from very good to very bad. But appreciating that uncertainty is compatible with having (very) confident views about whether the continuation of humanity is good or bad in expectation, and thus not being utterly "clueless" about how the various prospects balance out.