It sounds like at least part of your argument could be summarized as: “Will MacAskill underrates x-risk relative to most longtermists; The Precipice is a better introduction because it focuses on x-risk.”
I don’t have a strong view about the focus on x-risk in general. What I care most about is the lack of clarity on which areas are highest priority, and what I see as two mistakes: not focusing enough on AI, and the wrong emphasis within AI.
In the post I wrote:
One big difference between The Precipice and WWOTF that I’m not as sure about is the framing of interventions as reducing x-risks, as opposed to trajectory changes and safeguarding civilization. I lean toward The Precipice and x-risks here, but this belief isn’t very resilient.
I also discussed this further in the appendix.
IMO, x-risk reduction is not the only (potential) way of influencing the long-term future, so it’s good for an introductory book on longtermism to be fairly agnostic on whether to prioritize x-risk reduction.
(More on the object level: I believe the longtermist community has been too quick to settle on x-risk as the main thing worth working on, and it would be good to have more work on other areas, although I still think x-risk should be the top longtermist priority.)
I think we have an object-level disagreement here, likely due to different beliefs about AI. I agree reducing x-risk isn’t the only possible longtermist intervention, but I’m not convinced that many others are comparably promising. I think influencing AI, directly or indirectly, is likely the most important lever for influencing the future, whether this is framed as x-risk, s-risk, a trajectory change, or safeguarding civilization (x-risk still seems most natural to me, but I’m fine with other phrasings and am also concerned about value lock-in and s-risks, though I think these can be thought of as a class of x-risks).
We also might have different beliefs about the level of agnosticism appropriate in an introductory book. I agree it shouldn’t give off too strong a vibe of “we’ve figured out the most effective things and they are A, B, and C,” but I think it’s valuable to be pretty clear about our current best guesses.
I’m fine with other phrasings and am also concerned about value lock-in and s-risks, though I think these can be thought of as a class of x-risks
I’m not keen on classifying s-risks as x-risks because, for better or worse, most people really just seem to mean “extinction or permanent human disempowerment” when they talk about “x-risks.” I worry that a motte-and-bailey can happen here, where (1) people include s-risks within x-risks when trying to get people on board with focusing on x-risks, but then (2) their further discussion of x-risks basically equates them with x-risks other than s-risks. The fact that the “dictionary definition” of x-risks would include s-risks doesn’t solve this problem.
I think this is a valid concern. Separately, it’s not clear that all s-risks are x-risks, depending on how “astronomical suffering” and “human potential” are understood.
What do you think about the concept of a hellish existential catastrophe? It highlights both that (some) s-risks fall under the category of existential risk and that they have an additional important property absent from typical x-risks. The concept isolates a risk whose reduction should arguably be prioritized by EAs across a range of moral perspectives.