Fourth, I’m not sure why you think I’ve misrepresented MacAskill (do you mean ‘misunderstood’?). In the part you quote, I am (I think?) making my own assessment, not stating MacAskill’s view at all.
You say the following in the 'summary of the book' section (bold part added by me):
If correct, this [the intuition of neutrality] would present a severe challenge to longtermism
By including it in the ‘summary’ section I think you implicitly present this as a view Will espoused in the book—and I don’t agree that he did.
But the cause du jour of longtermism is preventing existential risks so that many happy future generations can exist. If one accepts the intuition of neutrality, that would reduce or remove the good of doing so. Hence, it does present a severe challenge to longtermism in practice—especially if you want to claim, as MacAskill does, that longtermism changes the priorities.
Sure, people talk about avoiding extinction quite a bit, but that isn't the only reason to care about existential risk, as I explain in my post. For example, you can want to prevent existential risks that involve locking in bad states of the world in which we continue to exist, e.g. an authoritarian state such as China using powerful AI to control the world.
One could say reducing x-risk from AI is the cause du jour of the longtermist community. The key point is that reducing x-risk from AI is still a valid priority (for longtermist reasons) if one accepts the intuition of neutrality.
Accepting the intuition of neutrality would involve some re-prioritization within the longtermist community—say, moving resources away from x-risks that are solely extinction risks (like biorisks?) and towards x-risks that involve more than extinction (like s-risks from misaligned AI or digital sentience). I simply don't think accepting the intuition of neutrality is a "severe" challenge for longtermism, and I think it is clear Will doesn't think so either (e.g. see this).