This isn’t an answer to your question, but I’m curious why you consider this to be a question worth raising for longtermism specifically, given that the same question could be raised for any claim of the form ‘x has priority over y’ and the potential to influence how EA resources are allocated. For example, you could ask someone claiming that farmed animal welfare has priority over global health and development whether they worry about the human beings who will predictably die—or be “killed”, as you put it—if donors reallocate funds in accordance with this ranking.
(Separately, note that the form of longtermism defended in WWOTF does not in fact imply that benefitting future beings has priority over helping people now.)
I raised the question because this thread is an AMA with Will about his current book. Will’s writing influences my philanthropy, so I’m curious what kind of impact he expected the book to have on philanthropic work before I dive in. I’m editing the question to speak more to that point.
As for ‘x has priority over y’, I agree that this kind of calculation can be applied to any cause. My EA-inspired philanthropy largely goes to organizations that can save more lives more quickly than others. (Grants to people who can’t afford drugs for multiple myeloma are a current favorite of mine.)
Re: “Killing”. Point taken.