Is your claim “Impartial altruists with ~no credence on longtermism would have more impact donating to AI/GCRs than to animals / global health”?
To my mind, this is the crux, because:
If Yes, then I agree that it totally makes sense for non-longtermist EAs to donate to AI/GCRs
If No, then I’m confused about why one wouldn’t donate to animals / global health instead.
[I use “donate” rather than “work on” because donations aren’t sensitive to individual circumstances, e.g. personal fit. I’m also assuming impartiality because this seems core to EA to me, but of course one could donate / work on a topic for non-impartial / non-EA reasons]
Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone would, in particular, kill most of whatever group you’re partial towards. (With the caveat that “~no credence on longtermism” is underspecified, since we haven’t said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)
FWIW, in the (rough) BOTECs (back-of-the-envelope calculations) we use for opportunity prioritization at Effective Institutions Project, this has been our conclusion as well. GCR prevention is tough to beat for cost-effectiveness even when considering only impacts on a 10-year time horizon, provided you are comfortable making judgments based on expected value with wide uncertainty bands.
I think people have a cached intuition that “global health is most cost-effective on near-term timescales,” but what’s really happened is that “a well-respected charity evaluator that researches donation opportunities with highly developed evidence bases has selected global health as the most cost-effective cause with a highly developed evidence base.” Remove the requirement for certainty about the floor of impact that your donation will have, and all of a sudden a lot of stuff looks competitive with bednets on expected-value terms.
(I should caveat that we haven’t yet tried to incorporate animal welfare into our calculations and therefore have no comparison there.)
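To make the expected-value point concrete, here is a minimal BOTEC sketch in Python. Every figure in it (grant size, lives at stake, the risk-reduction band, the bednet benchmark) is a hypothetical placeholder chosen for illustration, not a number from EIP or GiveWell; the only point is to show how acting on expected value across a wide uncertainty band, rather than requiring a certain floor of impact, changes the comparison against a well-evidenced benchmark.

```python
# Illustrative expected-value BOTEC. Every number below is a hypothetical
# placeholder chosen for the example, not a figure from EIP or GiveWell.

def cost_per_expected_life(cost_usd: float, expected_lives_saved: float) -> float:
    """Dollars per expected life saved (lower is better)."""
    return cost_usd / expected_lives_saved

# Benchmark: a bednet-style intervention with a well-evidenced impact floor.
bednet_benchmark = 5_000  # $/life saved, order-of-magnitude placeholder

# GCR-style intervention: an uncertain reduction in catastrophe probability
# over a 10-year horizon, applied to a large number of lives at stake.
grant_usd = 10_000_000          # hypothetical grant size
lives_at_stake = 1_000_000_000  # hypothetical deaths in the catastrophe
risk_reduction_band = {"pessimistic": 1e-7, "optimistic": 1e-5}  # delta-probability

for label, delta_p in risk_reduction_band.items():
    cpel = cost_per_expected_life(grant_usd, delta_p * lives_at_stake)
    print(f"{label}: ${cpel:,.0f} per expected life vs ${bednet_benchmark:,} benchmark")

# The pessimistic end ($100,000 per expected life) is ~20x worse than the
# benchmark; the optimistic end ($1,000 per expected life) beats it ~5x.
# Which side you emphasize depends on whether you act on expected value
# with wide uncertainty bands or insist on a certain floor of impact.
```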