Is your claim "Impartial altruists with ~no credence on longtermism would have more impact donating to AI/GCRs over animals / global health"?
To my mind, this is the crux, because:
If Yes, then I agree that it totally makes sense for non-longtermist EAs to donate to AI/GCRs.
If No, then I'm confused why one wouldn't donate to animals / global health instead.
[I use "donate" rather than "work on" because donations aren't sensitive to individual circumstances, e.g. personal fit. I'm also assuming impartiality because this seems core to EA to me, but of course one could donate to / work on a topic for non-impartial / non-EA reasons.]
Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone would, in particular, kill most of whatever group you're partial towards. (With the caveat that "no credence on longtermism" is underspecified, since we haven't said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)
FWIW, in the (rough) BOTECs we use for opportunity prioritization at Effective Institutions Project, this has been our conclusion as well. GCR prevention is tough to beat on cost-effectiveness even when counting only the impacts that arrive within a 10-year time horizon, provided you are comfortable making judgments based on expected value with wide uncertainty bands.
I think people have a cached intuition that "global health is most cost-effective on near-term timescales," but what's really happened is that "a well-respected charity evaluator that researches donation opportunities with highly developed evidence bases has selected global health as the most cost-effective cause with a highly developed evidence base." Remove the requirement for certainty about the floor of impact that your donation will have, and all of a sudden a lot of stuff looks competitive with bednets on expected-value terms.
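To make the shape of such a BOTEC concrete, here's a minimal Monte Carlo sketch in Python. Every number in it (the cost per life saved, the catastrophe probability, the risk-reduction range, the $1B field-size normalizer) is a hypothetical placeholder chosen for illustration, not EIP's actual inputs; the point is only the structure: a point estimate on one side, an expected value over wide uncertainty bands on the other.

```python
import math
import random

random.seed(0)  # reproducible illustration

DONATION = 1_000_000  # hypothetical grant size, in dollars

# Option A: bednet-style intervention with a well-evidenced point estimate.
# $5,000 per life saved is an illustrative placeholder, not a quoted figure.
ev_bednets = DONATION / 5_000  # expected lives saved

def log_uniform(lo: float, hi: float) -> float:
    """Draw uniformly in log10 space, so each order of magnitude is equally likely."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def gcr_draw() -> float:
    """One sample of lives saved by GCR prevention, counting a 10-year horizon only."""
    p_catastrophe = log_uniform(1e-3, 1e-1)  # P(catastrophe within 10 years)
    deaths = log_uniform(1e8, 7e9)           # lives lost if it happens
    # Share of total risk removed per donated dollar, assuming a hypothetical
    # $1B field-wide effort achieves a 0.1%-10% relative risk reduction:
    risk_reduction = log_uniform(1e-3, 1e-1) * (DONATION / 1e9)
    return p_catastrophe * deaths * risk_reduction

N = 100_000
ev_gcr = sum(gcr_draw() for _ in range(N)) / N

print(f"Bednets (point estimate): {ev_bednets:,.0f} expected lives saved")
print(f"GCR prevention (MC mean): {ev_gcr:,.0f} expected lives saved")
```

Log-uniform draws are one simple way to encode "unsure across orders of magnitude"; with ranges this wide, the mean is dominated by the upper tail, which is exactly the feature a donor has to be comfortable acting on.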
(I should caveat that we haven't yet tried to incorporate animal welfare into our calculations and therefore have no comparison there.)