“Should EA develop any framework for responding to acute crises where traditional cost-effectiveness analysis isn’t possible? Or is our position that if we can’t measure it with near-certainty, we won’t fund it—even during famines?”
This is tricky. I think that most[1] of EA is outside of global health/welfare, and much of that work is incredibly speculative. AI safety is pretty wild, and even animal welfare work can be more speculative than typical global health interventions.
GiveWell has historically represented much of the EA-aligned global welfare work. They’ve also seemed to cater to particularly risk-averse donors, from what I can tell.
So an intervention like this sits in a tricky middle ground: much less speculative than AI risk, but more speculative than much of the GiveWell spend. This is about the point where you can’t really think of “EA” as one unified thing with one utility function. The funding works much more as a bunch of different buckets with fairly different criteria.
Bigger-picture, EAs have a very small sliver of philanthropic spending, which itself is a small sliver of global spending. In my preferred world we wouldn’t need to be so incredibly ruthless with charity choices, because there would just be much more available.
[1] In terms of respected EA discussions/researchers.