Hi Hannah! This is my very personal perspective; I’m still relatively new to EA.
On “uncertainty in general”, I see lots of posts on Moral Uncertainty, Cluelessness, Model Uncertainty, “widen your confidence intervals”, “we consider our cost-effectiveness numbers to be extremely rough”, and so on, even after spending tens of millions per year on research.
I think this is very different from the attitude of the Scientific Charity movement.
On “beneficiaries’ preferences”, I agree with you that the vast majority of EA in practice discounts them heavily, probably much more than when the post I linked to was written.
They are definitely taken into account, though. I really like this document from a GiveWell staff member, and I think it’s representative of how a large part of EA not focused on x-risk/longtermism thinks about these things, especially now that GiveDirectly has been removed from GiveWell’s recommended charities, which I think is a big change, aura-wise.
But lots of EAs still donate to GiveDirectly, and GiveDirectly still gives talks at EA conferences and appears on EA job boards.
I personally really like the recent posts and comments advocating for more research, and I think taking beneficiaries’ preferences into account is a tricky moral problem for interventions targeting humans.
Also probably worth mentioning: “Big Tent EA” and “EA as a question”.