Hi Lorenzo, can you please expand on “EAs are also much less confident that they know what people need better than they do”?
In my experience, EA has an aura of confidence that its conclusions are more accurate or effective than others’ (including beneficiaries’) because people within EA arrive at their conclusions using robust tools.
Hi Hannah! This is my very personal perspective; I’m still relatively new to EA.
On “uncertainty in general”, I see lots of posts on Moral Uncertainty, Cluelessness, Model Uncertainty, “widen your confidence intervals”, “we consider our cost-effectiveness numbers to be extremely rough”, and so on, even after tens of millions of dollars per year are spent on research.
I think this is very different from the attitude of the Scientific Charity movement.
On “beneficiaries’ preferences”, I agree with you that the vast majority of EA in practice discounts them heavily, probably much more than when the post I linked to was written.
They are definitely taken into account, though. I really like this document from a GiveWell staff member, and I think it’s representative of how a large part of EA not focused on x-risk/longtermism thinks about these things, especially now that GiveDirectly has been removed from GiveWell’s list of recommended charities, which I think is a big change aura-wise.
But lots of EAs still donate to GiveDirectly, and GiveDirectly still gives talks at EA conferences and appears on EA job boards.
I personally really like the recent posts and comments advocating for more research, and I think taking beneficiaries’ preferences into account is a tricky moral problem for interventions targeting humans.
It’s also probably worth mentioning “Big Tent EA” and “EA as a question”.