At first glance, I was worried that a lot of it would be low-quality criticisms that attack strawmen of EA, and the other comments in this thread basically confirm this.
It’s astounding how often critics of EA get basic things about EA wrong. The two most salient ones that I see are:
“EA is founded on utilitarianism”: no, it’s not. EA is loosely based on utilitarianism, but as Erich writes, EA is compatible with a broader range of ethical frameworks, particularly beneficentric ones.
Corollary: if you morally value other things besides welfare (e.g. biodiversity), come up with some way to trade off those moral goods against each other and compare interventions using your all-things-considered metric.
“EA only cares about things that can be easily measured”: again, no. It’s great when there are empirical studies of cost-effectiveness, but we all recognize that’s not always possible. In general, the EA movement has become more open to acting under greater uncertainty. What ultimately matters is estimating impact, not measuring it. Open Phil had back-of-the-envelope calculations for a vast set of cause areas in 2014. EAs have put pages and pages of effort into trying to estimate the impact of things that are hard to measure, like economic growth and biosecurity interventions.
To be fair, my responses to these criticisms above still assume that you can quantify good done. But in principle, you can compare the impact of interventions without necessarily quantifying them (e.g. using ordinal social welfare functions); it’s just a lot easier to make up numbers and use them.
On the one hand, I really wish people who want to criticize EA would actually do their fucking homework and listen to what EAs themselves have said about the things they’re criticizing. On the other hand, if people consistently mistakenly believe that EA is only utilitarianism and only cares about measurable outcomes, then maybe we need to adjust our own messaging to avoid this.
So if you’re wondering why EAs are reluctant to engage with “deep criticisms” of EA principles, maybe it’s because a lot of them miss the mark.
I think it’s wrong to read this as criticism in the sense EA usually means it (“tell me what I’m doing wrong so I can fix it”); rather, it highlights a set of existing fundamental disagreements. I think the book is targeted at an imagined left-wing young person who the authors think would be “tricked” into EA by misreading certain claims that EA puts forward. It’s a form of memeplex competition. That said, I do think some of the empirical details about the effect ACE has had on the wider community can help a lot of EAs coordinate with the broader ecosystems in their cause areas and avoid common communication failure modes.
Well, no. I gather that the goal of these criticisms is to “disprove EA” or “argue that EA is wrong”. To the extent that they attack strawmen of EA instead of representing it for what it is and arguing against that, they’ve failed to achieve that goal.
FWIW, I read your comment as agreeing with zchuang’s. They say that the book aims to convince its target audience by highlighting fundamental differences, and you say it aims to disprove EA. Highlighting (and, sometimes at least, arguing against) fundamental principles of EA seems like it’s in the “disproving EA” bucket to me.
(I agree with both of these perspectives, but I only read a single essay, which zchuang called one of the weird ones, so take that with a grain of salt.)