There’s also the feedback we get in talks, and the comments on all the articles and media attention we’ve gotten, which is very extensive. I’ve also presented on these topics in an academic setting.
From this, I feel I know the most common criticisms of EA (as practiced, rather than in theory) pretty well.
doesn’t appreciate the importance of systemic change
too focused on the measurable rather than unquantifiable benefits
smuggles utilitarian assumptions under the table (e.g. that you can aggregate small benefits and weigh them against large benefits; e.g. that you shouldn’t be much more concerned with avoiding causing harm than with actively doing good)
…
However, I haven’t seen a smart outside person spend a considerable amount of time evaluating and criticising effective altruism. These objections are just the ones that people think of off the top of their head. I’d really like to see what someone who spent e.g. a week investigating EA and criticising it would say.
I agree with these, but they also reinforce the fact that “Effective altruism” as a category is quite unwieldy. “too focused on the measurable rather than unquantifiable benefits”—well, we have a huge chunk of people calling themselves EAs who mostly care about totally unquantifiable GCR research. Similarly for utilitarianism and comparing animal and human suffering or other such notions. The “4 causes” commonly identified with EA have quite distinct weaknesses and it would be good (in my view) if people started assessing them on their own merits rather than lumping them under one banner.
Nitpick: You can’t count the global catastrophes (yep, still zero for this decade) but you might be able to tell if it’s working in other ways… Maybe. But yeah, I agree that that’s the big weakness of GCR research.
Asteroid/comet impacts, supervolcanic eruptions, and even nuclear war risks are quantifiable to within an order of magnitude or two: link. There are additional uncertainties in the cost and efficacy of interventions such as storing food or alternate foods. However, if you value future generations, one to three orders of magnitude of uncertainty is not a significant barrier to making a quantified case.
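As a rough illustration of why a wide uncertainty band need not block a quantified case, here is a toy expected-value sketch. Every figure below is made up for illustration (none comes from the linked analysis); the point is only how the arithmetic behaves under a 1000x error.

```python
# Toy expected-value sketch with hypothetical numbers throughout: even if the
# estimate is off by three orders of magnitude, a quantified case can survive
# once future generations are counted.
risk_reduction = 1e-6   # assumed absolute reduction in extinction probability
future_lives = 1e14     # assumed number of future lives at stake
cost = 1e9              # assumed cost of the intervention, in dollars

expected_lives_saved = risk_reduction * future_lives   # 1e8 expected lives
dollars_per_life = cost / expected_lives_saved         # $10 per expected life

# Pessimistically inflate the cost-effectiveness estimate by 1000x:
pessimistic = dollars_per_life * 1e3                   # $10,000 per expected life

print(dollars_per_life, pessimistic)
```

Even the pessimistic end of a three-order-of-magnitude band can stay in the same range as conventional benchmarks, which is the sense in which the uncertainty is “not a significant barrier”.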
I agree that we get heaps of feedback from talks and media, so I imagine you personally encounter as much criticism of EA as any other single person, and since it’s often brief spots with a big audience, a lot of it probably feels like it’s not well thought through.
doesn’t appreciate the importance of systemic change—too focused on the measurable rather than unquantifiable benefits—smuggles utilitarian assumptions under the table.
Mightn’t there be value in these criticisms?
The systemic change criticism seems valid for EA five years ago. Now, GiveWell have started seriously analysing advocacy, Good Ventures have started funding it, and FHI/CEA have started engaging policymakers, so we’ve decided that these activities are crucial. Next time we could listen sooner, right?
Regarding smuggling in utilitarianism—well, there are related objections about moralising, demandingness and self-sacrifice, which we’ve started to address in the last year or two, and which seem important. When we write in research articles or books, it seems like we are starting to get more careful about stating ethical assumptions, which seems good.
So, as non-smart or poorly-considered as this criticism may be, we have reasons to expect gold there, and any discussion should help the movement’s self-awareness, psychological health and resilience to further criticism.
And I asked for feedback here: https://www.facebook.com/wdcrouch/posts/10100610793427240?stream_ref=10
Would they do it if we paid them?