How to criticise Effective Altruism

I’ve become increasingly frustrated with attempts to critique Effective Altruism. Critiques from within the movement often seem to have large blind spots, while critiques from outside generally miss the mark, attacking the publicised version of the movement rather than the actual claims its members take to be true.

I think it is really important that we regularly hear well-targeted critiques of EA, both the movement and the philosophy. Partly this is because good critiques can help us do good better, adjusting our methods to better fit reality. But just as importantly, hearing the substantial disagreements that other intelligent people have about all aspects of EA is necessary for genuine intellectual and epistemic humility. Without this humility we are likely to go wrong.

Recently I ran a discussion session for my local group in which we went over some critiques of EA. To focus our conversation, I drew up a taxonomy of possible critiques. This helped us to formulate new critical questions, but it also helped us to clarify and understand critiques that come from outside the movement, whose intent can often be lost in translation. In this post I will explain the taxonomy that we worked with.

I split possible criticisms of EA into Goal-level, Procedural and Object-level critiques.

Goal-level

(or we aren’t doing the right thing)

I am characterising the goal (or project) of EA as “doing the most good”. I think this framing is best because the need for action, effectiveness, maximisation and the quantification of good are all implied by the sentence, while (besides maximisation) it implies no specific designation of what “the good” is. If you disagree with this framing, this section should run only a little differently.

Critiques at the Goal-level are those that disagree with holding the ideal of “doing the most good”. My impression is that many EAs join the movement because they already assume this to be the correct goal, and so attempts to steel-man out-group critiques aimed at this level can end up misdirecting the objection towards some other claim. It is important not to do this, because there are very real objections even to this fundamental claim.

These arguments can be intrinsic (the goal is internally incoherent or false), or extrinsic (a movement with this goal should not exist).

Some examples:

  • Meta-ethical issues (intrinsic):

    • What is good is just doing what is right, and that is not a quantifiable concept. (Some deontologists reject the claim that maximisation is good.) Critiques like this are especially relevant if we further accept MacAskill’s definition of effective altruism as “tentatively understanding ‘the good’ in impartial welfarist terms”. However, I don’t think this is necessary at this stage; instead we can elicit the same issue simply by asserting that a core aspect of EA is doing the most good, i.e. quantification and maximisation.

    • Ethics is not the kind of thing that can be systematised at all. (Moral Particularism, for example.)

  • Political issues (extrinsic):

    • Doing good should be the role of the government; making it the goal of an NGO is antithetical to democracy, which itself promotes good, and so the goal of this group is doomed to contradiction. (I’ve probably fluffed this one because I don’t have a solid understanding of the view.)

There are probably many other ways to critique the goal of EA, and the existence of a movement with that goal. I’d love to see more examples in the comments.

It is also worth noting here that disagreements over what “good” is are not critiques of EA under this taxonomy. You have to have an idea of what “good” is to engage with EA at all, but critiques of your idea of “good” target something prior to EA. I think this marries well with the wide range of axiologies that EA allows, from negative utilitarians to hedonists.

Procedural

(or we are doing the right thing in the wrong way)

By procedural I mean, broadly, the ways in which the movement goes about achieving its goals. This includes institutional critiques, but also critiques of general social norms or emphases within the presently existing community. We could also call this layer ‘strategy’, in a broad sense that covers both the explicit strategic decisions made by influential organisations and the (often implicit and undeliberated) decisions to let norms develop and perpetuate.

All critiques of this kind are aimed at the movement as it actually exists, and they argue that EA as it currently is falls short of what its goal implies it should be.

Some examples of both institutional and social or attitudinal critiques:

  • The case of the missing cause prioritisation research: This is procedural because the author agrees that we should do the most good, and agrees that we should prioritise causes in order to do so, but thinks that the projects and institutions needed to do this are not currently in place.

  • Individualistic viewpoints: We look for the best thing individuals can do, and though we sometimes hold meta-level discussions about what the movement needs more of, this fails to recognise that asking what everyone who currently heeds EA advice should do and asking what any given individual should do may lead to very different conclusions.

  • Standpoint epistemology: In many EA interventions (sometimes out of necessity, especially in long-termism) the possible beneficiaries of our actions are not present in the room. This might mean we are far more likely to do the wrong thing by their lights. It might be that EA would do more good by changing its institutions to centre the voices of those who suffer from poverty, injustice, and so on.

  • Demandingness: EA and its associated ethical systems can be very demanding, and perhaps this is antithetical to a good life. This could be a very real problem for the movement for many reasons: its members might achieve less, fewer members will join, and the movement itself could suffer burnout.

Again, there will be many more examples of potential procedural issues with EA; I’d love to hear about some more in the comments.

Object-level

(or a specific application of the correct goals and procedures needs adjusting)

Critiques at this level make up much of what is relevant to EAs on a daily basis. As new empirical information comes to light, we might realise that we should shift resources around, that GiveWell should change its recommendations, or that graduates should be discouraged from applying to roles in AI safety. These critiques are very important, but they should be separated from the philosophy of EA (Goal-level) and from the contingently existing movement itself (Procedural).

Conclusion

Thanks for reading! In the comments, it would be great to hear any problems with this taxonomy. The most important errors to point out would be types of critique that cannot fit into any category; these are more consequential than counter-examples that seem to fit multiple categories (though examples of those are welcome as well). If there are several good objections, I will publish an updated version of the taxonomy in a few weeks. It would also be great to see some more (taxonomised) critiques of EA in the comments, whether your own or your favourites from elsewhere.