EA cause areas are likely power-law distributed too


So there are two pieces of common effective altruist thinking that I think are in tension with each other, but for which a more sophisticated version of a similar view makes sense and dissolves that tension. In my experience, this means people can see others holding the more sophisticated view and end up adopting the simple one without really examining either (and so without discovering the tension).

This proposed tension is between two statements/beliefs. The first is the common (and core!) community belief that the impact of different interventions is power-law distributed, meaning that the very best intervention is many times more impactful than even the almost-best ones. The second is a statement or belief along the lines of "I am so glad someone has done so much work thinking about which areas/interventions would have the most impact, as that means my task of choosing among them is easier", or the extreme version which continues "as that means I don't have to think hard about choosing among them." I will refer to this as the uniform belief.

Now, there are on the face of it many things to take issue with in how I phrased the uniform belief[1], but I want to make two points: 1) I think the uniform belief is a fairly common thing to "casually" hold, since it is a belief that is easy to form automatically after engaging only cursorily with EA topics, and 2) it goes directly against the belief that impact is power-law distributed.

On a psychological level, I think people come to hold the uniform belief when they fail to adequately reflect on and internalise the claim that the impact of interventions is power-law distributed. Once they do, the tension between the power-law belief and the uniform belief becomes clear: if a power law (or simply a right-skewed distribution) holds, then even among the interventions and cause areas already identified, the true impacts may differ enormously from one another. We just don't know which ones have the highest impact.
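To make the gap concrete, here is a minimal, purely illustrative sketch (the numbers are hypothetical, not drawn from any real cost-effectiveness analysis): if the impacts of a short, already-curated list of interventions follow a Pareto (power-law) distribution, the best one typically dwarfs the median, so picking among them as if they were interchangeable forfeits much of the available impact.

```python
# Purely illustrative sketch with hypothetical numbers (not real cost-effectiveness data).
# If a curated shortlist of interventions has Pareto-distributed impact, the best
# item typically dwarfs the median, so a "pick any of them" strategy captures far
# less impact than identifying the best one.
import random

random.seed(0)

ALPHA = 1.2          # assumed Pareto shape parameter (smaller alpha = heavier tail)
SHORTLIST_SIZE = 10  # assumed size of an already-curated shortlist

# random.paretovariate(alpha) draws values >= 1 with a power-law tail.
impacts = sorted(
    (random.paretovariate(ALPHA) for _ in range(SHORTLIST_SIZE)),
    reverse=True,
)

best = impacts[0]
median = impacts[SHORTLIST_SIZE // 2]
random_pick_ev = sum(impacts) / SHORTLIST_SIZE  # expected impact of a uniform random pick

print(f"best intervention:        {best:6.1f}")
print(f"median intervention:      {median:6.1f}")
print(f"uniform random pick (EV): {random_pick_ev:6.1f}")
print(f"best vs. median:          {best / median:6.1f}x")
```

Across reruns with different seeds the exact numbers vary a lot (that is the nature of heavy tails), but the qualitative picture is stable: the best item on the list is usually several times better than the median, which is exactly the gap the uniform belief glosses over.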

Holding the uniform belief is a trap that people who don't reflect too heavily can fall into, and I know this because I was in it myself for a while, making statements like "You can't go wrong choosing among the EA-recommended topics". Now I think you can go wrong in choosing among them, and in many different ways. To be clear, I don't think many people stay in this trap for long: EA has good social mechanisms for correcting others' beliefs[2], and I would expect it is often caught early. But it is the kind of thing I am afraid new or casual EAs might come away permanently believing: that someone else has already done all of the work of figuring out which interventions are the best.

The more sophisticated view, which I think is correct, is that because no one knows ex ante the "true" impact of an intervention, or the total positive consequences of work in an area, you personally cannot know, before you start the difficult work of figuring out what you think, which of the interventions you will end up considering the most important. So at first blush (at first encounter with the 80k problem profiles, or whatever) it is fine to think that all the areas have equal expected impact[3]. You probably won't come in thinking this, because you have some prior knowledge, but it would be fine if you did. What is not fine is to (automatically, unconsciously) go on to choose a career path among them without figuring out what you think is important, what the evidence for each problem area is, and which areas you would be a good personal fit for.

So newcomers see that EA has several problem areas and a wide selection of possible interventions, and can come away thinking "any of these is high impact", when the more correct view, taking the power-law distribution into account, would be more like "any of these could be the most impactful intervention, but we don't know which one yet. After some reflection on myself and on the evidence, I think problem area X is likely to be the most impactful or most important."[4]

There is no one who has done your hard cognitive work for you. You still have to think about which things you think will lead to high impact, and which things you are a good personal fit for.

Thanks to Sam and Conor for feedback.
I’d be interested to hear if you think I’m overstating how common this trap might be.


  1. For example, issues regarding deferring, personal fit, and probably more.

  2. Now there's an ominous sentence if I've ever seen one.

  3. You can of course have meta-beliefs about your expected posterior beliefs about the distribution of impact (that it will be power-law distributed), but not about the position of any single intervention/cause area in that distribution.

  4. Yes, I am sneaking in a transformation here from "this area/intervention is the most impactful" to "I can do my most impactful work in this area/intervention", but I don't think that is substantial.