What are the key claims of EA?

[This is a rough write-up based mainly on my experiences in EA and previous reading (I didn’t do specific reading/​research for this post). I think it’s possible there are important points I’m missing or explaining poorly. I’m posting it anyway in the spirit of trying to overcome perfectionism, and because I mentioned it to a couple of people who were interested in it.]

I think that EA as a worldview contains many different claims and views, and sometimes we may not realise that all these distinct claims are combined in our usual picture of “an EA”, and might instead think EA is just “maximise positive impact”. I initially brainstormed a list of claims I think could be important parts of the EA worldview and then tried to categorise them into themes. What I present below is the arrangement that feels most intuitive to me, although I note multiple complexities/​issues with it below. I used an overall typology of claims about morality, claims about empirical facts about the world, and claims about how to reason. Again, this is just based on some quick intuitions and is not a well-defined typology.

I think this is an interesting exercise for a couple of reasons:

  • It helps us consider which ideas are most core to EA, which informs how we pitch it and how we define the community, e.g. which claims do we focus on when first explaining EA?

  • It demonstrates the wide variety of reasons people might disagree with the common “EA worldview”.

  • It demonstrates that there are some empirical claims EAs tend to believe that most people outside the community don’t, and that aren’t direct implications of the moral claims (e.g. AI poses a large threat; there’s a large variation in the impact of different charities). We might expect EA to be defined by a single key insight rather than several unrelated ones (it’s one thing to notice the world is getting something wrong in one way, but it feels more unlikely that we’d be the only ones to notice several independent flaws). However, I do think these independent empirical claims can be explained by how EA values draw the community’s attention to specific areas and give it an incentive to try to reason accurately about them.

(I’ve bolded the specific claims, and the other bullet points are my thoughts on these)

I’d be interested to hear if there are important claims I’ve missed, if some of the claims below could be separated out further, or if there’s a clearer path through the different claims. A lot of my thinking on this was informed by Will MacAskill’s paper and Ben Todd’s podcast.

Moral Claims

Claims about what is good and what we ought to do.

  • Defining good

    • People in EA often have very similar definitions of what good means:

    • The impact of our actions is an important factor in what makes them good or not.

    • We should define good in a roughly impartial, welfarist way.

      • I roughly understand this as: the definition of good does not depend much on who you are, and depends roughly on the impact of your actions on the net value of the relevant lives in the world.

      • This definition of good then helps lead us to thinking in a scope-sensitive way.

    • When considering the relevant lives, this includes all humans, animals, and future people. We generally do not discount the lives of future people intrinsically at all.

      • This longtermist claim is common but not universal in EA, and I’m brushing over multiple population ethics questions here (e.g. several EAs might hold person-affecting views).

  • Moral obligations

    • We should devote a large amount of our resources to trying to do good in the world

      • I think this is often missed and is not really included in common definitions of EA, which instead focus on maximising impartial impact with whatever resources you choose to devote to doing good. But I think this misses something important. Someone who donated £2 a year based on maximising impact, and acted in their own interests the rest of the year, would probably be quite out of place in the current EA community.

      • This is an important theme in Peter Singer’s work (I think), and aligns with a lot of common ideas of being a good person that involve being selfless. I think it may get less attention in EA at the moment because many people think that choosing a high-impact career is the most important thing for you to do, and that this can often align quite well with your own interests. I highlight this below as a separate empirical claim.

  • Maximisation

    • When trying to do good we should seek to maximise our positive impact

      • This is perhaps the most fundamental part of the EA worldview. It’s useful to note that it doesn’t include a definition of good: one could seek to maximise positive impact defined only by impact on people in one’s own country. There is also some sense in which this naturally arises from any consequentialist definition of good where the more positive impact you have, the better. So I sometimes struggle to disentangle this as a separate claim. Maybe when people disagree with this claim, it’s often because they view positive impact as not a large factor in whether an action is good or bad.

Empirical Claims

  • There are also multiple parts of the EA worldview that are empirical claims about the world, not just questions of morality. It’s interesting to think about how someone with the above moral views would act if they didn’t hold some of these empirical views (it could be quite different from how the EA community looks at the moment).

  • Differences in impact

    • There is large variation in the impact of different approaches to doing good

      • I think this really is a core EA claim that is sometimes neglected. The reason we spend time thinking about impact, and getting other people to think about it, is that there are huge differences. If we had the above moral views but thought there wasn’t much variation in impact, we wouldn’t spend anywhere near as much time thinking about how to have an impact or encouraging other people to.

      • This can also help show that one doesn’t have to view impartial impact as the only thing that matters: some people would argue that even if you care about other things as well, this huge variation in impartial impact should still be a major concern.

  • How people normally try to do good

    • Another key part of the empirical EA worldview is that people don’t usually make decisions about doing good based on maximising impact. If everybody did, there’d be no need for a separate EA movement. It’s an interesting question how much this is because people don’t hold the above moral views, don’t realise the differences in impact, or just have bad intuitions/​reasoning about which options are high impact.

    • Our intuitions about what is high impact are often wrong

    • People often do not base their decisions on maximising impact

  • Size of maximum impact

    • An individual in a rich country can at least save many lives through their actions

    • Not necessarily a core claim, but I think many people in EA are motivated by how large their potential impact could be. This is also perhaps distinct from differences in impact being large: one could think there is large variation in impact, but that the best option for doing good still achieves much less than saving one person’s life. A consequence of this would be to spend much more time on improving your own life.

  • Facts about the world

    • I think there are also multiple somewhat unrelated claims that play very important roles in EA’s current prioritisation.

    • Existential risk is “high”

      • A lot of people in EA think the risk of human extinction or civilisational collapse is higher than people outside EA do. Although this might not be very clear cut: EAs devote a lot of attention to existential risk, but this might just be because they view extinction as much worse than non-EAs do due to a longtermist worldview, in which case they need not have a higher estimate of the risk.

    • The human world has got better

      • Not necessarily a key claim in most EA cause prioritisation, but I think it’s an important background claim that humans today live better lives than they did in the past.

      • I changed this from “the world has got better” because the effect of factory farming on the net value of the world is uncertain for some EAs.

    • Potentially huge future

      • A key claim in much longtermist prioritisation is that there could be an astronomically large number of people in the future. A separate claim could also be that the expected number of people in the future is astronomically large.

    • Long-term effects persist

      • Another key claim in the longtermist worldview is that our actions today can have persistent effects long into the future, as opposed to “washing out” such that we should base our prioritisation on short-term effects.

    • Animals in factory farms have net negative lives

      • Not a fundamental claim for EA, and not a view unique to EA, but still an important/​common view in an unusually veggie/​vegan community. And this is a distinct claim from “animal lives matter”: one could care about animal lives but think they live good lives in factory farms, and therefore not think factory farming is bad.

      • It might be the case though that most people outside EA don’t value animal lives and so don’t really consider this point.

    • Most humans in the world have net positive lives

      • Again, not a fundamental claim, but I think our prioritisation would look different if this weren’t true. For example, if you believed large numbers of people had net negative lives, and either didn’t expect this to improve or didn’t value future lives, you might not view certain global catastrophic or extinction risks as that bad.

    • Sentience is not limited to humans/​biological beings

      • It is perhaps a semantic issue where you draw the line between moral claims about sentience and sentient beings and empirical claims about sentience. But I think in particular the view that a “biological” body is not required for sentience, and that e.g. digital minds could be sentient, is an important consideration relevant to a lot of longtermist EA prioritisation.

    • We live in an “unusual” time in history

      • This is quite a vague claim, and isn’t necessarily equating unusual with important/​hingy. However, I think most(?) EAs share the view that the industrial revolution was historically abnormal, that the current world is quite unusual, and that the future could be very different from the past.

Claims about Reasoning

  • I feel most unsure how to differentiate claims under this heading.

  • But there is some sense in which EAs all agree on trying to reason critically, based on evidence and honest inquiry, and on being truth-seeking.

  • We should be truth-seeking.

    • I think for some people this comes downstream of wanting to have a large positive impact: seeing the world as it is is instrumentally useful for having a large positive impact. However, some people (perhaps those with more of a rationalist bent) might view this as intrinsically valuable.

I’ve been quite vague in my descriptions above and am likely missing a lot of nuance. For me personally, many of these claims are downstream of feeling morally obligated to try to improve the world as much as possible, together with an impartial and welfarist definition of good.