What actually is the argument for effective altruism?
We just released a podcast episode with me about what the core arguments for effective altruism actually are, and potential objections to them.
I wanted to talk about this topic because I think many people – even many supporters – haven’t absorbed the core claims we’re making.
As a first step in tackling this, I think we could better clarify what the key claim of effective altruism actually is, and what the arguments for that claim are. Doing so would also help us improve our own understanding of effective altruism.
The most relevant existing work is Will MacAskill’s introduction to effective altruism in the Norton Introduction to Ethics, though it argues for the claim that we have a moral obligation to pursue effective altruism, and I wanted to formulate the argument without making it a moral obligation. What I say is also in line with MacAskill’s definition of effective altruism.
I think a lot more work is needed in this area, and don’t have any settled answers, but I hoped this episode would get discussion going. There are also many other questions about how best to message effective altruism after it’s been clarified, which I mostly don’t get into.
In brief, here’s where I’m at. Please see the episode to get more detail.
The claim: If you want to contribute to the common good, it’s a mistake not to pursue the project of effective altruism.
The project of effective altruism is defined as the search for the actions that do the most to contribute to the common good (relative to their cost). It can be broken into (i) an intellectual project – a research field aimed at identifying these actions – and (ii) a practical project of putting these findings into practice and having an impact.
I define the ‘common good’ in the same way Will MacAskill defines the good in “The definition of effective altruism”, as what most increases welfare from an impartial perspective. This is only intended as a tentative and approximate definition, which might be revised.
The three main premises supporting the claim of EA are:
Spread: There are big differences in how much different actions (with similar costs) contribute to the common good.
Identifiability: We can find some of these high-impact actions with reasonable effort.
Novelty: The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do.
The idea is that if some actions contribute far more than others, if we can find those actions, and if they’re not the same as what we’re already doing, then – if you want to contribute to the common good – it’s worth searching for these actions. Otherwise, you’re failing to achieve as much for the common good as you could, and could better achieve your stated goal.
Moreover, we can say that it’s more of a mistake not to pursue the project of effective altruism the greater the degree to which each of the premises holds. For instance, the greater the degree of spread, the more you’re giving up by not searching (and likewise for the other two premises).
We can think of the importance of effective altruism quantitatively as how much your contribution is increased by applying effective altruism compared to what you would have done otherwise.
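One rough way to make this explicit (just an illustrative formalisation, with the terms meant loosely):

$$\text{importance of EA for you} \;\approx\; \frac{\text{impact of the actions you find by pursuing the project of EA}}{\text{impact of what you would have done otherwise}}$$

The larger this ratio, the bigger the mistake of not searching.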
Unfortunately, there’s not much rigorously written up about how much actions differ in effectiveness ex ante, all things considered, and I’m keen to see more research in this area.
In the episode, I also discuss:
Some broad arguments for why the premises seem plausible.
Some potential avenues to object to these premises – I don’t think these objections work as stated, but I’d like to see more work on making them better. (I think most of the best objections to EA are about EA in practice rather than the underlying ideas.)
Common misconceptions about what EA actually is, and some speculation on why these got going.
A couple of rough thoughts on how, given these issues, we might improve how we message effective altruism.
I’m keen to see people running with developing the arguments, making the objections better, and thinking about how to improve messaging. There’s a lot of work to be done.
If you’re interested in working on this, I may be able to share some draft documents with you that go into a little more detail.
This isn’t much more than a rotation (or maybe just a rephrasing), but:
When I offer a 10 second or less description of Effective Altruism, it is hard to avoid making it sound platitudinous. Things like “using evidence and reason to do the most good”, or “trying to find the best things to do, then doing them” are things I can imagine the typical person nodding along with, but then wondering what the fuss is about (“Sure, I’m also a fan of doing more good rather than less good—aren’t we all?”) I feel I need to elaborate with a distinctive example (e.g. “I left clinical practice because I did some amateur health econ on how much good a doctor does, and thought I could make a greater contribution elsewhere”) for someone to get a good sense of what I am driving at.
I think a related problem is that the ‘thin’ version of EA can seem slippery when engaging with those who object to it. “If indeed intervention Y was the best thing to do, we would of course support intervention Y” may (hopefully!) be true, but is seldom the heart of the issue. I take it that most common objections are not against the principle but the application (I also suspect this reply may inadvertently annoy an objector, given it can paint them as, bizarrely, ‘preferring less good to more good’).
My best try at what makes EA distinctive is a summary of what you spell out with spread, identifiability, etc: that there are very large returns to reason for beneficence (maybe ‘deliberation’ instead of ‘reason’, or whatever). I think the typical person does “use reason and evidence to do the most good”, and can be said to be doing some sort of search for the best actions. I think the core of EA (at least the ‘E’ bit) is the appeal that people should do a lot more of this than they would otherwise—as, if they do, their beneficence would tend to accomplish much more.
Per OP, motivating this is easier said than done. The best case is for global health, as there is a lot more (common sense) evidence one can point to about some things being a lot better than others, and these object-level matters, which a hypothetical interlocutor is fairly likely to accept, also offer support for the ‘returns to reason’ story. For most other cause areas, the motivating reasons are typically controversial, and the (common sense) evidence is scant-to-absent. Perhaps the best moves here would be pointing to these as salient considerations which plausibly could dramatically change one’s priorities, and so exploring to uncover these is better than exploiting after more limited deliberation (but cf. cluelessness).
On “large returns to reason”: My favorite general-purpose example of this is to talk about looking for a good charity, and then realizing how much better the really good charities were than others I had supported. I bring up real examples of where I donated before and after discovering EA, with a few rough numbers to show how much better I think I’m now doing on the metric I care about (“amount that people are helped”).
I like this approach because it frames EA as something that can help a person make a common decision—“which charity to support?” or “should I support charity X?”—but without painting them as ignorant or preferring less good (in these conversations, I acknowledge that most people don’t think much about decisions like this, and that not thinking much is reasonable given that they don’t know how huge the differences in effectiveness can be).
Hi Greg,
I agree that when introducing EA to someone for the first time, it’s often better to lead with a “thick” version, and then bring in the thin version later.
(I should maybe have made clearer that my aim wasn’t to provide a new popular introduction, but rather to clarify what “thin” EA actually is. I hope this will inform future popular intros to EA, but that involves a lot of extra steps.)
I also agree that many objections are about EA in practice rather than the ‘thin’ core ideas, that it can be annoying to retreat back to thin EA, and that it’s often better to start by responding to the objections to thick EA. Still, I think it would be ideal if more people understood the thin/thick distinction (I could imagine more objections starting with “I agree we should try to find the highest-impact actions, but I disagree with the current priorities of the community because...”), so I think it’s worth making some efforts in that direction.
Thanks for the other thoughts!
Interesting write-up, thanks. However, I don’t think that’s quite the right claim. You said: “If you want to contribute to the common good, it’s a mistake not to pursue the project of effective altruism.”
But this claim isn’t true. If I only want to make a contribution to the common good, but I’m not at all fussed about doing more good rather than less (given whatever resources I’m deploying), then I don’t have any reason to pursue the project of effective altruism, which you say is searching for the actions that do the most good.
A true alternative to the claim would be something like: “If you want to do the most good you can, it’s a mistake not to pursue the project of effective altruism.”
But this claim is effectively a tautology, seeing as effective altruism is defined as searching for the actions that do the most good. (I suppose someone who thought that how to do the most good was just totally obvious would see no reason to pursue the project of EA.)
Maybe the claim of EA should emphasise the non-obviousness of how to do the most good. Something like: the highest-impact ways to contribute to the common good are not obvious, and have to be searched for.
This is an empirical claim, not a conceptual one, and its justification would seem to be the three main premises you give.
That’s an interesting point. I was thinking that most people would say that if my goal is X, and I achieve far less of X than I easily could have, then that would qualify as a ‘mistake’ in normal language. I also wondered whether another premise should be something very roughly like ‘maximising: it’s better to achieve more rather than less of my goal (if the costs are the same)’. I could also see contrasting with some kind of alternative approach being another good option.
If your goal is to do X, but you’re not doing as much as you can of X, you are failing (with respect to X).
But your claim is more like “If your goal is to do X, you need to do Y, otherwise you will not do as much of X as you can”. The Y here is “the project of effective altruism”. Hence there needs to be an explanation of why you need to do Y to achieve X. If X and Y are the same thing, we have a tautology (“If you want to do X, but you do not-X, you won’t do X”).
In short, it seems necessary to say what is distinctive about the project of EA.
Analogy: say I want to be a really good mountain climber. Someone could say, oh, if you want to do that, you need to “train really hard, invest in high quality gear, and get advice from pros”. That would be helpful, specific advice about the right means to achieve my end. Someone who says “if you want to be good at mountain climbing, follow the best advice on how to be good at mountain climbing” hasn’t yet told me anything I don’t already know.
I see where you’re coming from but I actually agree with Michael.
In reality a lot of people are interested in contributing to the common good but actually aren’t interested in doing this to the greatest extent possible. A lot of people are quite happy to engage in satisficing behaviour whereby they do some amount of good that gives them a certain amount of satisfaction, but then forget about doing further good. In fact this will be the case for many in the EA community, except the satisficing level is likely to be much higher than average.
So, whilst it’s possible this is overly pedantic, I think “the claim” could use a rethink. It’s too late in the evening for me to be able to advise on anything better though...
I think adding a maximizing premise like the one you mention could work to assuage these worries.
I actually think more is needed.
If “it’s a mistake not to do X” means “it’s in alignment with the person’s goal to do X”, then I think there are a few ways in which the claim could be false.
I see two cases where you want to maximize your contribution to the common good, but it would still be a mistake (in the above sense) to pursue EA:
you are already close to optimal effectiveness and the increase in effectiveness by some additional research in EA is so small that you would be maximizing by just using that time to earn money and donate it or have a direct impact
pursuing EA causes you to not achieve another goal which you value at least equally or a set of goals which you, in total, value at least equally
If that’s true, then we need to reduce the scope of the conclusion VERY much. I estimate that the fraction of people caring about the common good for whom Ben’s claim holds is in [1/100000, 1/10000]. So in the end the claim can be made for hardly anyone, right?
It seems to me there’s a fourth key premise:
0. Comparability: It is possible to make meaningful comparisons between very different kinds of contributions to the common good.
Hey, I agree something like that might be worth adding.
The way I was trying to handle it is to define ‘common good’ in such a way that different contributions are comparable (e.g. if common good = welfare). However, it’s possible I should add something like “there don’t exist other values that typically outweigh differences in the common good thus defined”.
For instance, you might think that justice is incredibly intrinsically important, such that what you should do is mainly determined by which action is most just, even if there are also large differences in terms of the common good.
I was actually assuming a welfarist approach too.
But even under a welfarist approach, it’s not obvious how to compare campaigning for criminal justice reform in the US to bednet distribution in developing countries.
Perhaps it’s the case that this is not an issue if one accepts longtermism. But that would just mean that the hidden premise is actually longtermism.
Hmm in that case, I’d probably see it as a denial of identifiability.
I do think something along these lines is one of the best counterarguments to EA. I see it as the first step in the cluelessness debate.
Regarding the novelty premise (“The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do”): it’s not entirely clear to me what this means (specifically what work the “can” is doing).
If you mean that it could be the case that we find high-impact actions which are not the same as what people who want to contribute to the good would typically do, then I agree this seems plausible as a premise for engaging in the project of effective altruism.
If you mean that the premise is that we actually can find high-impact actions which are not the same as what people who want to contribute to the common good typically do, then it’s not so clear to me that this should be a premise in the argument for effective altruism. This sounds like we are assuming what the results of our effective altruist efforts to search for the actions that do the most to contribute to the common good (relative to their cost) will be: that the things we discover to be high impact will be different from what people typically do. But, of course, it could turn out that the highest-impact actions are actually those which people typically do (our investigations could turn out to vindicate common sense, after all), so it doesn’t seem like this is something we should take as a premise for effective altruism. It also seems in tension with the idea (which I think is worth preserving) that effective altruism is a question (i.e. effective altruism itself doesn’t assume that particular kinds of things are or are not high impact).
I assume, however, that you don’t actually mean to state that effective altruists should assume this latter thing to be true or that one needs to assume this in order to support effective altruism. I’m presuming that you instead mean something like: this needs to be true for engaging in effective altruism to be successful/interesting/worthwhile. In line with this interpretation, you note in the interview something that I was going to raise as another objection: that if everyone were already acting in an effective altruist way, then it would be likely false that the high impact things we discover are different from those that people typically do.
If so, then it may not be false to say that “The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do”, but it seems bound to lead to confusion, with people misreading this as EAs assuming that the highest-impact things are not what people typically do. It’s also not clear that this premise needs to be true for the project of effective altruism to be worthwhile and, indeed, a thing people should do: it seems like it could be the case that people who want to contribute to the common good should engage in the project of effective altruism simply because it could be the case that the highest-impact actions are not those which people would typically do.
Hi David, just a very quick reply: I agree that if the first two premises were true, but the third were false, then EA would still be important in a sense, it’s just that everyone would already be doing EA, so we wouldn’t need a new movement to do it, and people wouldn’t increase their impact by learning about EA. I’m unsure about how best to handle this in the argument.
Just to be clear, this is only a small part of my concern about it sounding like EA relies on assuming (and/or that EAs actually do assume) that the things which are high impact are not the things people typically already do.
One way this premise could be false, other than everyone being an EA already, is if it turns out that the kinds of things people who want to contribute to the common good typically do are actually the highest-impact ways of contributing to the common good, i.e. we investigate, as effective altruists, and it turns out that the kinds of things people typically do to contribute to the common good are (the) high(est) impact. [^1]
To the non-EA reader, it likely wouldn’t seem too unlikely that the kinds of things they typically do are actually high impact. So it may seem peculiar and unappealing for EAs to just assume [^2] that the kinds of things people typically do are not high impact.
[^1] A priori, one might think there are some reasons to presume in favour of this (and so against the EA premise), e.g. James Scott-type reasons, deference to common opinion etc.
[^2] As noted, I don’t think you actually do think that EAs should assume this, but labelling it as a “premise” in the “rigorous argument for EA” certainly risks giving that impression.
“The greater the degree of spread, the more you’re giving up by not searching.” This makes sense. But I don’t think you have to agree with the “big” part of premise 1 to support and engage with the project of effective altruism, e.g. you could think that there are small differences but those differences are worth pursuing anyway. The “big” part seems like a common claim within effective altruism but not necessarily a core component?
(You didn’t claim explicitly in the post above that you have to agree with the “big” part but I think it’s implied? I also haven’t listened to the episode yet.)
I’d say that pursuing the project of effective altruism is worthwhile only if the opportunity cost of searching, C, is justified by the amount of additional good you do as a result of searching for better ways to do good rather than going by common sense, A. It seems to me that if C >= A, then pursuing the project of EA wouldn’t be worth it. If, however, C < A, then pursuing the project of EA would be worth it, right?
To be more concrete let us say that the difference in value between the commonsense distribution of resources to do good and the ideal might be only 0.5%. Let us also assume it would cost you only a minute to find out the ideal distribution and that the value of spending that minute in your commonsense way is smaller than getting that 0.5% increase. Surely it would still be worth seeking the ideal distribution (=engaging in the project of EA), right?
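To put that example in symbols (a rough sketch; G is just my shorthand for the total value of the commonsense distribution):

$$A = 0.005 \times G, \qquad C = \text{value of one minute spent the commonsense way}, \qquad \text{search is worthwhile} \iff A > C$$

So as long as the value of that minute is below 0.5% of G, seeking the ideal distribution comes out ahead.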
I like the idea of thinking about it quantitatively like this.
I also agree with the second paragraph. One way of thinking about this is that if identifiability is high enough, it can offset low spread.
The importance of EA is proportional to the product of the degrees to which the three premises hold.
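Very roughly, and just as a gloss on that sentence:

$$\text{importance of EA} \;\propto\; \text{spread} \times \text{identifiability} \times \text{novelty}$$

where each factor stands for the degree to which that premise holds.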
I don’t think I would have the patience for EA thinking if the spread weren’t big. Why bother with a bunch of sophisticated-looking models and arguments to only make a small improvement in impact? Surely it’s better to just get out there and do good?
Depends. As Ben and Aaron explain in their comments, high identifiability should in theory be able to offset low spread. In other words, if the opportunity cost of engaging in EA thinking is small enough, it might be worth engaging in it even if the gain from doing so is also small.
Certainly there’s a risk that it turns into a community-wide equivalent of procrastination if the spreads are low. Would love someone to tackle that rigorously and empirically!
Hi Jamie,
I think it’s best to think about the importance of EA as a matter of degree. I briefly mention this in the post, where I say it’s more of a mistake not to pursue the project of effective altruism the greater the degree to which each of the premises holds.
I agree that if there were only, say, 2x differences in the impact of actions, EA could still be very worthwhile. But it wouldn’t be as important as in a world where there are 100x differences. I talk about this a little more in the podcast.
I think ideally I’d reframe the whole argument to be about how important EA is rather than whether it’s important or not, but the phrasing gets tricky.
Thank you so much for the podcast Ben (and Arden!), it made me excited to see more podcasts and posts of the format ‘explain basic frameworks and/or assumptions behind your thinking’. I particularly appreciated that you mentioned that regression to the mean has a different meaning in a technical statistical context than the more colloquial EA one you used.
One thing I have been thinking about since reading the podcast: if I understood correctly, you explicitly define increasing the amount of good you do by spending more of your resources as not part of the core idea of EA, which only covers increasing the amount of good done per unit of resources. It was not entirely clear to me how large a role you think increasing the amount of resources people spend on doing good should play in the community.
I think I have mostly thought of increasing, or meeting an unusually high threshold of, resources spent on doing good as an important part of EA culture, but I am not sure whether others view it the same way. I’m also not sure whether considering it as such is conducive to maximizing overall impact.
Anyway, this is not an objection, my thoughts are a bit confused and I’m not sure whether I’m actually properly interacting with something you said. I just wanted to express a weak level of surprise and that this part of your definition felt notable to me.
This is helpful. Might be worth defining EA as a movement that realises premises 1, 2, 3 are partially true, and that even if there are small differences on each, it is worth being really careful and deliberate about what we do and how much.
There was also something attractive to me as a young person many moons ago about Toby Ord & Will MacAskill’s other early message—which is perhaps a bit more general / not specific to EA—that there are some really good opportunities to promote the common good out there, and they are worth pursuing (perhaps this is the moral element that you’re trying to abstract from?).
There is still little writing about what the fundamental claims of EA actually are, or research to investigate how well they hold, or work to communicate such claims. This post is one of the few attempts, so I think it’s still an important piece. I would still really like people to do further investigation into the questions it raises.
I’m a bit confused by the definition of the ‘common good’ as what most increases welfare from an impartial perspective, because “what most increases welfare” is describing an action, which seems like the wrong type of thing for “the common good” to be. Do you instead mean that the common good is impartial welfare, or similar? This also seems more in line with Will.
One other quibble:
I’m not sure we actually want “relative to their cost” here. On one hand, it might be the case that the actions which are most cost-effective at doing good actually do very little good, but are also very cheap (e.g. see this post by Hanson). Alternatively, maybe the most cost-effective actions are absurdly expensive, so that knowing what they are doesn’t help us.
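A toy illustration of the first worry (made-up numbers):

$$\frac{2 \text{ units of good}}{\$1} \;>\; \frac{1{,}000{,}000 \text{ units of good}}{\$1{,}000{,}000}$$

so the cheap action is the more cost-effective one, even though in absolute terms it contributes almost nothing.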
Rather, it seems better to just say “the search for the actions that do the most to contribute to the common good, given limited resources”. Or even just leave implicit that there are resource constraints.
I think the first argument can be rescued by including search costs in the “cost” definition. I agree that the second one cannot be, and is a serious issue with this phrasing.
Thanks Ben, even though I’ve been involved for a long time, I still found this helpful.
Nitpick: was the acronym intentionally chosen to spell “SIN”? Even though that makes me laugh, it seems a little cutesy.