This is a wonderful critique—I agreed with it much more than I thought I would.
Fundamentally, EA is about two things. The first is a belief in utilitarianism or a utilitarian-esque moral system: that there exists an optimal world we should aspire to. This is a belief I take to be pretty universal, whether people want to admit it or not.
The second part of EA is the belief that we should try to do as much good as possible. Emphasis on “try”: there is a subtle distinction between “hoping to do the most good” (the previous paragraph) and “actively trying to do the most good”. This piece points out many ways in which the latter does not actually lead to the former. The focus on quantifying impact leads to a disproportionately male and white community, to a reliance on nonprofits that tend to be less sustainable, and to the outsourcing of intellectual work to individual decision-makers, among other things.
But the question of “does trying to optimize impact actually lead to optimal outcomes?” is just an epistemic one. The critiques mentioned here are counter-arguments, and there are numerous arguments in favor that many others have made. More importantly, this is a question on which we have some actual evidence, and I feel this piece understates the substantial work EA has already done. We have very good evidence that GiveWell charities have an order of magnitude higher impact than the average charity. We are supporting animal welfare policies that have won major victories in state referenda. We have good reason to believe AI safety is a horribly neglected issue that we need to work on.
This isn’t just a theoretical debate. We know we are doing better work than the average altruistic person outside the community. Effective Altruism is working.
[1] belief in utilitarianism or a utilitarian-esque moral system: that there exists an optimal world we should aspire to.
[2] belief that we should try to do as much good as possible
I would say that is a reasonable descriptive claim about the core beliefs of many in the community (especially the more hardcore members), but IMHO neither is what “EA is about”.
I don’t see EA as making claims about what we should do or believe.
I see it as a key question of “how can we do the most good with any given unit of resource we devote to doing good” and then taking action upon what we find when we ask that.
The community and research field have certain tools they often use (e.g. using scientific evidence when it’s available, using expected value reasoning) and many people who share certain philosophical beliefs (e.g. that outcomes are morally important), but IMHO these aren’t what “EA is about”.
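To make the expected value point concrete, here is a minimal sketch of what that kind of comparison looks like; the options and numbers below are entirely hypothetical, chosen only to illustrate the calculation, not to estimate any real charity:

```python
# A rough sketch of expected value reasoning. The options and numbers
# are made up purely for illustration; nothing here is a real estimate.

def expected_value(outcomes):
    """Sum of probability * value over an option's possible outcomes."""
    return sum(p * v for p, v in outcomes)

# Option A: a safe bet that reliably produces a modest amount of good.
option_a = [(1.0, 100)]               # certain: 100 units of good

# Option B: a long shot at a much larger payoff.
option_b = [(0.05, 5000), (0.95, 0)]  # 5% chance of 5000 units, else nothing

print(expected_value(option_a))  # 100.0
print(expected_value(option_b))  # 250.0, higher EV despite the uncertainty
```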
I see [EA] as a key question of “how can we do the most good with any given unit of resource we devote to doing good” and then taking action upon what we find when we ask that.
I also consider this question to be the core of EA, and I have said things like the above to defend EA against the criticism that it’s too demanding. However, I have since come to think that this characterization is importantly incomplete, for at least two reasons:
It’s probably inevitable, and certainly seems to be the case in practice, that people who are serious about answering this question overlap a lot with people who are serious about devoting maximal resources to doing good. Both in the sense that they’re often the same people, and in the sense that even when they’re different people, they’ll share a lot of interests and it might make sense to share a movement.
Finding serious answers to this question can cause you to devote more resources to doing good. I feel very confident that this happened to me, for one! I don’t just donate to more effective charities than the version of me in a world with no EA analysis, I also donate a lot more money than that version does. I feel great about this, and I would usually frame it positively—I feel more confident and enthusiastic about the good my donations can do, which inspires me to donate more—but negative framings are available too.
So I think it can be a bit misleading to imply that EA is only about this key question of per-unit maximization, and contains no upwards pressures on the amount of resources we devote to doing good. But I do agree that this question is a great organizing principle.
Thanks Erin! I wouldn’t say that EA is only about the key question; I just disagree that utilitarianism and an obligation to maximise are required, or ‘what EA is about’. I do agree that they are prevalent (and that it’s often good to have some upwards pressure on the amount we devote to doing good) 😀
Is “better than average” really that good? Most people and projects with a really high positive impact were not related to EA (even if only because their impact happened before EA existed). It certainly doesn’t seem like “our version of EA” is necessary for having the most impact. Whether it’s sufficient, we don’t know yet.