[1] belief in utilitarianism or a utilitarian-esque moral system, that there exists an optimal world we should aspire to.
[2] belief that we should try to do as much good as possible
I would say that is a reasonable descriptive claim about the core beliefs of many in the community (especially the more hardcore members), but IMHO neither is what “EA is about”.
I don’t see EA as making claims about what we should do or believe.
I see it as a key question—“how can we do the most good with any given unit of resource we devote to doing good?”—and then acting on what we find when we ask it.
The community and research field have certain tools they often use (e.g. using scientific evidence when it’s available, using expected-value reasoning), and many people share certain philosophical beliefs (e.g. that outcomes are morally important), but IMHO these aren’t what “EA is about” either.
I see [EA] as a key question of “how can we do the most good with any given unit of resource we devote to doing good” and then taking action upon what we find when we ask that.
I also consider this question to be the core of EA, and I have said things like the above to defend EA against the criticism that it’s too demanding. However, I have since come to think that this characterization is importantly incomplete, for at least two reasons:
It’s probably inevitable, and certainly seems to be the case in practice, that people who are serious about answering this question overlap a lot with people who are serious about devoting maximal resources to doing good. Both in the sense that they’re often the same people, and in the sense that even when they’re different people, they’ll share a lot of interests and it might make sense to share a movement.
Finding serious answers to this question can cause you to devote more resources to doing good. I feel very confident that this happened to me, for one! I don’t just donate to more effective charities than the version of me in a world with no EA analysis, I also donate a lot more money than that version does. I feel great about this, and I would usually frame it positively—I feel more confident and enthusiastic about the good my donations can do, which inspires me to donate more—but negative framings are available too.
So I think it can be a bit misleading to imply that EA is only about this key question of per-unit maximization, and contains no upwards pressures on the amount of resources we devote to doing good. But I do agree that this question is a great organizing principle.
Thanks Erin! I wouldn’t say that EA is only about the key question, I just disagree that utilitarianism and an obligation to maximise are required or ‘what EA is about’. I do agree that they are prevalent (and often good to have some upwards pressure on the amount we devote to doing good) 😀