Reading the section on utilitarianism here, I find it hard to tell exactly what practices you want the community to give up or do less of, and in what circumstances, beyond stopping making strong moral demands on people. (Something which it's far from clear only utilitarianism, and not other moral theories, could be used to justify*.) Suppose an EA org does formal cost-benefit analysis comparing the amount of suffering prevented by two different animal welfare interventions. Is that already “utilitarianism” and bad? If not, what features have to be added to make it bad? If it turns out that the answer is actually “not properly taking into account uncertainty about the accuracy of your cost-benefit analysis”, I don’t think it’s news to people that they have to do that, whether or not they are actually good at doing so in practice.
Or is what you want with regard to utilitarianism not a change in EA practice around selecting interventions, but just for people to stop saying positive things about utilitarianism as a philosophical tradition? To desist from writing academic work that defends it?
I mostly agree, and admittedly I don’t have very good suggestions for what I want the community to do; I also wish the report had been able to offer more in that regard. My main claim around utilitarianism is that (1) the movement aligns strongly with utilitarian thought and practices, and (2) this ideological belief has motivated many harmful actions.
What I want is for the movement and community to recognize these harms and take this information into account when thinking about ‘EA beliefs’ and being a part of EA.
I feel like recognising the potential harms of taking utilitarianism too far, such as “the ends justify the means” reasoning, is something that is regularly done in the movement?
I don’t know if you were around during the FTX scandal, but people here felt betrayed that SBF committed massive fraud and heavily criticised it (this is one of the highest-rated posts on the forum).
Several of the posts you use that mention the risks come from very influential EA people (like Holden Karnofsky). Are you sure that this is something that is ignored by the movement?
*https://research-repository.st-andrews.ac.uk/bitstream/handle/10023/1568/Ashford2003-Ethics113-Demandingness.pdf?sequence=1&isAllowed=y