Yes! I don’t deny the positive impact that has come from EA and its focus on quantification, and I have tried to touch on that in the conclusion section as well. I very much believe everyone would benefit from better use of quantification, evidence, and rationality.
I’m not sure I have the arguments or evidence to say whether EA’s utilitarian influence is net positive or negative (and I’ve seen arguments in both directions), but that’s not my point here. I’m not arguing from a utilitarian basis. I’m trying to paint a picture of the scope and severity of utilitarian thought’s negative impacts on EA, to help EAs and others invested in doing good better evaluate the impacts and viability of such an ideology. This is only meant to be one piece of the puzzle.
Your executive summary (quoted below) appears to outright assert that quantification is “harmful” and “results in poor decision making”. I don’t think those claims are well-supported.
If you paint a picture that focuses only on negatives and ignores positives, it’s apt to be a very misleading picture. There may be ways to frame such a project so that it comes off as just “one piece of the puzzle” rather than as trying to bias its readership towards a negative judgment. But it’s an inherently risky and difficult undertaking (prone to moral misdirection), and I don’t feel that the rhetorical framing of this article succeeds in conveying such neutrality.
A Utilitarian Ideology
The EA ideology, a set of moral ideas, values, and practices, includes problematic and harmful ideas. Specifically, the ideology ties morality to quantified impact, which results in poor decision making; encourages ends-justify-the-means reasoning; and disregards individuality, resulting in crippling responsibility for individuals and burnout.