This might be the best feedback I’ve ever gotten on a piece of writing (On the Philosophical Foundations of EA). Thanks for reading so many entries and helping make the contest happen!
Appreciate the kind words!
re how EA considerations would change under different ethical theories: at the end of the piece I gesture towards the idea that a philosophically entrepreneurial EA might work out a system under which the numbers matter for Kantians when enacting the duty of beneficence. This Kantian EA would look a lot like any other EA in caring about maximizing QALYs when doing their charitable giving, except that they might philosophically object to ever deliberately inflicting harm in order to help others (though they might accept merely foreseen harms). So definitely no murderous organ harvesting or any similar scenario that would have you use people as a means to maximizing utility (obviously not something any EAs are advocating for, but something that straight consequentialism could theoretically require). Conversely (and very speculatively), as I mention in the piece, Kantian EAs might prioritize meat production over the harvesting of animal products as a cause in light of the intent/foresight distinction.

And then even Kantianism aside, I think that EAs could potentially make conversations around applied ethics more productive by grounding the conversation in a foundational ethical theory instead of merely exchanging intuition pumps.
Wow that’s a fascinating connection/parallel – thank you so much for sharing! Anything else you’d recommend reading in that literature? Am very curious about any other similarities between Madhyamaka Buddhism and Kantian thought
Also, regarding persuading non-consequentialists on their own terms, I’ve long been meaning to write a post (tentatively) titled “Judicious Duty: Effective Altruism for Non-Consequentialists”, so this is giving me additional motivation to eventually do so :)
That sounds super interesting – definitely write it! If you ever want someone to read a draft or something, shoot me a dm!
Thanks for the thoughtful response! :)
Thanks for reading – pretty cool to get an academic philosopher’s perspective! (I also really enjoyed your piece on Nietzsche!)
I think this is right, but I’d argue that though all of the theories have to deal with the same questions, the answers to those questions will depend on the specific infrastructure each theory offers. A Kantian, for example, has access to the intent/foresight distinction when weighing beneficence in a way the consequentialist does not. And whether your ethical theory treats personhood as an important concept might dictate whether death counts as a harm or if only pain does.
I like this response, but I think in broadening the scope of the question, you make it harder to reach the conclusion. Without already accepting consequentialism, it’s not clear that I’d primarily optimize the world I’m designing along welfare considerations as opposed to any other possible value system.
And from the link
I think a Kantian would respond that what constitutes succeeding at achieving a goal in any practical domain is dictated by the nature of the activity. If your goal is to win a basketball game, you cannot simply manipulate the scoreboard at the end so that it says you have a higher score than your opponent: you must get the ball in the hoop more times than the opposing team (modulo the complexity of free throws and 3-point shots) while abiding by the rules of the game. The ways in which you can successfully achieve the goal are inherently constrained.
Furthermore, prudence seems like a bad case to consider because we do not automatically take prudential reasoning to be normative. We can instrumentally reason about how to achieve an end, but the fact that certain means will help us achieve our end does not imply that we ought to take those means – we need a way to reason about ends themselves.