Executive summary: This exploratory post argues that effective altruism is best understood as a form of maximizing, welfarist consequentialism—emphasizing the moral importance of outcomes that improve individual well-being—while acknowledging that most people, including effective altruists, blend multiple moral intuitions and may reject the framework's more extreme conclusions.
Key points:
Effective altruism is grounded in three philosophical pillars: consequentialism (judging actions by their outcomes), welfarism (valuing things only insofar as they affect well-being), and maximization (aiming to do as much good as possible).
Most people intuitively share these values to some extent, but effective altruists prioritize them more consistently, reducing reliance on other moral foundations like purity, authority, and loyalty.
Welfarism is broader than just happiness or pleasure, encompassing anything that benefits individuals—freedom, virtue, beauty, or even more idiosyncratic ideals.
Moral foundations theory helps explain how EA diverges from typical moral reasoning: while most people mix multiple moral intuitions, effective altruists largely elevate “Care” (helping others) above the rest.
This simplification makes EA morality seem intuitive yet radical, and enables EA tools (like GiveWell) to be useful even to those who don’t fully endorse EA values.
The author emphasizes pluralism and cooperation, suggesting that EA’s methods can support broader moral goals without demanding full philosophical alignment.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.