I used to think that the exact philosophical axiologies and the handling of corner cases were really important to guide altruistic action, but I now think that many good things are robustly good under most reasonable moral frameworks.
these practical and intuitive methods are ultimately grounded in Singer's deeply counterintuitive moral premises.
I don't think this is necessarily true. Many (I would argue most) other moral premises can lead you to value preventing child deaths or stunting, limiting the suffering of animals in factory farms, or ensuring future generations live positive, meaningful lives.
@WillieG mentioned Christianity, and indeed, EA for Christians includes many people who care deeply about helping others and come from a very different moral background. (I think they sometimes mention this parable)
within the EA community, beyond working on their own projects, do people have the tendency to remind & suggest to others "what they could have done but didn't?"
I don't have an answer to this question, but you might like these posts: Invisible impact loss (and why we can be too error-averse) and Uncertain Optimizing and Opportunity Costs
I think people regularly do encourage themselves and others to consider opportunity costs and counterfactuals, but I don't think it's specific to the EA community.
The principle becomes more challenging to accept when Singer extends it to a particular edge case.
I think this is the nature of edge cases. I don't think you need to agree with Singer on edge cases to value helping others. This vaguely reminded me of this Q&A answer from Derek Parfit, where he very briefly talks about borderline cases and normative truths.
I do think things get trickier for e.g. shrimp welfare and digital sentience, and in those cases philosophical considerations are really important. But in my opinion the majority of EA work is not particularly sensitive to one's stance on utilitarianism.