I’m interested in hearing more about the cases you found for and against EA ideas/arguments applying without utilitarianism. I personally am very much a consequentialist but not necessarily fully utilitarian, so I’m curious both for myself and as a community builder. I’m not a philosopher, so my footing is probably much less certain than yours.
Concerning the case against EA: I was a moral antirealist for a while, and since I thought there were no moral truths, I concluded that we are not obligated to donate to charity, pick a highly impactful career, etc. I also thought that even if there were objective moral truths, the true theory would certainly not be utilitarianism (given all the counterexamples, such as the utility monster and the experience machine). I mistakenly thought this would completely disqualify Peter Singer’s pond analogy/argument.
My journey in three steps:
About a year ago, I read Michael Huemer’s Knowledge, Reality, and Value, then his Ethical Intuitionism since his ethical arguments sparked my curiosity. This convinced me of moral realism—specifically moral intuitionism. This is, roughly, the view in metaethics that we come to know moral truths through our moral intuition.
Using my moral intuition, the case against utilitarianism (and consequentialism) seems very strong. There are some cases (the utility monster, the experience machine, the sheriff who sacrifices one innocent to save the town, etc.) against which I have very strong moral intuitions. Some kind of deontology (such as Ross’ prima facie duties theory) makes much more sense to me.
Revisiting Peter Singer’s pond analogy/argument made me realize that Singer does not have utilitarianism, or even consequentialism, as a premise. The idea that one ought to prevent suffering when one can do so without significant sacrifice is one that any plausible moral view will accept, consequentialist or not. For example, the principle of beneficence is one of Ross’ prima facie duties, and that principle is all one needs to accept for effective altruism to get off the ground, so to speak.
“I am very much not a utilitarian (though I think consequences are very important)”

“Using my moral intuition, the case against utilitarianism (and consequentialism) seems very strong”
I’m wondering how to square these statements regarding your attitude towards consequentialism (as opposed to utilitarianism). I suppose you’re saying you think consequences are very important, yet you aren’t a consequentialist in the way most people who call themselves that use/define the term?
Yes, I think consequences are very important, but I am not a consequentialist. Consequentialists claim that only consequences matter, morally speaking. I disagree. I think things like virtue, autonomy, justice, fidelity, and so on also matter, in addition to consequences.
Thanks for clarifying, seems similar to 80K’s view.