Nice post, thanks for writing this! Despite being an ethical theorist myself, I actually think the central thrust of your message is mistaken, and that the precise details of ethical theory don't much affect the basic case for EA. This is something I've written about under the banner of "beneficentrism".
A few quick additional thoughts:
(1)
The EA community is really invested in problems of applied consequentialist ethics such as "how should we think about low probability / high, or infinite, magnitude risks", "how should we discount future utility", "ought the magnitudes of positive and negative utility be weighed equally", etc.
These are problems of applied beneficence. Unless your moral theory says that consequences don't matter at all (which would be, as Rawls himself noted, "crazy"), you'll need answers to these questions no matter what ethical-theory tradition you're working within.
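To make this concrete, here's a minimal sketch (all numbers entirely made up for illustration) of how the verdict of a simple expected-value comparison flips depending on how one answers the discounting and low-probability/high-magnitude questions:

```python
# Illustrative only: hypothetical numbers showing that any view which gives
# consequences some weight must take a stand on discounting and on
# low-probability / high-magnitude risks.

def discounted_expected_value(prob, magnitude, years_away, discount_rate):
    """Expected value of an uncertain future outcome, discounted to the present."""
    return prob * magnitude * (1 - discount_rate) ** years_away

# Option A: a certain, modest, near-term benefit.
a = discounted_expected_value(prob=1.0, magnitude=1_000, years_away=0,
                              discount_rate=0.03)

# Option B: a one-in-a-million chance of an enormous benefit a century out.
b_no_discount = discounted_expected_value(1e-6, 10**10, 100, discount_rate=0.0)
b_discounted = discounted_expected_value(1e-6, 10**10, 100, discount_rate=0.03)

print(f"A (certain, near-term): {a:.1f}")          # ≈ 1000.0
print(f"B, zero discount rate:  {b_no_discount:.1f}")  # ≈ 10000.0: B dominates
print(f"B, 3% discount rate:    {b_discounted:.1f}")   # ≈ 475.5: now A dominates
```

Nothing in this sketch assumes utilitarianism; any theory that counts consequences at all needs a principled setting for both parameters before it can adjudicate between A and B.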
(2) re: arguments for utilitarianism (and responses to objections), check out utilitarianism.net's respective chapters.
(3) re: Harsanyi / "Each of the starting questions I've imagined clearly load the deck in terms of the kinds of answers that are conceptually viable." This seems easily avoided by instead asking which world one would rationally prefer from behind the veil of ignorance. (Whole possible worlds build in all the details, so do not artificially limit the potential for moral assessment in any way.)
(4) "Morality is at its core a guide for individuals to choose what to do." Agreed! I'd add that noting the continuity between ethical choice and rational choice more broadly is something that strongly favours consequentialism.
Thanks for reading -- pretty cool to get an academic philosopher's perspective! (I also really enjoyed your piece on Nietzsche!)
These are problems of applied beneficence. Unless your moral theory says that consequences don't matter at all (which would be, as Rawls himself noted, "crazy"), you'll need answers to these questions no matter what ethical-theory tradition you're working within.
I think this is right, but I'd argue that though all of the theories have to deal with the same questions, the answers will depend on the specific infrastructure each theory offers. A Kantian, for example, has access to the intent/foresight distinction when weighing beneficence in a way the consequentialist does not. Whether your ethical theory treats personhood as an important concept might dictate whether death is a harm or whether only pain counts as a harm.
(3) re: Harsanyi / "Each of the starting questions I've imagined clearly load the deck in terms of the kinds of answers that are conceptually viable." This seems easily avoided by instead asking which world one would rationally prefer from behind the veil of ignorance. (Whole possible worlds build in all the details, so do not artificially limit the potential for moral assessment in any way.)
I like this response, but I think in broadening the scope of the question, you make it harder to reach the conclusion. Without already accepting consequentialism, it's not clear that I'd primarily optimize the world I'm designing along welfare considerations as opposed to any other possible value system.
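A toy sketch of the worry (the worlds and evaluative standards are entirely invented for illustration): the very same pair of candidate worlds gets ranked differently depending on which value system the chooser brings with them behind the veil:

```python
# Hypothetical worlds, each described only by its individuals' welfare levels
# and a count of rights violations. Different evaluative standards the veiled
# chooser might hold rank the same worlds differently.

worlds = {
    "world_1": {"welfare": [10, 10, 10, 10], "rights_violations": 2},
    "world_2": {"welfare": [30, 20, 1, 1], "rights_violations": 0},
}

def total_welfare(w):   # utilitarian-style standard: maximize the sum
    return sum(w["welfare"])

def worst_off(w):       # Rawlsian maximin: maximize the worst position
    return min(w["welfare"])

def fewest_wrongs(w):   # crude deontological proxy: minimize violations
    return -w["rights_violations"]

for label, standard in [("total welfare", total_welfare),
                        ("maximin", worst_off),
                        ("fewest wrongs", fewest_wrongs)]:
    best = max(worlds, key=lambda name: standard(worlds[name]))
    print(f"{label}: prefer {best}")
# total welfare: prefer world_2
# maximin: prefer world_1
# fewest wrongs: prefer world_2
```

The veil-of-ignorance framing fixes the choice situation, but not the standard of evaluation, so it doesn't by itself deliver the welfare-maximizing ranking.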
(4) "Morality is at its core a guide for individuals to choose what to do." Agreed! I'd add that noting the continuity between ethical choice and rational choice more broadly is something that strongly favours consequentialism.
And from the link:
As Scheffler (1985) argued, rational choice in general tends to be goal-directed, a conception which fits poorly with deontic constraints. A deontologist might claim that their goal is simply to avoid violating moral constraints themselves, which they can best achieve by not killing anyone, even if that results in more others being killed…
Scheffler's challenge remains that such a proposal makes moral norms puzzlingly divergent from other kinds of practical norms. If morality sometimes calls for respecting value rather than promoting it, why is the same not true of prudence?
I think a Kantian would respond that what constitutes succeeding at achieving a goal in any practical domain is dictated by the nature of the activity. If your goal is to win a basketball game, you cannot simply manipulate the scoreboard at the end so that it says you have a higher score than your opponent: you must get the ball in the hoop more times than the opposing team (modulo the complexity of free throws and 3-point shots) while abiding by the rules of the game. The ways in which you can successfully achieve the goal are inherently constrained.
Furthermore, prudence seems like a bad case to consider because we do not automatically take prudential reasoning to be normative. We can instrumentally reason about how to achieve an end, but that certain means will help us get our end does not imply that we ought to take those means; we need a way to reason about ends.