Morality vs related concepts

Cross-posted to LessWrong.

How can you know I’m talking about morality (aka ethics), rather than something else, when I say that I “should” do something, that humanity “ought” to take certain actions, or that something is “good”? What are the borderlines and distinctions between morality and the various potential “something else”s? How do they overlap and interrelate?

In this post, I try to collect together and summarise philosophical concepts that are relevant to the above questions.[1] I hope this will benefit readers by introducing them to some thought-clarifying conceptual distinctions they may not have been aware of, as well as terms and links they can use to find more relevant info. In another post, I similarly discuss how moral uncertainty differs from and overlaps with related concepts.

Epistemic status: The concepts covered here are broad, fuzzy, and overlap in various ways, making definitions and distinctions between them almost inevitably debatable. Additionally, I’m not an expert in these topics; indeed, I expect many readers to know more than me about at least some of them, and one reason I wrote this was to help me clarify my own understandings. I’d appreciate feedback or comments in relation to any mistakes, poor phrasings, etc. (and just in general!).

Also note that my intention here is mostly to summarise existing ideas, rather than to provide original ideas or analysis.

Normativity

A normative statement is any statement related to what one should do, what one ought to do, which of two things is better, or similar. “Something is said by philosophers to have ‘normativity’ when it entails that some action, attitude or mental state of some other kind is justified, an action one ought to do or a state one ought to be in” (Darwall). Normativity is thus the overarching category (superset) of which things like morality, prudence (in the sense explained below), and arguably rationality are just subsets.

This matches the usage of “normative” in economics, where normative claims relate to “what ought to be” (e.g., “The government should increase its spending”), while positive claims relate to “what is” (including predictions, such as what effects an increase in government spending may have). In linguistics, the equivalent distinction is between prescriptive approaches (involving normative claims about “better” or “correct” uses of language) and descriptive approaches (which are about how language is used).

Prudence

Prudence essentially refers to the subset of normativity that has to do with one’s own self-interest, happiness, or wellbeing (see here and here). This contrasts with morality, which may include but isn’t limited to one’s self-interest (except perhaps for egoist moral theories).

For example (based on MacAskill p. 41), we may have moral reasons to give money to GiveWell-recommended charities, but prudential reasons to spend the money on ourselves, and both sets of reasons are “normatively relevant” considerations.

(The rest of this section is my own analysis, and may be mistaken.)

I would expect that the significance of prudential reasons, and how they relate to moral reasons, would differ depending on the moral theories one is considering (e.g., depending on which moral theories one has some belief in). Considering moral and prudential reasons separately does seem to make sense in relation to moral theories that don’t precisely mandate specific behaviours; for example, moral theories that simply forbid certain behaviours (e.g., violating people’s rights) while otherwise letting one choose from a range of options (e.g., donating to charity or not).[2]

In contrast, “maximising” moral theories like classical utilitarianism claim that the only action one is permitted to take is the very best action, leaving no room for choosing the “prudentially best” action out of a range of “morally acceptable” actions. Thus, in relation to maximising theories, it seems like keeping track of prudential reasons in addition to moral reasons, and sometimes acting based on prudential rather than moral reasons, would mean that one is effectively either:

  • using a modified version of the maximising moral theory (rather than the theory itself), or

  • acting as if “morally uncertain” between the maximising moral theory and a “moral theory” in which prudence is seen as “intrinsically valuable”.

Either way, the boundary between prudence and morality seems to become fuzzier or less meaningful in such cases.[3]

(Instrumental) Rationality

(This section is sort-of my own analysis, and may be mistaken or use terms in unusual ways.)

Bykvist (2017):

Rationality, in one important sense at least, has to do with what one should do or intend, given one’s beliefs and preferences. This is the kind of rationality that decision theory often is seen as invoking. It can be spelled out in different ways. One is to see it as a matter of coherence: It is rational to do or intend what coheres with one’s beliefs and preferences (Broome, 2013; for a critic, see Arpaly, 2000).

Using this definition, it seems to me that:

  • Rationality can be considered a subset of normativity in which the “should” statements, “ought” statements, etc. follow in a systematic way from one’s beliefs and preferences.

  • Whether a “should” statement, “ought” statement, etc. is rational is unrelated to the balance of moral or prudential reasons involved. E.g., what I “rationally should” do relates only to morality and not prudence if my preferences relate only to morality and not prudence, and vice versa. (And situations in between those extremes are also possible, of course.)[4]

For example, the statement “Rationally speaking, I should buy a Ferrari” is true if (a) I believe that doing so will result in me possessing a Ferrari, and (b) I value that outcome more than I value continuing to have that money. And it doesn’t matter whether the reason I value that outcome is:

  • Prudential: based on self-interest;

  • Moral: e.g., I’m a utilitarian who believes that the best way I can use my money to increase universe-wide utility is to buy myself a Ferrari (perhaps it looks really red and shiny and my biases are self-serving the hell out of me);

  • Some mixture of the two.
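To make the “coherence” reading concrete, here’s a minimal toy sketch (my own illustration, with made-up names and numbers, not drawn from Bykvist or any particular decision theory text): the verdict about what I “rationally should” do is computed entirely from my beliefs and preferences, and nothing in the computation depends on whether those preferences are prudential or moral in origin.

```python
# Toy sketch (not from the post): instrumental rationality as doing whatever
# coheres with one's beliefs and preferences, whatever their source.

def rationally_should(actions, beliefs, utility):
    """Pick the action with the highest expected utility, given one's
    beliefs (outcome probabilities per action) and preferences (utilities)."""
    def expected_utility(action):
        return sum(p * utility[outcome] for outcome, p in beliefs[action].items())
    return max(actions, key=expected_utility)

# Beliefs: what I think each action leads to.
beliefs = {
    "buy_ferrari": {"own_ferrari": 0.99, "keep_money": 0.01},
    "keep_money":  {"own_ferrari": 0.00, "keep_money": 1.00},
}

# Preferences: how much I value each outcome. Nothing here records *why* I
# value owning a Ferrari -- self-interest, misguided utilitarian reasoning,
# or a mix -- and the rationality verdict is the same either way.
utility = {"own_ferrari": 10, "keep_money": 3}

print(rationally_should(["buy_ferrari", "keep_money"], beliefs, utility))
# -> buy_ferrari
```

On this toy picture, changing the verdict requires changing either the beliefs or the utilities; relabelling the utilities as “prudential” or “moral” changes nothing.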

Epistemic rationality

Note that the above discussion focused on instrumental rationality, but the same basic points could be made in relation to epistemic rationality, given that epistemic rationality itself “can be seen as a form of instrumental rationality in which knowledge and truth are goals in themselves” (LW Wiki).

For example, I could say that, from the perspective of epistemic rationality, I “shouldn’t” believe that buying that Ferrari will create more utility in expectation than donating the same money to AMF would. This is because holding that belief won’t help me meet the goal of having accurate beliefs.

Whether and how this relates to morality would depend on whether the “deeper reasons” why I prefer to have accurate beliefs (assuming I do indeed have that preference) are prudential, moral, or mixed.[5]

Subjective vs objective

Subjective normativity relates to what one should do based on what one believes, whereas objective normativity relates to what one “actually” should do (i.e., based on the true state of affairs). Greaves and Cotton-Barratt illustrate this distinction with the following example:

Suppose Alice packs the waterproofs but, as the day turns out, it does not rain. Does it follow that Alice made the wrong decision? In one (objective) sense of “wrong”, yes: thanks to that decision, she experienced the mild but unnecessary inconvenience of carrying bulky raingear around all day. But in a second (more subjective) sense, clearly it need not follow that the decision was wrong: if the probability of rain was sufficiently high and Alice sufficiently dislikes getting wet, her decision could easily be the appropriate one to make given her state of ignorance about how the weather would in fact turn out. Normative theories of decision-making under uncertainty aim to capture this second, more subjective, type of evaluation; the standard such account is expected utility theory.[6][7]
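To put rough numbers on the Alice example (my own made-up numbers, not anything from Greaves and Cotton-Barratt), here is the expected-utility comparison that can make her decision subjectively right even though it turns out objectively worse:

```python
# Toy numbers (mine): Alice's subjective evaluation before she knows the weather.
p_rain = 0.6  # the probability of rain Alice assigns

utility = {
    ("pack", "rain"):  -1,   # carries gear, stays dry
    ("pack", "dry"):   -2,   # carries bulky gear needlessly
    ("leave", "rain"): -10,  # gets soaked
    ("leave", "dry"):   0,   # best possible outcome
}

eu_pack  = p_rain * utility[("pack", "rain")]  + (1 - p_rain) * utility[("pack", "dry")]   # -1.4
eu_leave = p_rain * utility[("leave", "rain")] + (1 - p_rain) * utility[("leave", "dry")]  # -6.0

# Subjectively, packing is the right call (-1.4 > -6.0). Objectively, once the
# day turns out dry, leaving the gear would have been better (0 > -2).
print(eu_pack, eu_leave)
```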

This distinction can be applied to each subtype of normativity (i.e., morality, prudence, etc.).

(I discuss this distinction further in my post Moral uncertainty vs related concepts.)

Axiology

The term axiology is used in different ways, but the definition we’ll focus on here is from the Stanford Encyclopedia of Philosophy:

Traditional axiology seeks to investigate what things are good, how good they are, and how their goodness is related to one another. Whatever we take the “primary bearers” of value to be, one of the central questions of traditional axiology is that of what stuffs are good: what is of value.

The same article also states: “For instance, a traditional question of axiology concerns whether the objects of value are subjective psychological states, or objective states of the world.”

Axiology (in this sense) is essentially one aspect of morality/ethics. For example, classical utilitarianism combines:

  • the principle that one must take actions which will lead to the outcome with the highest possible level of value, rather than just doing things that lead to “good enough” outcomes, or just avoiding violating people’s rights

  • the axiology that “well-being” is what has intrinsic value

The axiology itself is not a moral theory, but plays a key role in that moral theory.

Thus, one can’t have an axiological “should” statement, but one’s axiology may influence/inform one’s moral “should” statements.
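As a rough illustration of that division of labour (my own sketch, with invented names and numbers, not a claim about how any particular philosopher would formalise it), one can think of a maximising moral theory as an axiology, a function saying how valuable each outcome is, plugged into a principle that says to pick the option whose outcome scores highest:

```python
# Toy sketch (mine): classical utilitarianism decomposed into an axiology
# plus a maximising principle. Names and numbers are invented.

def wellbeing_axiology(outcome):
    """Axiology: total well-being in the outcome is what has intrinsic value."""
    return sum(outcome["wellbeing"])

def maximising_should(options, axiology):
    """Maximising principle: the only permitted act is the one whose outcome
    the axiology ranks highest."""
    return max(options, key=lambda option: axiology(option["outcome"]))

options = [
    {"act": "donate",      "outcome": {"wellbeing": [5, 9, 9]}},  # value 23
    {"act": "buy_ferrari", "outcome": {"wellbeing": [9, 1, 1]}},  # value 11
]

print(maximising_should(options, wellbeing_axiology)["act"])  # -> donate
```

Swapping in a different axiology (say, one that counts preference satisfaction rather than well-being) would change the “should” verdicts without changing the maximising principle, which is the sense in which the axiology informs, but does not by itself constitute, the moral theory.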

Decision theory

(This section is sort-of my own commentary, may be mistaken, and may accidentally deviate from standard uses of terms.)

It seems to me that the way to fit decision theories into this picture is to say that one must add a decision theory to one of the “sources of normativity” listed above (e.g., morality) in order to get some form of normative (e.g., moral) statements. However, a decision theory can’t “generate” a normative statement by itself.

For example, suppose that I have a moral preference for having more money rather than less, all other things held constant (because I wish to donate it to cost-effective causes). By itself, this can’t tell me whether I “should” one-box or two-box in Newcomb’s problem. But once I specify my decision theory, I can say whether I “should” one-box or two-box. E.g., if I’m a causal decision theorist, I “should” two-box.

But if I knew only that I was a causal decision theorist, it would still be possible that I “should” one-box, if for some reason I preferred to have less money. Thus, we must specify (or assume) both a set of preferences and a decision theory in order to arrive at normative statements.
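Here is a highly stylised sketch of that point (my own toy model of Newcomb’s problem, with an assumed 99%-accurate predictor; it glosses over many subtleties in how causal and evidential decision theory are actually formulated): the preference “more money is better” stays fixed, and only the decision theory changes the verdict.

```python
# Toy model (mine) of Newcomb's problem with a 99%-accurate predictor.
# The preference (more money is better) is held fixed; only the decision
# theory changes which choice I "should" make.

ACCURACY = 0.99

def payoff(choice, opaque_box_filled):
    """Money received: the opaque box holds $1M iff the predictor filled it;
    two-boxing always adds the transparent $1,000."""
    return (1_000_000 if opaque_box_filled else 0) + (1_000 if choice == "two-box" else 0)

def evidential_value(choice):
    # EDT-style: treat my choice as evidence about what was predicted, so the
    # probability that the opaque box is filled depends on what I choose.
    p_filled = ACCURACY if choice == "one-box" else 1 - ACCURACY
    return p_filled * payoff(choice, True) + (1 - p_filled) * payoff(choice, False)

def causal_value(choice, p_filled=0.5):
    # CDT-style: the box's contents are already fixed and causally independent
    # of my choice, so the same probability applies to both options.
    return p_filled * payoff(choice, True) + (1 - p_filled) * payoff(choice, False)

for name, value_of in [("EDT", evidential_value), ("CDT", causal_value)]:
    print(name, "says I should", max(["one-box", "two-box"], key=value_of))
# EDT says I should one-box; CDT says I should two-box.
```

And flipping the preference (valuing having less money) would flip both verdicts, which is the sense in which preferences and a decision theory are both needed before any “should” falls out.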

Metaethics

While normative ethics addresses such questions as “What should I do?”, evaluating specific practices and principles of action, meta-ethics addresses questions such as “What is goodness?” and “How can we tell what is good from what is bad?”, seeking to understand the nature of ethical properties and evaluations. (Wikipedia)

Thus, metaethics is not directly normative at all; it isn’t about making “should”, “ought”, “better than”, or similar statements. Instead, it’s about understanding the “nature” of (the moral subset of) such statements, “where they come from”, and other such fun/spooky/nonsense/incredibly important matters.

Metanormativity

Metanormativity relates to the “norms that govern how one ought to act that take into account one’s fundamental normative uncertainty”. Normative uncertainty, in turn, is essentially a generalisation of moral uncertainty that can also account for (uncertainty about) prudential reasons. I will thus discuss the topic of metanormativity in my next post, on Moral uncertainty vs related concepts.

As stated earlier, I hope this usefully added to/clarified the concepts in your mental toolkit, and I’d welcome any feedback or comments!

(In particular, if you think there’s another concept whose overlaps with/distinctions from “morality” are worth highlighting, either let me know to add it, or just go ahead and explain it in the comments yourself.)


  1. This post won’t attempt to discuss specific debates within metaethics, such as whether or not there are “objective moral facts”, and, if there are, whether or not these facts are “natural”. Very loosely speaking, I’m not trying to answer questions about what morality itself actually is, but rather about the overlaps and distinctions between what morality is meant to be about and what other topics that involve “should” and “ought” statements are meant to be about.

  2. Considering moral and prudential reasons separately also seems to make sense for moral theories which see supererogation as possible; that is, theories which see some acts as “morally good although not (strictly) required” (SEP). If we only believe in such theories, we may often find ourselves deciding between one act that’s morally “good enough” and another (supererogatory) act that’s morally better but prudentially worse. (E.g., perhaps, occasionally donating small sums to whichever charity strikes one’s fancy, vs donating 10% of one’s income to charities recommended by Animal Charity Evaluators.)

  3. The boundary seems even fuzzier when you also consider that many moral theories, such as classical or preference utilitarianism, already consider one’s own happiness or preferences to be morally relevant. This arguably makes also considering “prudential reasons” look like simply “double-counting” one’s self-interest, or giving it additional “weight”.

  4. If we instead used a definition of rationality in which preferences must only be based on self-interest, then I believe rationality would become a subset of prudence specifically, rather than of normativity as a whole. It would still be the case that the distinctive feature of rational “should” statements is that they follow in a systematic way from one’s beliefs and preferences.

  5. Somewhat relevantly, Darwall writes: “Epistemology has an irreducibly normative aspect, in so far as it is concerned with norms for belief.”

  6. We could further divide subjective normativity up into, roughly, “what one should do based on what one actually believes” and “what one should do based on what it would be reasonable for one to believe”. The following quote is relevant (though doesn’t directly address that exact distinction):

    Before moving on, we should distinguish subjective credences, that is, degrees of belief, from epistemic credences, that is, the degree of belief that one is epistemically justified in having, given one’s evidence. When I use the term ‘credence’ I refer to epistemic credences (though much of my discussion could be applied to a parallel discussion involving subjective credences); when I want to refer to subjective credences I use the term ‘degrees of belief’.

    The reason for this is that appropriateness seems to have some sort of normative force: if it is most appropriate for someone to do something, it seems that, other things being equal, they ought, in the relevant sense of ‘ought’, to do it. But people can have crazy beliefs: a psychopath might think that a killing spree is the most moral thing to do. But there’s no sense in which the psychopath ought to go on a killing spree: rather, he ought to revise his beliefs. We can only capture that idea if we talk about epistemic credences, rather than degrees of belief.

    (I found that quote in this comment, where it’s attributed to Will MacAskill’s BPhil thesis. Unfortunately, I can’t seem to access the thesis, including via Wayback Machine.)

  7. It also seems to me that this “subjective vs objective” distinction is somewhat related to, but distinct from, ex ante vs ex post thinking.