
Moral patienthood


Moral patienthood is the condition of deserving moral consideration. A moral patient is an entity that possesses moral patienthood.

While it is normally agreed that typical humans are moral patients, there is debate about the patienthood of many other types of beings, including human embryos, non-human animals, future people, and digital sentients.

Moral patienthood should not be confused with moral agency.[1] For example, we might think that a baby lacks moral agency—it lacks the ability to judge right from wrong, and to act on the basis of reasons—but that it is still a moral patient, in the sense that those with moral agency should care about their well-being.

If we assume a welfarist theory of the good, the question of patienthood can be divided into two sub-questions: Which entities can have well-being? And whose well-being is morally relevant? Each sub-question can in turn be broken down further: which characteristics or capacities are relevant, and which beings possess those capacities?

First, which entities can have well-being? A majority of scientists now agree that many non-human animals, including mammals, birds, and fish, are conscious and capable of feeling pain,[2] though this claim remains more contentious in philosophy.[3] The answer is vital for assessing the value of interventions aimed at improving farm and/or wild animal welfare. A smaller but growing field of study considers whether artificial intelligences might be conscious in morally relevant ways.[4]

Second, whose well-being is morally relevant? Some have argued that future beings have less value, even though they will then be just as conscious as today’s beings are now. This reduction could take the form of a discount rate on future value, so that experiences occurring one year from now are worth, say, 3% less than equivalent experiences occurring at present. Alternatively, it could take the form of valuing individuals who do not yet exist less than currently existing beings, for reasons related to the non-identity problem[5] (see also population ethics). It is contentious whether either approach is correct. Moreover, in light of the astronomical number of individuals who could potentially exist in the future, assigning some value to future people implies that virtually all value, at least on welfarist theories, will reside in the far future[6] (see also longtermism).
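To make the discounting arithmetic concrete, here is a minimal worked example (assuming the simple exponential form of discounting; the 3% figure is just the illustrative rate used above). An experience of undiscounted value $V$ occurring $t$ years from now would be valued at

\[
\mathrm{PV}(V, t) = V \, (1 - r)^{t}
\]

With $r = 0.03$, an experience 100 years away retains $(0.97)^{100} \approx 0.048$, i.e. under 5%, of its present value. Compounding is why even a modest annual discount rate all but eliminates the far future from the calculation, while a rate of zero lets the far future dominate.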

Further reading

Animal Ethics (2017) The relevance of sentience, Animal Ethics, September.

Bostrom, Nick & Eliezer Yudkowsky (2014) The ethics of artificial intelligence, in Keith Frankish & William M. Ramsey (eds.) The Cambridge Handbook of Artificial Intelligence, Cambridge: Cambridge University Press, pp. 316–334.

Kagan, Shelly (2019) How to Count Animals, More or Less, Oxford: Oxford University Press.

MacAskill, William & Darius Meissner (2020) The expanding moral circle, in Introduction to Utilitarianism.

Muehlhauser, Luke (2017) 2017 report on consciousness and moral patienthood, Open Philanthropy, June.

Tomasik, Brian (2014) Do artificial reinforcement-learning agents matter morally?, arXiv:1410.8233.

Related entries

axiology | consciousness research | moral circle expansion | moral weight | speciesism | valence

  1. ^ Wikipedia (2004) Distinction between moral agency and moral patienthood, in ‘Moral agency’, Wikipedia, September 25 (updated 14 November 2020).

  2. ^ Low, Philip et al. (2012) The Cambridge declaration on consciousness, Francis Crick Memorial Conference, July 7.

  3. ^ Allen, Colin & Michael Trestman (2016) Animal consciousness, in Edward Zalta (ed.), Stanford Encyclopedia of Philosophy.

  4. ^ Wikipedia (2003) Artificial consciousness, Wikipedia, March 13 (updated 24 April 2021).

  5. ^ Roberts, M. A. (2019) The nonidentity problem, in Edward Zalta (ed.), Stanford Encyclopedia of Philosophy.

  6. ^

The Subjective Experience of Time: Welfare Implications

Jason Schukraft · 27 Jul 2020 13:24 UTC · 113 points · 21 comments · 63 min read

Toby Ord’s The Scourge, Reviewed

ColdButtonIssues · 30 Aug 2022 21:01 UTC · 59 points · 25 comments · 4 min read

Comparisons of Capacity for Welfare and Moral Status Across Species

Jason Schukraft · 18 May 2020 0:42 UTC · 101 points · 63 comments · 59 min read

[Question] Why might one value animals far less than humans?

IsabelHasse · 8 Jun 2020 1:54 UTC · 28 points · 14 comments · 1 min read

Peter Singer: Non-human animal ethics

EA Global · 28 Aug 2015 17:21 UTC · 9 points · 0 comments · 1 min read
(www.youtube.com)

Radical Empathy

Holden Karnofsky · 16 Feb 2017 12:41 UTC · 73 points · 15 comments · 6 min read

How to Measure Capacity for Welfare and Moral Status

Jason Schukraft · 1 Jun 2020 15:01 UTC · 71 points · 19 comments · 41 min read

Interview with Jon Mallatt about invertebrate consciousness

Max_Carpendale · 28 Apr 2019 17:52 UTC · 83 points · 10 comments · 11 min read

Hi, I’m Luke Muehlhauser. AMA about Open Philanthropy’s new report on consciousness and moral patienthood

lukeprog · 28 Jun 2017 15:49 UTC · 32 points · 66 comments · 1 min read

Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time?

Jason Schukraft · 3 Aug 2020 13:30 UTC · 68 points · 20 comments · 26 min read

Jason Schukraft: Moral standing and cause prioritization

EA Global · 24 Oct 2020 19:56 UTC · 9 points · 0 comments · 1 min read
(www.youtube.com)

Differences in the Intensity of Valenced Experience across Species

Jason Schukraft · 30 Oct 2020 1:13 UTC · 93 points · 42 comments · 52 min read

EA reading list: utilitarianism and consciousness

richard_ngo · 7 Aug 2020 19:32 UTC · 17 points · 3 comments · 1 min read

Upcoming AMA with Luke Muehlhauser on consciousness and moral patienthood (June 28, starting 9am Pacific)

Julia_Wise🔸 · 21 Jun 2017 21:56 UTC · 13 points · 14 comments · 1 min read

Desire theories of welfare and nonhuman animals

MichaelStJules · 16 Jul 2022 18:52 UTC · 20 points · 5 comments · 4 min read

Solution to the two envelopes problem for moral weights

MichaelStJules · 19 Feb 2024 0:15 UTC · 66 points · 26 comments · 27 min read

Which animals realize which types of subjective welfare?

MichaelStJules · 27 Feb 2024 19:31 UTC · 22 points · 0 comments · 18 min read

Types of subjective welfare

MichaelStJules · 2 Feb 2024 9:56 UTC · 42 points · 0 comments · 18 min read

Gradations of moral weight

MichaelStJules · 29 Feb 2024 23:08 UTC · 13 points · 0 comments · 10 min read

LLMs cannot usefully be moral patients

LGS · 2 Jul 2024 4:43 UTC · 35 points · 24 comments · 4 min read

The Value of Consciousness as a Pivotal Question

Derek Shiller · 3 Jul 2024 18:50 UTC · 71 points · 21 comments · 8 min read

Making AI Welfare an EA priority requires justifications that have not been given

JWS 🔸 · 7 Jul 2024 21:38 UTC · 59 points · 21 comments · 6 min read

Rethink Priorities’ Digital Consciousness Project Announcement

Bob Fischer · 5 Jul 2024 11:15 UTC · 114 points · 4 comments · 2 min read

Sequence overview: Welfare and moral weights

MichaelStJules · 1 Aug 2024 16:51 UTC · 34 points · 0 comments · 1 min read

The scale of animal agriculture

MichaelStJules · 16 May 2024 4:01 UTC · 49 points · 4 comments · 3 min read

The Welfare of Digital Minds: A Research Agenda

Derek Shiller · 11 Nov 2024 12:58 UTC · 53 points · 1 comment · 31 min read

LLMs are weirder than you think

Derek Shiller · 20 Nov 2024 13:39 UTC · 61 points · 3 comments · 22 min read

Notes from “Cognition, welfare, and the problem of interspecies comparisons”

Barry Grimes · 16 Nov 2021 9:17 UTC · 30 points · 0 comments · 11 min read

Empatia Radicale [Radical Empathy]

EA Italy · 31 Dec 2022 4:08 UTC · 1 point · 0 comments · 6 min read

Implementational Considerations for Digital Consciousness

Derek Shiller · 30 Jul 2023 22:15 UTC · 35 points · 4 comments · 3 min read

[Question] How worried should I be about a childless Disneyland?

Will Bradshaw · 28 Oct 2019 15:32 UTC · 31 points · 8 comments · 1 min read

Cerebral organoids

Forumite · 20 Jul 2022 20:12 UTC · 17 points · 0 comments · 1 min read

Is understanding the moral status of digital minds a pressing world problem?

Cody_Fenwick · 30 Sep 2024 8:50 UTC · 42 points · 0 comments · 34 min read
(80000hours.org)

We should prevent the creation of artificial sentience

RichardP · 29 Oct 2024 12:22 UTC · 106 points · 11 comments · 15 min read

Key takeaways from Famine, Affluence, and Morality

Emma Richter🔸 · 14 Sep 2022 21:02 UTC · 25 points · 3 comments · 5 min read

New Report on Consciousness and Moral Patienthood

lukeprog · 6 Jun 2017 13:21 UTC · 21 points · 1 comment · 2 min read
(www.openphilanthropy.org)

Detecting Morally Significant Pain in Nonhumans: Some Philosophical Difficulties

Jason Schukraft · 23 Dec 2018 17:49 UTC · 73 points · 8 comments · 22 min read

[Question] AI consciousness & moral status: What do the experts think?

sableye · 6 Jul 2024 15:27 UTC · 0 points · 3 comments · 1 min read

Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

Omnizoid · 20 Nov 2024 16:11 UTC · 62 points · 23 comments · 33 min read

Arthropod (non) sentience

Arturo Macias · 25 Nov 2024 15:35 UTC · −7 points · 4 comments · 4 min read

Why animal charities are much more effective than human ones

utilitarian01 · 8 Apr 2019 17:48 UTC · 14 points · 15 comments · 2 min read

Octopuses (Probably) Don’t Have Nine Minds

Bob Fischer · 12 Dec 2022 11:59 UTC · 94 points · 19 comments · 10 min read
(docs.google.com)

Potential Future People

TeddyW · 8 Jan 2023 17:20 UTC · 11 points · 6 comments · 1 min read

Marginal existence and its relevance to pro-natalism, longtermism, and the repugnant conclusion

dawsoneliasen · 23 Jan 2023 3:24 UTC · 4 points · 2 comments · 8 min read

AI safety and consciousness research: A brainstorm

Daniel_Friedrich · 15 Mar 2023 14:33 UTC · 11 points · 1 comment · 9 min read

革新的な思いやり [Radical Empathy]

EA Japan · 26 Jul 2023 13:35 UTC · 1 point · 0 comments · 1 min read