
Moral patienthood

Last edit: 16 May 2021 13:05 UTC by EA Wiki assistant

A being is a moral patient if they are included in a theory of the good (also known as an axiology or theory of value). While it is normally agreed that typical humans are moral patients in this sense, there is debate about the patienthood of human embryos, non-human animals, future people, and non-biological sentients.

Moral patienthood should not be confused with moral agency (see Wikipedia 2004). For example, we might think that a baby lacks moral agency—it lacks the ability to judge right from wrong, and to act on the basis of reasons—but that it is still a moral patient, in the sense that those with moral agency should care about their well-being.

If we assume a welfarist theory of the good, the question of patienthood can be divided into two sub-questions: Which entities can have well-being? and Whose well-being is morally relevant? Each question can in turn be broken down into the question of which characteristics or capacities are relevant and the question of which beings have those capacities.

First, which entities can have well-being? A majority of scientists now agree that many non-human animals, including mammals, birds, and fish, are conscious and capable of feeling pain (Low et al. 2012), but this claim is more contentious in philosophy (Allen & Trestman 2016). This question is vital for assessing the value of interventions aimed at improving farm and/or wild animal welfare. A smaller but growing field of study considers whether artificial intelligences might be conscious in morally relevant ways (Wikipedia 2003).

Second, whose well-being do we care about? Some have argued that future beings have less value, even though they will be just as conscious as beings alive today. This reduction could take the form of a discount rate on future value, so that experiences occurring one year from now are worth, say, 3% less than the same experiences occurring at present. Alternatively, it could take the form of valuing individuals who do not yet exist less than currently existing beings, for reasons related to the non-identity problem (Roberts 2019; see also population ethics). It is contentious whether either approach is correct. Moreover, in light of the astronomical number of individuals who could potentially exist in the future, assigning even some value to future people implies that virtually all value—at least for welfarist theories—will reside in the far future (Bostrom 2003; see also longtermism).
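As an illustration of the discount-rate approach described above, the following sketch (not from the article; the function name and figures are purely illustrative) shows how a constant annual discount rate compounds, so that even a modest rate sharply reduces the present value of far-future well-being:

```python
# Illustrative sketch of a pure time discount rate on future well-being.
# A constant annual rate compounds geometrically over the number of years.

def discounted_value(value: float, annual_rate: float, years: float) -> float:
    """Present value of `value` units of well-being occurring `years`
    from now, under a constant annual discount rate."""
    return value / (1 + annual_rate) ** years

# At a 3% annual rate, 100 units of well-being one year from now are
# worth about 97.1 today, but the same experience a century from now
# is worth only about 5.2.
print(round(discounted_value(100, 0.03, 1), 1))    # 97.1
print(round(discounted_value(100, 0.03, 100), 1))  # 5.2
```

This compounding is why critics note that any positive pure discount rate, however small, makes the far future count for almost nothing, whereas a zero rate combined with astronomical future populations makes it count for almost everything.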

Bibliography

Animal Ethics (2017) The relevance of sentience, Animal Ethics, September.

Allen, Colin & Michael Trestman (2016) Animal consciousness, in Edward Zalta (ed.), Stanford Encyclopedia of Philosophy.
Discusses the question of animal consciousness from a philosophical perspective.

Beckstead, Nick (2013) On the overwhelming importance of shaping the far future, doctoral dissertation, Rutgers University Department of Philosophy.
Argues for the overwhelming importance of influencing the far future.

Bostrom, Nick (2003) Astronomical waste: the opportunity cost of delayed technological development, Utilitas 15(3), pp. 308–314.

Bostrom, Nick & Eliezer Yudkowsky (2014) The ethics of artificial intelligence, in Keith Frankish & William M. Ramsey (eds.) The Cambridge Handbook of Artificial Intelligence, Cambridge: Cambridge University Press, pp. 316–334.

Kagan, Shelly (2019) How to Count Animals, More or Less, Oxford: Oxford University Press.

Low, Philip et al. (2012) The Cambridge declaration on consciousness, Francis Crick Memorial Conference, July 7.
A declaration by a group of leading scientists that many non-human animals are capable of consciousness.

MacAskill, William & Darius Meissner (2020) The expanding moral circle, in Introduction to Utilitarianism.

Muehlhauser, Luke (2017) 2017 report on consciousness and moral patienthood, Open Philanthropy, June.

Roberts, Melinda A. (2019) The nonidentity problem, in Edward Zalta (ed.), Stanford Encyclopedia of Philosophy.

Tomasik, Brian (2014) Do artificial reinforcement-learning agents matter morally?, arXiv:1410.8233.

Wikipedia (2003) Artificial consciousness, Wikipedia, March 13 (updated 24 April 2021).

Wikipedia (2004) Moral agency, Wikipedia, September 25 (updated 14 November 2020).

Related entries

axiology | consciousness | moral circle expansion | moral weight | speciesism | valence
