Moral patienthood is the condition of deserving moral consideration. A moral patient is an entity that possesses moral patienthood.
While it is generally agreed that typical humans are moral patients, there is debate about the patienthood of many other types of beings, including human embryos, non-human animals, future people, and digital sentients.
Moral patienthood should not be confused with moral agency.[1] For example, we might think that a baby lacks moral agency (it lacks the ability to judge right from wrong and to act on the basis of reasons) but that it is still a moral patient, in the sense that those with moral agency should care about its well-being.
If we assume a welfarist theory of the good, the question of patienthood can be divided into two sub-questions: which entities can have well-being, and whose well-being is morally relevant? Each sub-question can in turn be broken down further: which characteristics or capacities are relevant, and which beings have those capacities?
First, which entities can have well-being? A majority of scientists now agree that many non-human animals, including mammals, birds, and fish, are conscious and capable of feeling pain,[2] though this claim remains more contentious in philosophy.[3] The question is vital for assessing the value of interventions aimed at improving the welfare of farmed and wild animals. A smaller but growing field of study considers whether artificial intelligences might be conscious in morally relevant ways.[4]
Second, whose well-being do we care about? Some have argued that future beings matter less than present beings, even though they will be just as conscious as beings alive today. This reduction could be expressed as a discount rate on future value, so that experiences occurring one year from now are worth, say, 3% less than equivalent experiences occurring at present. Alternatively, it could be expressed by assigning less value to individuals who do not yet exist than to currently existing beings, for reasons related to the non-identity problem[5] (see also population ethics). Whether either approach is correct remains contentious. Moreover, in light of the astronomical number of individuals who could potentially exist in the future, assigning some value to future people implies that virtually all value, at least on welfarist theories, will reside in the far future[6] (see also longtermism).
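To make the discount-rate idea concrete, here is a minimal sketch (the constant exponential form and the symbols V_t, r, and t are illustrative assumptions, not drawn from the cited sources): under a constant annual discount rate, an experience occurring t years from now receives a present value equal to its undiscounted value divided by (1 + r)^t.

```latex
% Minimal sketch of constant exponential discounting (illustrative assumption,
% not a formula from the cited sources).
% V_t : undiscounted value of an experience occurring t years from now
% r   : assumed annual discount rate (e.g. r = 0.03 for the 3% example above)
\[
  \mathrm{PV}(V_t) \;=\; \frac{V_t}{(1 + r)^{t}}
\]
% For small r, 1/(1+r) \approx 1 - r, so with r = 0.03 an experience one year
% away is worth roughly 3% less than an equivalent present experience.
```

Under this scheme the weight given to future welfare shrinks geometrically with t, which is why even a small positive discount rate sharply limits the contribution of far-future individuals, whereas a rate of zero supports the longtermist conclusion mentioned above.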
Further reading
Animal Ethics (2017) The relevance of sentience, Animal Ethics, September.
Bostrom, Nick & Eliezer Yudkowsky (2014) The ethics of artificial intelligence, in Keith Frankish & William M. Ramsey (eds.) The Cambridge Handbook of Artificial Intelligence, Cambridge: Cambridge University Press, pp. 316–334.
Kagan, Shelly (2019) How to Count Animals, More or Less, Oxford: Oxford University Press.
MacAskill, William & Darius Meissner (2020) The expanding moral circle, in Introduction to Utilitarianism.
Muehlhauser, Luke (2017) 2017 report on consciousness and moral patienthood, Open Philanthropy, June.
Tomasik, Brian (2014) Do artificial reinforcement-learning agents matter morally?, arXiv:1410.8233.
Related entries
axiology | consciousness research | moral circle expansion | moral weight | speciesism | valence
1. Wikipedia (2004) Distinction between moral agency and moral patienthood, in ‘Moral agency’, Wikipedia, September 25 (updated November 14, 2020).
2. Low, Philip et al. (2012) The Cambridge declaration on consciousness, Francis Crick Memorial Conference, July 7.
3. Allen, Colin & Michael Trestman (2016) Animal consciousness, in Edward Zalta (ed.), Stanford Encyclopedia of Philosophy.
4. Wikipedia (2003) Artificial consciousness, Wikipedia, March 13 (updated April 24, 2021).
5. Roberts, M. A. (2019) The nonidentity problem, in Edward Zalta (ed.), Stanford Encyclopedia of Philosophy.
6. Bostrom, Nick (2003) Astronomical waste: the opportunity cost of delayed technological development, Utilitas 15(3), pp. 308–314.