Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2021


This report was edited by Michael Dello-Iacovo. The 2021 survey was designed by Janet Pauketat, Jamie Harris, Ali Ladak, and Jacy Reese Anthis. The data was collected and analyzed by Janet Pauketat, Ali Ladak, and Jacy Reese Anthis. Many thanks to David Moss, Zan (Alexander) Saeri, and Daniel Shank for their feedback on our methodology.

We published the 2021 AIMS data on Mendeley Data with some initial results. We announced the publication of the data on our blog. To cite the 2021 AIMS data in your own research, please use: Pauketat, Janet; Ladak, Ali; Harris, Jamie; Anthis, Jacy (2022), “Artificial Intelligence, Morality, and Sentience (AIMS) 2021”, Mendeley Data, V1, doi: 10.17632/x5689yhv2n.1

To reference our results, please cite this report: Pauketat, Janet V. T.; Ladak, Ali; Anthis, Jacy Reese (2022), “Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2021”, PsyArXiv. https://doi.org/10.31234/osf.io/dzgsb

Summary

The Artificial Intelligence, Morality, and Sentience (AIMS) survey measures the moral and social perception of different types of artificial intelligences (AIs), particularly sentient AIs. The data provide baseline information about U.S. public opinion, and we intend to run the AIMS survey periodically to track changes over time.[1]

In this first wave, we conducted a preregistered nationally representative survey of 1,232 U.S. Americans in November and December 2021. We also included questions about sentient AIs’ situation in an imagined future world, the moral consideration of other nonhuman entities, and psychological tendencies relevant to AI-human relations. We found that 74.91% of people agreed[2] that sentient AIs deserve to be treated with respect and 48.25% of people agreed that sentient AIs deserve to be included in the moral circle. Additionally,

  • Most people agreed with being cautious about AI development, supporting bans on the development of sentient AI (57.68%), AI-enhanced humans (63.38%), and robot-human hybrids (64.60%).

  • Most people agreed that sentient AIs should be protected from deliberate harm, such as non-consensual physical damage (67.77%) and retaliatory punishment (75.80%), and from people who would intentionally inflict mental or physical pain on them (81.56%).

  • Most people saw currently existing AIs as having more rational (M = 51.36) and analytic (M = 62.74) capacities than emotional (M = 34.27) and feeling (M = 33.65) capacities.

  • Degree of moral concern and perceived social connectedness to humans varied by type of AI. For example, exact digital copies of human brains (M = 3.33) received more moral concern than AI video game characters (M = 2.46), and AI personal assistants (M = 3.74) were perceived as more connected to humans than exact digital copies of animals (M = 3.08).

  • Although most people agreed with practical policies to support sentient AIs, such as developing welfare standards to protect their well-being (58.98%), agreement was weaker for granting legal rights to sentient AIs (37.16%) and for the view that the welfare of AIs is one of the most important social issues in the world today (30.31%).

  • Most people agreed that AIs should be subservient to humans (80.06%) and perceived that AIs might be harmful to people in the U.S. (64.47%) and future generations of people (69.22%).

  • People tended to expect that AIs in the future would be exploited for their labor (M = 3.26), that they would be used in scientific research (M = 3.40), and that it would be important to reduce the overall percentage of unhappy sentient AIs (M = 3.08).

  • People who showed more moral consideration of nonhuman animals and the environment tended to show more moral consideration of sentient AIs (see Correlations for details).

  • A variety of demographic characteristics and psychological tendencies predicted moral consideration of AIs, especially having a vegan diet and more exposure to AI narratives. Age and gender were also consistent predictors, and region, race/ethnicity, religiosity, education, income, and political orientation predicted some outcomes. Psychological tendencies that predicted more moral consideration included holding stronger techno-animist beliefs, having a greater tendency to anthropomorphize technology, and showing less substratist prejudice (see Predictive Analyses for details; a minimal analysis sketch follows this list).
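For readers who want to explore analyses like the correlations and predictive analyses referenced above using the public Mendeley dataset, the sketch below shows the general shape of such a correlation and regression in Python. It is a minimal illustration, not our actual analysis code: the file name and every column name are hypothetical placeholders, so consult the dataset's codebook for the real variable names.

```python
# Minimal sketch, not the authors' analysis code. Assumes the AIMS 2021
# dataset has been downloaded from Mendeley Data and saved locally; the
# file name and all column names below are hypothetical placeholders.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("aims_2021.csv")  # hypothetical file name

# Correlation: moral consideration of animals vs. of sentient AIs
sub = df[["mc_animals", "mc_sentient_ais"]].dropna()
r, p = stats.pearsonr(sub["mc_animals"], sub["mc_sentient_ais"])
print(f"Pearson r = {r:.2f} (p = {p:.3g})")

# Predictive analysis: regress moral consideration of sentient AIs on
# demographics and psychological tendencies (all names hypothetical)
model = smf.ols(
    "mc_sentient_ais ~ age + C(gender) + C(diet) + ai_narrative_exposure"
    " + techno_animism + anthropomorphism + substratism",
    data=df,
).fit()
print(model.summary())
```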
