Callum—interesting question. My sense is that there's a small to moderate amount of work by EAs and EA-adjacent folks on AI sentience, AI suffering risks (S-risks), digital sentience, AI rights, etc., although it doesn't tend to get much attention or discussion on the EA Forum.
There might be a couple of reasons for this (speculatively):
First, EAs tend to be very focused on AI X-risk, and any attention to AI sentience raises uncomfortable trade-offs regarding the development and regulation of AGI. For example, population ethics applied to digital sentience might lead us to become quite 'pronatalist' about maximizing the number of sentient AIs over the long term. This could lead people into the kind of reckless, radical 'e/acc' accelerationism that says it's fine for humanity to go extinct as long as we're replaced by sentient AIs.
Second, sentience is a topic studied mostly by psychologists, neuroscientists, and philosophers of mind—fields that tend to be under-represented in EA compared to economics, computer science, and moral philosophy. Much of the EA interest in sentience is centered in the animal welfare area, where people really struggle with theoretical and empirical research about which animals are sentient and which are not (e.g. whether oysters, shrimp, or crickets are sentient). If we can't even determine very effectively whether oysters are sentient (given that we have a relatively good understanding of how nervous systems evolve, and what adaptive functions they implement), it seems even more challenging to figure out which AI systems are sentient.