My impression of top AGI researchers is that most take AI sentience pretty seriously as a possibility, and it seems hard for someone to think this without also believing animals can be sentient.
I am not saying this is common, but it is alarming that Eliezer Yudkowsky, a pretty prominent figure in the space, thinks that AI sentience is possible but nonhuman animals are not sentient.
Agreed, it’s a pretty bizarre take. I’d be curious whether his views have changed since he wrote that FB post
Also, Holden Karnofsky (not so confidently) thinks that humans matter astronomically more than nonhuman animals, while at the same time thinking that digital people are possible.