Off the top of my head:
- We develop strong AI.
- There are strong signals that we would/wouldn’t be able to encode good values in an AI.
- Powerful people’s values shift more toward/away from caring about non-human animals (including wild animals) or sentient simulations of non-human minds.
- I hear a good argument that I hadn’t already heard or thought of. (I consider this pretty likely, given how little total thought has gone into these questions.)