Okay. Do you see any proxies (besides other people’s views) that, if they changed in our lifetime, might shift your estimates one way or the other?
Off the top of my head:
- We develop strong AI.
- There are strong signals that we would/wouldn't be able to encode good values in an AI.
- Powerful people's values shift more toward/away from caring about non-human animals (including wild animals) or sentient simulations of non-human minds.
- I hear a good argument that I hadn't already heard or thought of. (I consider this pretty likely, given how little total thought has gone into these questions.)