I usually see MIRI’s goal in its technical agenda stated as being “to ensure that the development of smarter-than-human intelligence has a positive impact on humanity.” Is there any chance of expanding this to include all sentient beings? If not, why not? Given that nonhuman animals vastly outnumber humans, I would think the most pressing question for AI is its effect on nonhuman animals rather than on humans.
Yep :-)
The official mission statement is just “has a positive impact.” I’ll encourage people to also use phrasing that’s more inclusive of other sentients in future papers/communications.
Unless there are strategic concerns I don’t fully understand, I second this. I cringe a little every time I see goal descriptions like this one.
Personally, I would argue that the issue of largest moral concern is ensuring that new beings who can have good experiences and a meaningful existence are brought into existence, since the quality and quantity of consciousness experienced by such not-yet-existent beings could dwarf what is experienced by the beings that currently exist on our small planet.
I understand that MIRI doesn’t want to take a stance on every controversial ethical issue, but I also wonder whether MIRI has considered replacing “a positive impact on humanity” with “a positive impact on humanity and …”, e.g. “a positive impact on humanity and other sentient beings” or “a positive impact on humanity and the universe”.
I am not as worried as you about the effect of AI on nonhuman animals, but I agree that it might be nice if MIRI were slightly more explicitly anti-speciesist in its materials. I think they have a pretty good excuse for not being clearer about this, though.
FWIW, MIRI people seem pretty un-speciesist to me, in the strict sense of not being biased based on species. (Eliezer is, AFAIK, alone among MIRI employees in his confidence that chickens etc. are morally irrelevant.) I have had a few conversations with Nate about nonhuman animals, and his opinions struck me as thoroughly reasonable.
(Nate can probably respond to this too, but I think it’s possible that I’m a less biased source on MIRI’s attitude to non-human animals.)
P[humans and animals survive a long time]: large
P[humans survive, with animal life, with super AI]: small
P[humans survive, without animal life, with super AI]: much smaller
P[animals survive without humans, but with super AI]: nearly none?
It seems to me that by focusing on protecting humanity and its society, you are pretty much protecting animals by implication.
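Roughly, the implicit argument is just the law of total probability, reading the last estimate above as P[animals survive | humans don’t] ≈ 0 (this is a sketch of the reasoning, not anyone’s actual numbers):

$$\begin{aligned}
P[\text{animals survive}] &= P[\text{animals survive} \mid \text{humans survive}]\,P[\text{humans survive}] \\
&\quad + P[\text{animals survive} \mid \text{humans don't}]\,P[\text{humans don't}] \\
&\approx P[\text{animals survive} \mid \text{humans survive}]\,P[\text{humans survive}].
\end{aligned}$$

So whatever raises P[humans survive] raises P[animals survive] nearly in proportion.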
Promoting animal liberation costs a lot of weirdness points. MIRI’s efforts are already hampered by weirdness points. So using MIRI as a platform to promote animal liberation is probably not a wise move?