But you are introducing a regress here. Already, EAs care about animal welfare and consider AI important.
But I think it’s much more like this: some EAs care about animal welfare, some EAs care about AI, and fewer care about both. More importantly, of the relatively few people who care about both AI and animals, quite few care about them in a connected way.
Thus, I doubt that any AI safety agreements would omit non-human animals.
I actually doubt any AI safety agreements would explicitly include non-human animals. If you look at the public AI principles/statements/agreements from NGOs, universities, governments, and corporations, only the University of Montreal’s declaration said “all sentient beings”. From my experience reading and discussing with the EA longtermist/AI community, I think AI safety principles published by EAs might be more likely than the world average to include all sentient beings. But I still think it’s more unlikely than likely that EA AI safety principles will explicitly include animals.
Further, AI will probably consider non-human sentience, if it is sentient.
I would like to hear your argument for why you think so. It seems to me that humans didn’t come to care about other sentient beings simply by being sentient ourselves.
Also, what about the scenario where there will be no sentient AI?
Also, you are assuming an erroneous dynamic. Animal welfare is important for AI safety not only because it enables AI to have a diametrically different impact, but also because it provides a connection to the agriculture industry, a strategic sector in all nations.
I actually think that you might be assuming an erroneous dynamic. You might be connecting AI to the agricultural sector because you think AI might affect farmed animals there, which I agree will be the case (my main research focus is AI’s impacts on farmed animals). But AI won’t affect only the lives of farmed animals; it will affect pretty much all animals: farmed animals, wild animals, animals used in research, companion animals, and human animals. For me, the core reason animal welfare is important for AI is similar to why human welfare is important for AI: all sentient beings matter.
some EAs care about animal welfare, and some EAs care about AI, and less care about both things
It is not that people do not care, in the sense of not considering the issue; they just do not prioritize it in their actions because they do not think that is where they make the highest impact (e.g. due to specialization).
quite few of them care about them in a connected way
Sure, that makes sense. Discourse on this topic has not taken place extensively so far, so ways of connecting the two have not been much developed.
doubt any AI safety agreements would explicitly include non-human animals
Yes, perhaps it is best when it is implied that animals are included. Then animals are included in statements beyond the University of Montreal’s, such as the Asilomar Principles: “[b]eneficial intelligence” and how “legal systems [can] be more fair and efficient” should be researched by teams that “actively cooperate” on the objective of “Shared Benefit.” Perhaps by ‘people’ they meant persons, that is, any entities that currently have or should have legal personhood status, such as non-human animals.
humans didn’t simply care about other sentient beings only by being sentient ourselves.
But AI, even now, is smarter than that. It can read anything, so it can figure out that ‘good’ means ‘all sentience benefits.’ I have not yet asked GPT-3, but just by asking Google and skimming the results, it is clear that various forms of sentience should be considered. Perhaps it is a matter of making AI realize this by asking it a few questions.
scenario where there will be no sentient AI?
The result will be the same, given that key questions are entertained early on. Just fewer utility monsters will be included in the equations.
erroneous dynamic
OK, no one is assuming an erroneous dynamic. Farmed animal welfare is a subset of animal welfare, which is a subset of sentience welfare. So, just to be safe (and fair), we should make AI consider the welfare of all sentience.