some EAs care about animal welfare, some EAs care about AI, and fewer care about both
It is not that people do not care in the sense of not considering the issue; they just do not prioritize it in their actions, since they do not think that is where they have the highest impact (e.g. due to specialization).
quite few of them care about both in a connected way
Sure, that makes sense. Discourse on this topic has not taken place extensively so far, so ways of connecting the two have not been much developed.
doubt any AI safety agreements would explicitly include non-human animals
Yes, perhaps it is best if it is implied that animals are included. Animals are then implicitly included in statements other than the Montreal University declaration, such as the Asilomar AI Principles: "[b]eneficial intelligence" and how "legal systems [can] be more fair and efficient" should be researched by teams that "actively cooperate" toward the objective of "Shared Benefit." Perhaps by 'people' they meant persons, that is, any entities that currently have, or should have, legal personhood status, which could include non-human animals.
humans don’t care about other sentient beings simply by virtue of being sentient ourselves.
But AI, even now, is smarter. It can read anything, so it can figure out that ‘good’ means ‘all sentience benefits.’ I have not yet asked GPT-3, but just asking Google and skimming the results, it is clear that various forms of sentience should be considered. Perhaps it is a matter of making AI realize this by asking it a few questions.
scenario where there will be no sentient AI?
The result will be the same, given relevant early engagement with the key questions; there will just be fewer utility monsters included in the equations.
erroneous dynamic
OK, no one is assuming an erroneous dynamic. Farm animal welfare is a subset of animal welfare, which is a subset of sentience welfare. So, just to be safe (and fair), we should make AI consider the welfare of all sentience.
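The nesting claimed here can be sketched as a toy set-containment check; the member names are hypothetical placeholders for illustration, not a real taxonomy:

```python
# Toy illustration of the nesting claim: farm animal welfare is a
# subset of animal welfare, which is a subset of sentience welfare.
# All member names are made-up examples, not an actual classification.
farm_animal_welfare = {"chickens", "pigs", "cows"}
animal_welfare = farm_animal_welfare | {"wild fish", "urban pigeons"}
sentience_welfare = animal_welfare | {"potential digital minds"}

# Each level strictly contains the previous one, so a principle that
# covers all sentience automatically covers the inner categories too.
assert farm_animal_welfare < animal_welfare < sentience_welfare
```

The point of the sketch is only that optimizing for the outermost set cannot neglect the inner ones; anything true of all sentience welfare is true of farm animal welfare in particular.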