While this is an important question to consider, it is by no means clear that we could get any short-term consensus about how moral alignment should be implemented. In practical terms, if an AI (AGI) intelligence is longer-lived and/or more able to thrive and bring benefits to others of its kind, wouldn’t it be moral to value its existence over that of a human being? Yet I think you would struggle to get AI scientists to embed that value choice into their code. Similarly, looking ‘down’ the scale, in decisions where the lives or wellbeing of humans had to be balanced against animals, I am not sure there would be much likelihood of broad agreement on the relative value to attach to each in such cases.
I would encourage further research and advocacy on this point but at best this will be a long, long process. And you might not be happy with the eventual outcome!
At the moment there are no established guidelines in this area that I am aware of in the existing non-AI-related space (though I have not looked hard...), but if AI-related research and discussion did establish such guidelines, they might propagate out into the rest of the policy world and set a precedent.
I think this is exactly why we need research building a vision of how a sentient-centric ASI, one that works with humanity to gradually improve lives for everyone, would behave. As humanity gets stronger and more able to control both the external environment and our own bodies and minds, we may see fewer conflicts of interest between animals and humans, and this could create a monumental chance to take a stewardship role in relation to non-humans.
I hope you are right, but you should be aware that the opposite may also be true. Depending on the weights we give AI in valuing human and non-human thriving, AI may discover new ways to make humans happier at the expense of non-humans. There are people and organizations who would assign a moral weight of zero to the suffering of some or even all non-humans, and if those people win the argument then you might end up with an AI which is less to your taste than one that just emerges organically with basic guardrails.
For example, leaving aside second order effects on wider ecology, if you asked me how much intense suffering I would inflict on shrimp to save a human life, I would personally choose an almost unlimited amount.
I agree there is the opposite danger as well, and perhaps we have yet to discover those dangers and the new conflicts of interest between humans and animals… I assume that with ASI, the risk of possible future digital minds being in conflict of interest with humans is bigger than it is for animals.