I have so many axes of disagreement that it’s hard to figure out which one is most relevant. I guess let’s go one by one.
Me: “What do you mean when you say AIs might be unaligned with human values?”
I would say that pretty much every agent other than me (and probably me at different times and in different moods) is “misaligned” with me, in the sense that I would not like a world where they get to dictate everything that happens without consulting me in any way.
This is a quibble, though, because in fact I think that many people, if put in such a position, would ask others what they want and try to make it happen.
Consider a random retirement home. Compared to the rest of the world, it has basically no power. If the rest of humanity decided to destroy or loot the retirement home, there would be virtually no serious opposition.
This hypothetical assumes too much, because people outside the retirement home care about the lovely people inside it, and they represent their interests. The question is: will some future AIs with relevance and power care for humans, as humans become obsolete?
I think this is relevant, because in the current world there is a lot of variety. There are people who care about retirement homes and people who don’t. The people who care about retirement homes work hard to make sure retirement homes are well cared for.
But we could imagine a future world where the AI that pulls ahead of the pack is indifferent to humans, while the AI that cares about humans falls behind. Perhaps this is because caring about humans puts you at a disadvantage (if you are not willing to squish the humans in your territory, your space to build servers gets reduced, or something like that; I think this is unlikely but possible), and/or because there is a winner-take-all mechanism and the first AI systems that get there coincidentally don’t care about humans (unlikely but possible). Then we would be without representation, and in possibly quite a sucky situation.
I’m asking why it matters morally. Why should I care whether it’s a human or an AI that takes my place after I die?
Stop that train: I do not want to be replaced by either a human or an AI. I want to be in the future and have relevance, or at least be empowered through agents that represent my interests.
I also want my fellow humans to be there, if they want to, and to have their own interests represented.
Humans seem to get their moral values from cultural learning and emulation, which seems broadly similar to the way that AIs will get their moral values.
I don’t think AIs learn in a way similar to humans, and future AIs might learn in an even more dissimilar way. The argument I would find more persuasive points out that humans learn in different ways from one another, from very different data and situations, and yet end up with similar values that include caring for one another. That I find suggestive, though it’s hard to be confident.