This is a question about some anthropocentrism that seems latent in the AI safety research that I’ve seen so far:
Why do AI alignment researchers seem to focus only on aligning with human values, preferences, and goals, without considering alignment with the values, preferences, and goals of non-human animals?
I see a disconnect between EA work on AI alignment and EA work on animal welfare, and it’s puzzling to me, given that any transformative AI will transform not just 8 billion human lives, but trillions of other sentient lives on Earth. Are any AI researchers trying to figure out how AI can align with even simple cases like the interests of the few species of pets and livestock?
If we view AI development not just as a matter of human technology, but as a ‘major evolutionary transition’ for life on our planet more generally, it would seem prudent to consider broader issues of alignment with the other 5,400 species of mammals, the other 45,000 species of vertebrates, etc...
IMO: Most of the difficulty in technical alignment is figuring out how to robustly align to any particular values whatsoever. Mine, yours, humanity’s, all sentient life on Earth, etc. All roughly equally difficult. “Human values” is probably the catchphrase mostly for instrumental reasons—we are talking to other humans, after all, and in particular to liberal egalitarian humans who are concerned about some people being left out and especially concerned about an individual or small group hoarding power. Insofar as lots of humans were super concerned that animals would be left out too, we’d be saying humans+animals. The hard part isn’t deciding who to align to; it’s figuring out how to align to anything at all.
I’ve heard this argument several times, that once we figure out how to align AI with the values of any sentient being, the rest of AI alignment with all the other billions/trillions of different sentient beings will be trivially easy.
I’m not at all convinced that this is true, or even superficially plausible, given the diversity, complexity, and heterogeneity of values, and given that sentient beings are severely & ubiquitously unaligned with each other (see: evolutionary game theory, economic competition, ideological conflict).
What is the origin of this faith among the AI alignment community that ‘alignment in general is hard, but once we solve generic alignment, alignment with billions/trillions of specific beings and specific values will be easy’?
I’m truly puzzled on this point, and can’t figure out how it became such a common view in AI safety.
One perhaps obvious point: if you make some rationality assumptions, there is a single unique solution to how individual preferences should be aggregated. So if you are able to align an AI with a single individual, you can iterate this alignment across all the individuals and use Harsanyi’s theorem to aggregate their preferences.
This (assuming rationality) is the uniquely best method to aggregate preferences.
There are criticisms to be made of this solution, but it at least seems reasonable, and I don’t think there’s an analogous simple “reasonably good” solution to aligning AI with an individual.
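For reference, here is a minimal statement of the aggregation result being invoked (my own paraphrase of Harsanyi’s theorem, with notation chosen purely for illustration, not taken from the comment above):

```latex
% Harsanyi's aggregation theorem (informal paraphrase; notation mine).
% Assume: each individual i = 1,...,n has a von Neumann--Morgenstern utility
% function u_i over lotteries; the social preference also satisfies the vNM
% axioms; and Pareto indifference holds (if every individual is indifferent
% between two lotteries, so is society). Then the social utility function
% must be an affine combination of the individual utilities:
\[
  U_{\mathrm{social}}(x) \;=\; c + \sum_{i=1}^{n} w_i \, u_i(x).
\]
% Note that the theorem pins down the *form* of the aggregation, not the
% weights w_i; choosing the weights is one place where the disagreements
% discussed below re-enter.
```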
Ben—thanks for the reminder about Harsanyi.
Trouble is, (1) the rationality assumption is demonstrably false, (2) there’s no reason for human groups to agree to aggregate their preferences in this way—any more than they’d be willing to dissolve their nation-states and hand unlimited power over to a United Nations that promises to use Harsanyi’s theorem fairly and incorruptibly.
Yes, we could try to align AI with some kind of lowest-common-denominator aggregated human (or mammal, or vertebrate) preferences. But if most humans would not be happy with that strategy, it’s a non-starter for solving alignment.
I agree that a lot of people believe that alignment to any agent is the hard part, and that aligning to a particular human is relatively easy, or a mere “AI capabilities” problem. Why? I think it’s a sincere belief, but ultimately most people think it because it’s an agreed-upon assumption within the AIS community, held for a mixture of intrinsic and instrumental reasons. The intrinsic reasons are that a lot of the fundamental conceptual problems in AI safety seem not to care which human you’re aligning the AI system to, e.g. the fact that human values are complex, that wireheading may arise, and that it’s hard to describe how the AI system should want to change its values over time.
The instrumental reason is that it’s a central premise of the field, similar to the “DNA -> RNA -> protein -> cellular functions” perspective in molecular biology. The vision for AIS as a field is that we try not to indulge futurist and political topics, and we try not to argue with each other about things like whose values to align the AI to.
You can see some of this instrumentalist perspective in Eliezer’s Coherent Extrapolated Volition paper:
Anyone who wants to argue about whether extrapolated volition will favor Democrats or Republicans should recall that currently the Earth is scheduled to vanish in a puff of tiny smiley faces, with an unknown deadline and Moore’s Law ticking. As an experiment, I am instituting the following policy on the SL4 mailing list: None may argue on the SL4 mailing list about the output of CEV, or what kind of world it will create, unless they donate to the Singularity Institute:
• $10 to argue for 48 hours.
• $50 to argue for one month.
• $200 to argue for one year.
• $1000 to get a free pass until the Singularity.
Past donations count toward this total. It’s okay to have fun, and speculate, so long as you’re not doing it at the expense of actually helping.

Presumably the prices have gone up with the increased EA wealth, and down again this year...
Ryan—thanks for this helpful post about this ‘central dogma’ in AI safety.
It sounds like much of this view may have been shaped by Yudkowsky’s initial writings about alignment and coherent extrapolated volition? And maybe reflects a LessWrong ethos that cosmic-scale considerations mean we should ignore current political, religious, and ideological conflicts of values and interests among humans?
My main concern here is that if this central dogma about AI alignment (that ‘alignment to any agent is the hard part, and aligning to a particular human is relatively easy, or a mere “AI capabilities” problem’, as you put it) is wrong—then we may be radically underestimating the difficulty of alignment, and it might end up being much harder to align with the specific & conflicting values of 8 billion people and trillions of animals than it is to just ‘align in principle’ with one example agent.
And that would be very bad news for our species. IMHO, one might even argue that failure to challenge this central dogma in AI safety is a big potential failure mode, and perhaps an X risk in its own right...
Yes, I personally think it was shaped by EY and that broader LessWrong ethos.
I don’t really have a strong sense of whether you’re right about aligning to many agents being much harder than one ideal agent. I suppose that if you have an AI system that can align to one human, then you could align many of them to different randomly selected humans and simulate debates between the resulting agents. You could then consult the humans regarding whether their positions were adequately represented in that parliament. I suppose it wouldn’t be that much harder than just aligning to one agent.
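To make that concrete, here is a minimal sketch of the “align one proxy per sampled human, let the proxies debate/vote, then check back with the humans” idea. Everything in it is my own hypothetical construction (the Human and AlignedProxy classes, the plurality vote standing in for debate, the endorsement check), and it simply assumes the hard one-to-one alignment step is already solved:

```python
# Toy sketch of a "parliament of aligned proxies"; all names and the
# preference model are hypothetical stand-ins, not an established method.
import random
from dataclasses import dataclass

CANDIDATE_POLICIES = ["policy_A", "policy_B", "policy_C"]

@dataclass
class Human:
    name: str
    preferences: dict  # hypothetical stand-in for a person's values: a score per policy

@dataclass
class AlignedProxy:
    principal: Human
    def vote(self) -> str:
        # Assumes one->one alignment is solved: the proxy faithfully reports
        # its principal's most-preferred policy.
        return max(self.principal.preferences, key=self.principal.preferences.get)

def parliament(humans):
    proxies = [AlignedProxy(h) for h in humans]
    votes = [p.vote() for p in proxies]
    winner = max(set(votes), key=votes.count)  # plurality vote stands in for "debate"
    # Consultation step: ask each human whether they feel represented by the outcome.
    endorsements = [h.preferences[winner] >= 0 for h in humans]
    return winner, sum(endorsements) / len(humans)

random.seed(0)
population = [
    Human(f"person_{i}", {p: random.uniform(-1, 1) for p in CANDIDATE_POLICIES})
    for i in range(100)
]
sample = random.sample(population, 9)  # randomly selected principals
winner, endorsement_rate = parliament(sample)
print(winner, f"endorsed by {endorsement_rate:.0%} of the sampled humans")
```

The interesting failure modes, of course, live in exactly the parts this sketch stubs out: whether the proxies are actually faithful, and whether a plurality outcome that a minority strongly opposes counts as being “adequately represented”.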
A broader thought is that you may want to be clear about how an inability to align to n humans would cause catastrophe. It could be directly catastrophic, because it means we make a less ethical AI. Or it could be indirectly catastrophic, because our inability to design a system that aligns to n humans makes nations less able to cooperate, exacerbating any arms race.
I think that it is unfair to characterize it as something that hasn’t been questioned. It has in fact been argued for at length. See e.g. the literature on the inner alignment problem. I agree there are also instrumental reasons supporting this dogma, but even if there weren’t, I’d still believe it and most alignment researchers would still believe it, because it is a pretty straightforward inference to make if you understand the alignment literature.
Could you please say more about this?
I don’t see how the so-called ‘inner alignment problem’ is relevant here, or what you mean by ‘instrumental reasons supporting this dogma’.
And it sounds like you’re saying I’d agree with the AI alignment experts if only I understood the alignment literature… but I’m moderately familiar with the literature; I just don’t agree with some of its key assumptions.
OK, sure.
Instrumental reasons supporting this dogma: The dogma helps us all stay sane and focused on the mission instead of fighting each other, so we have reason to promote it that is independent of whether or not it is true. (By contrast, an epistemic reason supporting the dogma would be a reason to think it is true, rather than merely a reason to think it is helpful/useful/etc.)
Inner alignment problem: Well, it’s generally considered to be an open unsolved problem. We don’t know how to make the goals/values/etc. of the hypothetical superhuman AGI correspond in any predictable way to the reward signal or training setup—I mean, yeah, no doubt there is a correspondence, but we don’t understand it well enough to say “Given such-and-such a training environment and reward signal, the eventual goals/values/etc. of the eventual AGI will be so-and-so.” So we can’t make the learning process zero in on even fairly simple goals like “maximize the amount of diamond in the universe.” For an example of an attempt to do so, a proposal that maaaybe might work, see https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem. Though actually this isn’t even a proposal to get that; it’s a proposal to get the much weaker thing of an AGI that makes a lot of diamond eventually.
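As a purely illustrative toy (my own construction, and closer to goal misgeneralization than to the inner alignment problem in any deep sense): even in tabular Q-learning, the policy that training produces latches onto whatever feature happened to correlate with reward, not onto the goal the designer had in mind. The gridworld, reward placement, and hyperparameters below are arbitrary choices for the sketch:

```python
# Toy gridworld: during training the reward always sits at the rightmost cell,
# so the learned policy is effectively "go right". When the intended goal
# moves at test time, the frozen policy keeps going right, i.e. the training
# signal instilled a proxy behaviour, not the designer's intended goal.
import numpy as np

N = 10               # cells in a 1-D world
ACTIONS = (-1, +1)   # move left / move right
TRAIN_GOAL = N - 1   # reward location during all of training
TEST_GOAL = 0        # the designer's intended goal at test time

def episode(q, goal, train, eps=0.3, alpha=0.5, gamma=0.95, max_steps=50):
    """Run one episode; update the Q-table in place when train=True."""
    s = np.random.randint(N) if train else N // 2
    for _ in range(max_steps):
        if train and np.random.rand() < eps:
            a = np.random.randint(2)           # explore
        else:
            a = int(np.argmax(q[s]))           # act greedily
        s2 = int(np.clip(s + ACTIONS[a], 0, N - 1))
        r = 1.0 if s2 == goal else 0.0
        if train:
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
        s = s2
        if s == goal:
            break
    return s

np.random.seed(0)
q = np.zeros((N, 2))
for _ in range(2000):
    episode(q, TRAIN_GOAL, train=True)         # training: goal fixed at the right edge

final_state = episode(q, TEST_GOAL, train=False)
print(f"policy ends at cell {final_state}, but the intended goal is cell {TEST_GOAL}")
```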
Thanks; those are helpful clarifications. Appreciate it.
My perspective, I think, is that most of the difficulties that people think of as being the extra, hard part of one->many alignment are already present in one->one alignment. A single human is already a barely coherent mess of conflicting wants and goals interacting chaotically, and the strong form of “being aligned to one human” requires a solution that can resolve value conflicts between incompatible ‘parts’ of that human and find outcomes that are satisfactory to all interests. Expanding this to more than one person is a change of degree but not kind.
There is a weaker form of “being aligned to one human” that’s just like “don’t kill that human and follow their commands in more or less the way they intend”, and if that’s all we can get then that only translates to “don’t drive humanity extinct and follow the wishes of at least some subset of people”, and I’d consider that a dramatically suboptimal outcome. At this point I’d take it though.
Hi Robert, thanks for your perspective on this. I love your YouTube videos by the way—very informative and clear, and helpful for AI alignment newbies like me.
My main concern is that we still have massive uncertainty about what proportion of ‘alignment with all humans’ can be solved by ‘alignment with one human’. It sounds like your bet is that it’s somewhere above 50% (maybe?? I’m just guessing); whereas my bet is that it’s under 20% -- i.e. I think that aligning with one human leaves most of the hard problems, and the X risk, unsolved.
And part of my skepticism in that regard is that a great many humans—perhaps most of the 8 billion on Earth—would be happy to use AI to inflict harm, up to and including death and genocide, on certain other individuals and groups of humans. So, AI that’s aligned with frequently homicidal/genocidal individual humans would be AI that’s deeply anti-aligned with other individuals and groups.
Intent alignment seeks to build an AI that does what its designer wants. You seem to want an alternative: build an AI that does what is best for all sentient life (or at least for humanity). Some reasons that we (maybe) shouldn’t focus on this problem:
- it seems horribly intractable (but I’d love to hear your ideas for solutions!) at both a technical and philosophical level—this is my biggest qualm
- with an AGI that does exactly what Facebook engineer no. 13,882 wants, we “only” need that engineer to want things that are good for all sentient life
- (maybe) scenarios with advanced AI killing all sentient life are substantially more likely than scenarios with animal suffering
There are definitely counterarguments to these. E.g. maybe animal suffering scenarios are still higher expected value to work on because of their severity (imagine factory farms continuing to exist for billions of years).
It sounds quite innocuous to ‘build an AI that does what its designer wants’—as long as we ignore the true diversity of what its designers (and users) might actually want.
If an AI designer or user is a misanthropic nihilist who wants humanity to go extinct, or is a religious or political terrorist, or is an authoritarian censor who wants to suppress free speech, then we shouldn’t want the AI to do what they want.
Is this problem ‘horribly intractable’? Maybe it is. But if we ignore the truly, horribly intractable problems in AI alignment, then we increase X risk.
I increasingly get the sense that AI alignment as a field is defining itself so narrowly, and limiting the alignment problems it considers ‘legitimate’ so narrowly, that we could end up in a situation where alignment looks ‘solved’ at a narrow technical level, and this gives reassurance to corporate AI development teams that they can go full steam ahead towards AGI—but where alignment is very, very far from solved at the actual real-world level of billions of diverse people with seriously conflicting interests.
Totally agree that intent alignment does basically nothing to solve misuse risks. To weigh the importance of misuse risks, we should consider (a) how quickly the transition from AI to AGI happens, (b) whether the first group to deploy AGI will use it to prevent other groups from developing AGI, (c) how quickly the transition from AGI to superintelligence happens, (d) how widely accessible AI will be to the public as it develops, (e) the destructive power of AI misuse at various stages of AI capability, etc.
Paul Christiano’s 2019 EAG-SF talk highlights how there are so many other important subproblems within “make AI go well” besides intent alignment. Of course, Paul doesn’t speak for “AI alignment as a field.”
ICYMI: “Steering AI to care for animals, and soon” discusses this, as do some posts in this topic.
Thank you! Appreciate the links.