I agree that a lot of people believe alignment to any agent is the hard part, and aligning to a particular human is relatively easy, or a mere “AI capabilities” problem. Why? I think it’s a sincere belief, but ultimately most people think it because it’s an agreed assumption by the AIS community, held for a mixture of intrinsic and instrumental reasons. The intrinsic reasons are that a lot of the fundamental conceptual problems in AI safety seem not to care which human you’re aligning the AI system to, e.g. the fact that human values are complex, that wireheading may arise, and that it’s hard to describe how the AI system should want to change its values over time.
The instrumental reason is that it’s a central premise of the field, similar to the “DNA -> RNA -> protein -> cellular functions” perspective in molecular biology. The vision for AIS as a field is that we try not to indulge futurist and political topics, and we try not to argue with each other about things like whose values to align the AI to.
You can see some of this instrumentalist perspective in Eliezer’s Coherent Extrapolated Volition paper:
Anyone who wants to argue about whether extrapolated volition will favor Democrats or Republicans should recall that currently the Earth is scheduled to vanish in a puff of tiny smiley faces, with an unknown deadline and Moore’s Law ticking. As an experiment, I am instituting the following policy on the SL4 mailing list: None may argue on the SL4 mailing list about the output of CEV, or what kind of world it will create, unless they donate to the Singularity Institute:
• $10 to argue for 48 hours.
• $50 to argue for one month.
• $200 to argue for one year.
• $1000 to get a free pass until the Singularity.
Past donations count toward this total. It’s okay to have fun, and speculate, so long as you’re not doing it at the expense of actually helping.
Presumably the prices have gone up with the increased EA wealth, and down again this year.
Ryan—thanks for this helpful post about the ‘central dogma’ in AI safety.
It sounds like much of this view may have been shaped by Yudkowsky’s initial writings about alignment and coherent extrapolated volition? And maybe reflects a LessWrong ethos that cosmic-scale considerations mean we should ignore current political, religious, and ideological conflicts of values and interests among humans?
My main concern here is that if this central dogma about AI alignment (that ‘alignment to any agent is the hard part, and aligning to a particular human is relatively easy, or a mere “AI capabilities” problem’, as you put it) is wrong—then we may be radically underestimating the difficulty of alignment, and it might end up being much harder to align with the specific & conflicting values of 8 billion people and trillions of animals than it is to just ‘align in principle’ with one example agent.
And that would be very bad news for our species. IMHO, one might even argue that failure to challenge this central dogma in AI safety is a big potential failure mode, and perhaps an X risk in its own right....
Yes, I personally think it was shaped by EY and that broader LessWrong ethos.
I don’t really have a strong sense of whether you’re right about aligning to many agents being much harder than one ideal agent. I suppose that if you have an AHI system that can align to one human, then you could align many of them to different randomly selected humans, and simulate debates between the resulting agents. You could then consult the humans regarding whether their positions were adequately represented in that parliament. I suppose it wouldn’t be that much harder than just aligning to one agent.
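To make that procedure a bit more concrete, here’s a very rough sketch. The names (simulate_parliament, align_to, run_debate, endorses_representation) are hypothetical stand-ins for capabilities we don’t actually have; this is just a restatement of the idea above, not a claim that we know how to build any of the pieces.

```python
import random

def simulate_parliament(population, align_to, run_debate, endorses_representation,
                        n_delegates=100, seed=0):
    """Sketch: align one system per sampled human, let the aligned copies debate,
    then ask the original humans whether the debate represented their positions."""
    rng = random.Random(seed)
    delegates = rng.sample(population, n_delegates)
    # One aligned system per randomly selected human (align_to is hypothetical).
    agents = {human: align_to(human) for human in delegates}
    # Simulated negotiation among the aligned copies (run_debate is hypothetical).
    transcript = run_debate(list(agents.values()))
    # Consult the original humans: did the parliament adequately represent them?
    approvals = [endorses_representation(human, transcript) for human in delegates]
    return transcript, sum(approvals) / len(approvals)
```

All of the hard alignment work is hidden inside align_to; the point is just that going from one aligned system to a many-human parliament adds relatively little extra machinery on top.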
A broader thought is that you may want to be clear about how an inability to align to n humans would cause catastrophe. It could be directly catastrophic, because it means we make a less ethical AI. Or it could be indirectly catastrophic, because our inability to design a system that aligns to n humans makes nations less able to cooperate, exacerbating any arms race.
I think that it is unfair to characterize it as something that hasn’t been questioned. It has in fact been argued for at length. See e.g. the literature on the inner alignment problem. I agree there are also instrumental reasons supporting this dogma, but even if there weren’t, I’d still believe it and most alignment researchers would still believe it, because it is a pretty straightforward inference to make if you understand the alignment literature.
Could you please say more about this?
I don’t see how the so-called ‘inner alignment problem’ is relevant here, or what you mean by ‘instrumental reasons supporting this dogma’.
And it sounds like you’re saying I’d agree with the AI alignment experts if only I understood the alignment literature… but I’m moderately familiar with the literature; I just don’t agree with some of its key assumptions.
OK, sure.
Instrumental reasons supporting this dogma: The dogma helps us all stay sane and focused on the mission instead of fighting each other, so we have reason to promote it that is independent of whether or not it is true. (By contrast, an epistemic reason supporting the dogma would be a reason to think it is true, rather than merely a reason to think it is helpful/useful/etc.)
Inner alignment problem: Well, it’s generally considered to be an open unsolved problem. We don’t know how to make the goals/values/etc of the hypothetical superhuman AGI correspond in any predictable way to the reward signal or training setup—I mean, yeah, no doubt there is a correspondence, but we don’t understand it well enough to say “Given such-and-such a training environment and reward signal, the eventual goals/values/etc of the eventual AGI will be so-and-so.” So we can’t make the learning process zero in on even fairly simple goals like “maximize the amount of diamond in the universe.” For an example of an attempt to do so, a proposal that maaaybe might work, see https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem Though actually this isn’t even a proposal to get that, it’s a proposal to get the much weaker thing of an AGI that makes a lot of diamond eventually.
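To illustrate the underdetermination point with a toy example (mine, purely illustrative; the features and objectives below are made up and aren’t from the linked post): two candidate inner objectives can agree with the reward signal on every training example and still come apart off-distribution.

```python
# Toy illustration only: behaviour on the training distribution does not pin down
# which objective a learner has internalized.

# Training states: "has_diamond" and "is_shiny" are perfectly correlated.
train_states = [{"has_diamond": True,  "is_shiny": True},
                {"has_diamond": False, "is_shiny": False}] * 50

# Off-distribution states: the correlation breaks.
test_states = [{"has_diamond": True,  "is_shiny": False},
               {"has_diamond": False, "is_shiny": True}]

def intended_objective(state):
    # What the designer rewards: a diamond is present.
    return 1.0 if state["has_diamond"] else 0.0

def proxy_objective(state):
    # What a learner could just as well have internalized: shininess.
    return 1.0 if state["is_shiny"] else 0.0

# The reward signal cannot distinguish the two objectives during training...
assert all(intended_objective(s) == proxy_objective(s) for s in train_states)

# ...but they disagree on every off-distribution state.
disagreements = sum(intended_objective(s) != proxy_objective(s) for s in test_states)
print(f"Objectives disagree on {disagreements}/{len(test_states)} off-distribution states")
```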
Thanks; those are helpful clarifications. Appreciate it.