I don’t see how the so-called ‘inner alignment problem’ is relevant here, or what you mean by ‘instrumental reasons supporting this dogma’.
And it sounds like you’re saying I’d agree with the AI alignment experts if only I understood the alignment literature… but I’m moderately familiar with the literature; I just don’t agree with some of its key assumptions.
Could you please say more about this?
OK, sure.
Instrumental reasons supporting this dogma: The dogma helps us all stay sane and focused on the mission instead of fighting each other, so we have reason to promote it that is independent of whether or not it is true. (By contrast, an epistemic reason supporting the dogma would be a reason to think it is true, rather than merely a reason to think it is helpful/useful/etc.)
Inner alignment problem: Well, it’s generally considered to be an open, unsolved problem. We don’t know how to make the goals/values/etc. of the hypothetical superhuman AGI correspond in any predictable way to the reward signal or training setup. I mean, yeah, no doubt there is a correspondence, but we don’t understand it well enough to say “Given such-and-such a training environment and reward signal, the goals/values/etc. of the eventual AGI will be so-and-so.” So we can’t make the learning process zero in on even fairly simple goals like “maximize the amount of diamond in the universe.” For an example of an attempt to do so, a proposal that maaaybe might work, see https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem (though actually that post isn’t even a proposal to get that; it’s a proposal for the much weaker thing of an AGI that eventually makes a lot of diamond).
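To make that concrete, here’s a tiny toy sketch (my own illustrative example, with a made-up environment, reward function, and “policy”; it’s not from the linked post and isn’t meant as a real proposal). The training loop only ever consumes a scalar reward, and nothing in the code pins down what objective the trained system ends up pursuing, especially off the training distribution:

```python
# Toy sketch only: made-up environment, reward, and policy, for illustration.
import random

def make_env():
    return {"diamond": 0, "steps_left": 10}

def step(state, action):
    if action == "mine":
        state["diamond"] += 1
    state["steps_left"] -= 1
    return state

def diamond_reward(state):
    # The *outer* objective: amount of diamond at the end of an episode.
    return state["diamond"]

def act(weights):
    # Trivial stand-in for a learned policy: noisy argmax over action weights.
    return max(weights, key=lambda a: weights[a] + random.gauss(0, 0.1))

def train(episodes=500, lr=0.01):
    weights = {"mine": 0.0, "idle": 0.0}
    for _ in range(episodes):
        state, taken = make_env(), []
        while state["steps_left"] > 0:
            a = act(weights)
            state = step(state, a)
            taken.append(a)
        r = diamond_reward(state)
        for a in taken:           # reinforce whatever was done, scaled by the
            weights[a] += lr * r  # single scalar the loop gets to "see"
    return weights

if __name__ == "__main__":
    print(train())
    # In this toy case the weights end up favoring "mine". For a large learned
    # system, the analogous claim ("the trained AGI will value diamond") is
    # exactly the step we don't know how to justify.
```

In the toy case you can just inspect the two weights; the difficulty with a superhuman AGI is that there’s no analogous step where we get to read its goals off the training setup.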
Thanks; those are helpful clarifications. Appreciate it.