Your definition is trivially true: all it requires is that it not be physically impossible for an AGI to have a specified goal. But that doesn’t show that all goals are equally likely to occur, or even that an AGI will have “goals” at all.
Yes, of course (hence the footnote).
The way I see it deployed in practice is to say that a “dumb” AI will have some silly goal like “build squiggles”, will go through an intelligence scale-up, and will keep that goal in hyper-intelligent form (and pursuing that goal will then result in disaster).
My reading of the doomer view (which I don’t necessarily endorse) is quite different: a dumb AI starts with some useful goal, goes through an intelligence scale-up that slightly perturbs its goal in some direction—and because goals compatible with human life are a tiny thread winding their way through a stupidly high-dimensional manifold of all possible goals, ends up misaligned by default.
This doesn’t especially hinge on whether these perturbations can point in any direction or only a few (as would be the case if goals are strongly constrained by architecture), except in the case where they run only along the human-survival curve: any transverse component whatsoever means you get pushed off-manifold almost always. And that exception is plausible (I think) only if human values are not a tiny golden thread but actually rather large and fuzzily full-dimensional.
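To make the dimension-counting intuition concrete, here is a toy numerical sketch of my own (not something from the thread): treat “goal space” as R^d and the human-compatible goals as a k-dimensional subspace, with d and k chosen arbitrarily for illustration. A random unit perturbation then has an expected squared projection of only k/d onto that subspace, so as d grows almost all of any perturbation is transverse to it.

```python
import numpy as np

# Toy sketch (my own assumption-laden illustration): model "goal space" as R^d
# and "human-compatible" goals as a random k-dimensional subspace of it.
# A uniform random unit perturbation has expected squared projection k/d onto
# that subspace, so for large d almost all of the perturbation is transverse.
rng = np.random.default_rng(0)

def aligned_fraction(d, k, trials=2000):
    # Orthonormal basis for a random k-dimensional "aligned" subspace of R^d.
    basis, _ = np.linalg.qr(rng.normal(size=(d, k)))
    # Random unit perturbations of the goal vector.
    v = rng.normal(size=(trials, d))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # Average fraction of each perturbation's squared length inside the subspace.
    return np.mean(np.sum((v @ basis) ** 2, axis=1))

for d in (10, 100, 1000, 10000):
    print(f"d={d:>6}: aligned fraction ~ {aligned_fraction(d, k=3):.4f} (theory: {3/d:.4f})")
```

This is only the “tiny golden thread” case, of course; if human values were fuzzily full-dimensional, k would be comparable to d and the transverse argument would lose its force.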
I think there are different variations of the doomer argument out there; your version is probably the strongest, while mine is more common in introductory texts.
I think the OP does point out one possible way the argument could fail: if there turned out to be a sufficiently high correlation between human-aligned values and AI performance. One plausible mechanism would be a very slow takeoff in which the AI is not deceptive and is deleted whenever it tries to do misaligned things, creating evolutionary pressure towards friendliness.
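Here is a toy sketch of that selection dynamic, again my own illustration rather than anything from the thread, with the population size, deployment threshold, and mutation noise all made up for the example: if models below some alignment threshold get deleted each generation and survivors are copied with small mutations, average alignment drifts upward.

```python
import numpy as np

# Toy sketch (my own assumptions): models with alignment below a deployment
# threshold are deleted each generation; survivors are copied with small
# mutations. Mean alignment of the population drifts upward over time.
rng = np.random.default_rng(0)

population = rng.uniform(0.0, 1.0, size=200)   # alignment scores in [0, 1]
threshold, noise = 0.5, 0.05                   # arbitrary illustrative values

for generation in range(20):
    survivors = population[population >= threshold]
    # Repopulate from survivors with small mutations, clipped back to [0, 1].
    children = rng.choice(survivors, size=population.size)
    children += rng.normal(0.0, noise, size=population.size)
    population = np.clip(children, 0.0, 1.0)
    print(f"gen {generation:2d}: mean alignment = {population.mean():.3f}")
```

Whether anything like this selection pressure operates in practice depends entirely on the no-deception and slow-takeoff assumptions above.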
Really though, my main objections to the doomers concern other points. I simply do not believe that “misalignment = death”. As an example, a suicidal AI that developed the urge to shut itself down at all costs would be misaligned but not fatal to humanity.