Just skimmed this, but I notice something inconsistent between this and the usual AI doomerism. For instance, above you claim that we should be worried about value lock-in because we will be able to align AI, whereas doomerism says alignment won't work; equally, you state that value drift could be prevented by 'turning the AGI off and on again', which is again at odds with the doomerist claim that we can't do this. I'm unsure what to make of this tension.
Quoting from the post:

Thus, we suspect that an adequate solution to AI alignment could be achieved given sufficient time and effort. (Though whether that will actually happen is a different question, not addressed since our focus is on feasibility rather than likelihood.)
AI doomers tend to agree with this claim. See e.g. Eliezer in his List of Lethalities:
None of this is about anything being impossible in principle. The metaphor I usually use is that if a textbook from one hundred years in the future fell into our hands, containing all of the simple ideas that actually work robustly in practice, we could probably build an aligned superintelligence in six months. (...) What’s lethal is that we do not have the Textbook From The Future telling us all the simple solutions that actually in real life just work and are robust; we’re going to be doing everything with metaphorical sigmoids on the first critical try. No difficulty discussed here about AGI alignment is claimed by me to be impossible—to merely human science and engineering, let alone in principle—if we had 100 years to solve it using unlimited retries, the way that science usually has an unbounded time budget and unlimited retries. This list of lethalities is about things we are not on course to solve in practice in time on the first critical try; none of it is meant to make a much stronger claim about things that are impossible in principle.
Stipulate, for the sake of the argument, that Lukas et al. actually disagree with the doomers about various points. What would follow from that?