Thus, we suspect that an adequate solution to AI alignment could be achieved given sufficient time and effort. (Whether that will actually happen is a separate question, which we do not address here; our focus is on feasibility rather than likelihood.)
AI doomers tend to agree with this claim. See e.g. Eliezer in AGI Ruin: A List of Lethalities:
None of this is about anything being impossible in principle. The metaphor I usually use is that if a textbook from one hundred years in the future fell into our hands, containing all of the simple ideas that actually work robustly in practice, we could probably build an aligned superintelligence in six months. (...) What’s lethal is that we do not have the Textbook From The Future telling us all the simple solutions that actually in real life just work and are robust; we’re going to be doing everything with metaphorical sigmoids on the first critical try. No difficulty discussed here about AGI alignment is claimed by me to be impossible—to merely human science and engineering, let alone in principle—if we had 100 years to solve it using unlimited retries, the way that science usually has an unbounded time budget and unlimited retries. This list of lethalities is about things we are not on course to solve in practice in time on the first critical try; none of it is meant to make a much stronger claim about things that are impossible in principle.