This is akin to expecting a single paper titled “The Solution to Geopolitical Stability” or “How to Achieve Permanent Marital Bliss.” These are not problems that are solved; they are conditions that are managed, maintained, and negotiated on an ongoing basis.
If I have an automated system that is generally competent at managing, maintaining, and negotiating, can I not say I have the solution to those things? This is the sense in which people mean "solving alignment". It is a lofty goal, yes. I don't think it's incoherent, but I do tend to think it means a system that both "wins" (against other systems, including humans) and is "aligned" (behaves, to the greatest extent possible while still winning, in the general best interest of <insert X>). Take that how you will; many do not think aiming for such a thing is advisable, given the implications of "winning".
The "to whom" is either not considered part of alignment itself, or it is assumed to be 5. on your list. 1, 2, 3 & 4 would not typically be considered "solving alignment", although 1. is sometimes advocated for by hardcore believers in democracy. I personally think that if it's not 5., it's not really achieving anything of note, as it still leaves the world with many competing, mutually malign agents.
This is all to defend the technical (but arguably useless) meaning of “solving alignment”. I agree with everything else in your post. It is absolutely a “wicked problem”, involving competition with intelligent adversaries, an evolving landscape, and a moving target.