My perspective, I think, is that most of the difficulties that people think of as the extra, hard part of one->many alignment are already present in one->one alignment. A single human is already a barely coherent mess of conflicting wants and goals interacting chaotically, and the strong form of “being aligned to one human” requires a solution that can resolve values conflicts between incompatible ‘parts’ of that human and find outcomes that are satisfactory to all interests. Expanding this to more than one person is a change of degree but not kind.
There is a weaker form of “being aligned to one human” that’s just “don’t kill that human and follow their commands in more or less the way they intend”, and if that’s all we can get, then it only translates to “don’t drive humanity extinct and follow the wishes of at least some subset of people”. I’d consider that a dramatically suboptimal outcome. At this point I’d take it, though.
I have a couple of videos that talk about this! This one sets up the general idea:
This one talks about how likely this is to happen in practice: