It does seem that alignment researchers often focus on the case of aligning AI with a single human. Here are some views that might explain this; I think they are at least somewhat common among alignment researchers.
- Aligning with a single human contains most of the difficulty of aligning with groups of humans. Once we figure out how to align AI with a single human, figuring out how to align it with groups of humans will be relatively easy. We should focus on the hard part first, which is aligning AI with a single human. (edit: I am not saying that aligning with a single human is harder than aligning with groups of humans. See also my comment below.)
- If AI is aligned with a single random human, this is still much better than unaligned AI. Therefore this kind of research is very valuable.
- If the AI acts according to the CEV (coherent extrapolated volition) of a single random human, then the results will probably be good for humanity as a whole.
You are probably already aware, but someone recently drafted an email intending to send it, and was convinced not to send it.