My guess is that, even then, there’ll be a lot of people for whom it remains counterintuitive. (People may no longer use the strong word “repugnant” to describe it, but I think many will still find it counterintuitive.)
Which would support my point that many people find the repugnant conclusion counterintuitive not (just) because of aggregation concerns, but also because they have the intuition that adding new people doesn’t make things better.
This is still in the brainstorming stage; I think there's probably a convincing line of argument for "AI alignment difficulty is high, at least on priors" that includes the following points:
Many humans don’t seem particularly aligned to “human values” (not just thinking of dark triad traits, but also things like self-deception, cowardice, etc.)
There’s a loose analogy where AI is “more technological progress,” and “technological progress” so far hasn’t always been aligned to human flourishing (it has solved or improved a lot of long-standing problems of civilization, like infant mortality, but has also created some new ones, like political polarization, obesity, and unhappiness from constant bombardment with images of people who are richer and more successful than you). So, based on this analogy, why think things will somehow fall into place with AI training so that the new powers that be will, for once, become aligned?
AI will accelerate everything, and if you accelerate a system that isn’t set up in a secure way, it goes off the rails (“small issues will be magnified”).