I’m still not sure how the consciousness issue can just be ignored. Yes, given the assumption that AIs will be mindless machines with no moral value, obviously we need to build them to serve humans. But if AIs will be conscious creatures with moral value like us, then what? In that case, finding the right thing to do seems like a much harder problem, because it would be far from clear that a future in which machine intelligences gradually replace human intelligences is a nightmare scenario, or even an existential risk at all. It’s especially frustrating to see AI-risk folks treat this question as an irrelevance, since it seems to have such enormous implications for how important AI alignment actually is.
(Note that I’m not invoking a ‘ghost in the machine’; I am making the very reasonable guess that our consciousness is a physical process that occurs in our brains, that it’s there for the same reason other features of our minds are there—because it’s adaptive—and that similar functionality might very plausibly be useful for an AI too.)
One implication of a multi-agent scenario is that there would likely be enormous variety in the types of minds that exist, since each mind design could be optimised for a different niche. In such a scenario, it seems quite plausible that each feature of our minds would turn out to be a good solution in at least a few situations, and so would be reimplemented in minds designed for those particular niches.