Why do you think this? What makes you think that it’s possible at all?[1] And what do you mean by “large minority”? Can you give an approximate percentage?
Or to paraphrase Yampolskiy: it’s not possible for a less intelligent species to indefinitely control a more intelligent species.
To respond to Yampolskiy without disagreeing with the fundamental point, I think it’s definitely possible for a less intelligent species to align or even indefinitely control a boundedly and only slightly more intelligent species, especially given greater resources, speed, and/or numbers, and sufficient effort.
The problem is that humans aren’t currently trying to limit these systems, or trying much to monitor them, much less robustly align or control them.
Fair point. But AI is indeed unlikely to top out at merely “slightly more” intelligent. And it has the potential for a massive speed/numbers advantage too.
Yes, by default self-improving AI goes very poorly, but this is a plausible case where we could have aligned AGI, if not ASI.