To respond to Yampolskiy without disagreeing with the fundamental point, I think it’s definitely possible for a less intelligent species to align or even indefinitely control a boundedly and only slightly more intelligent species, especially given greater resources, speed, and/or numbers, and sufficient effort.
The problem is that humans aren't currently trying to limit these systems, or doing much to monitor them, much less robustly align or control them.
Fair point. But AI is indeed unlikely to top out at merely “slightly more” intelligent. And it has the potential for a massive speed/numbers advantage too.
Yes, by default self-improving AI goes very poorly, but this is a plausible case where we could have aligned AGI, if not ASI.