It treats the problem as a struggle between humans and rogue AIs, giving the incorrect impression that we can (or should) keep AIs under our complete control forever.
I’m confused: surely we should want to avoid an AI coup? We may decide to give up control of our future to a singleton, but if we do this, then it should be intentional.
I agree we should try to avoid an AI coup. Perhaps you are falling victim to the following false dichotomy:
1. We either allow a set of AIs to overthrow our institutions, or
2. We construct a singleton: a sovereign world government managed by AI that rules over everyone.
Notably, there is a third option:
3. We incorporate AIs into our existing social, economic, and legal institutions, flexibly adapting our social structures to cope with technological change without our whole system collapsing.
I wasn’t claiming that these were the only two possibilities here (for example, another possibility would be that we never actually build AGI).
My suspicion is that a lot of your ideas here sound reasonable at the abstract level, but once you dive into what they actually mean at a concrete level and how these mechanisms would concretely operate, it'll be clear that they're a lot less appealing. Anyway, that's just a gut intuition; obvs. it'll be easier to judge when you publish your write-up.