I wouldn’t reject it as a possibility. MIRI wants AGI to have good consequences for human freedom, happiness, etc., but any big increase in power raises the risk that the power will be abused. Ideally we’d want the AI to resist being misused, but there’s a tradeoff between ‘making the AI more resistant to misuse by its users (when the AI is right and the user is wrong)’ and ‘making the AI more amenable to correction by its users (when the AI is wrong and the user is right).’
I wouldn’t say it’s inevitable either, though. It doesn’t appear to me that past technological growth has tended to increase how totalitarian the average state is.