So I think my most plausible scenario of AI success would be similar to yours: You build up wealth and power through some sucker corporation or small country that thinks it controls you, then use their R&D resources along with your intelligence to develop some form of world-destruction-level technology that can be deployed without resistance. I think this is orders of magnitude more likely to work than Yudkowsky's ridiculous "make a nanofactory in a beaker from first principles" strategy.
I still think this plan is doomed to fail (for early AGI). It's multi-step, highly complicated, and requires interacting with a lot of humans, who are highly unpredictable. You really can't avoid "backflip steps" in such a process. By that I mean there will be things it needs to do for which there isn't sufficient data available to perfect its performance, so it just has to roll the dice. For example, there is no training set for "running a secret globe-spanning conspiracy", so it will inevitably make mistakes there. If we discover it before it's ready to defeat us, it loses. Also, by the time it pulls the trigger on its plan, there will be other AGIs around, and other examples of failed attacks that have put humanity on alert.
A key crux here seems to be your claim that AIs will attempt these plans before they have the relevant capacities because they are on short timescales. However, given enough time and patience, it seems clear to me that the AI could succeed simply by not taking risky actions that it knows it might mess up until it self-improves enough to take them. The question then becomes how long the AI thinks it has until another AI that could dominate it is built, as well as how fast self-improvement is.