I think the choice of reference class is itself a major part of the object-level argument. For example, instead of asking “do more intelligent entities disempower less intelligent entities?”, why not ask “does the side of a war that starts off with vastly more weapons, manpower, and resources usually win?”. Or “do test subjects usually escape and overpower their captors?” Or “has any intelligent entity ever existed without sufficient flaws to prevent it from executing world domination?”. These reference classes suggest a much lower estimate.
Now, all of these reference classes are flawed in that none of them corresponds one-to-one with the actual situation at hand. But neither does yours! For example, in none of the previous cases of a higher intelligence overpowering a lower intelligence has the lower intelligence had the ability to write the brain of the higher intelligence. Is this a big factor or a small one? Who knows?
As for b), I just don’t agree that predictions about the outcome of future AI wars are in the same class as questions like “will there be manned missions to Mars?” or tasks like predicting the smartphone.
Anyway, I’m not too interested in going in depth on the object level right now. Ultimately I’ve only barely scratched the surface of the flaws leading to overestimation of AI risk, and it will take time to break through, so I thank you for your illuminating discussion!
I agree that the choice of reference class matters a lot and is non-obvious (and hope I didn’t imply otherwise!).