1. For each AGI, there will be tasks that have difficulty beyond its capabilities.
2. You can make the task “subjugate humanity under these constraints” arbitrarily more difficult or undesirable by adding more and more constraints to a goal function.
(Apologies for terseness here; I do appreciate the effort that went into writing this up.)
1. It seems to me you underestimate the capabilities of early AGI. Speed alone is sufficient for superintelligence, FOOM isn’t necessary for AI to be overwhelmingly more mentally capable.
2. One can’t actually make the task “subjugate humanity under these constraints” arbitrarily more difficult or undesirable by adding more constraints to the goal function. Constraints aren’t uncorrelated with each other—you can’t make invading medieval France arbitrarily hard by adding more pikemen, archers, cavalry, walls, trenches, sailboats. Innovative methods to bypass pikemen from outside your paradigm also sidestep archers, cavalry, walls, etc. If you impose all the constraints available to you, they are correlated because you/your culture/your species came up with them. Saying that you can pile on more safeguards to reduce the probability of failure to zero is like saying that if a wall made out of red bricks is only 50% likely to be breached, creating a second wall out of blue bricks will drop the probability of a breach to 25%.
1. It seems to me you underestimate the capabilities of early AGI. Speed alone is sufficient for superintelligence, FOOM isn’t necessary for AI to be overwhelmingly more mentally capable.
I think this comic provides an easy rebuttal here. Speed is by no means sufficient; you also have to be extremely capable and rational in domains outside of your training set. A paranoid schizophrenic conspiracy-theorist AI will probably fail to take over the world, no matter how much computing power it has. I don’t think I’m underestimating early AGI; I think people here are overestimating AI abilities and underestimating just how insanely difficult a task it is to defeat humanity.
2. One can’t actually make the task “subjugate humanity under these constraints” arbitrarily more difficult or undesirable by adding more constraints to the goal function. Constraints aren’t uncorrelated with each other
Sure you can. Straightforwardly, the task “conquer medieval France in one second (including prep time)” is about as close to impossible as you can get, unless you already have access to a supernuke or something.
I think you’re treating the AI as an alien race here, with unknown powers coming from the outside. But that’s ignoring our biggest advantage: we’re the ones who build the damn things. If the French had direct access to the brains of the invading armies, it really would be quite easy to arbitrarily constrain them.