This article seems like a reasonable summary of the pause argument, although it relies a bit too heavily on the “god-like AI” hypothesis.
Like playing chess against grandmaster Magnus Carlsen, we cannot predict the moves he will play, but we can predict the outcome: we lose.
I’ve been seeing this analogy a lot lately, and I think it’s bad. The more intelligent side of a conflict does not always win if the less intelligent side starts with a more advantageous position. I can easily beat Stockfish in chess if I take away its queen, and an angry bear can easily defeat me in a cage match, despite my PhD.
We know that Magnus will beat me in a fair game of chess because there is ample empirical evidence for this, in the form of all his previous games against other players. That’s why we know he is a grandmaster. There is no such empirical basis for knowing the outcome of a human-AI war.
Lastly, while we can’t predict the exact moves Magnus will make, we can make general predictions about how the game will go. For example, we can confidently predict that he won’t make obvious blunders, that his structure will probably be stronger, that he will capitalise on mistakes I make, etc.
I’m not saying it’s absurd to think an AGI would win such a war (though I personally believe it is unlikely), just that if you do think the AGI would win, you have to actually prove it, not rely on faulty analogies.
As you recognise yourself in your linked post:

the real world has secret information, way more possible strategies, the potential for technological advancements, defections and betrayal, etc. which all favor the more intelligent party.
Also, consider that the AI has ingested ~all the world’s information. That, to me, sounds like a huge resource advantage; a huge strategic advantage—it’s not just more intelligent, it’s more knowledgeable.
It’s somewhat hard to outthink a missile headed for your server farm at 800 km/h.
This actually made me think of the AI launching the missile, and the humans not having time to think (see this or this). The AI will have a huge speed advantage over us—we will basically be like plants or rocks to it.
if you do think the AGI would win, you have to actually prove it
What would count as “proof” to you, short of an actual global catastrophe?