There is a huge difference between statement (a): “AI is more dangerous than nuclear war”, and statement (b): “we should, as a last resort, use nuclear weapons to stop AI”. It is irresponsible to downplay the danger and horror of (b) by claiming that Yudkowsky is merely displaying intellectual honesty in making explicit what treaty enforcement entails (not least because everyone who studies or works on international treaties is already aware of this, and is willing to discuss it openly).
Yudkowsky is making a clear and precise declaration of what he is willing to do, if necessary. To see this, one only needs to consider the opposite position, statement (c): “we should not start nuclear war over AI under any circumstance”. Statement (c) can reasonably be included in an international treaty dealing with this problem without the treaty losing all enforceability; there are plenty of other enforcement mechanisms.
Finally, the last thing anyone defending Yudkowsky can claim is that there is only a low probability we would ever need to use nuclear weapons. AI research continuing, even in defiance of such a treaty, is far more probable than AI research leading to human annihilation. Yudkowsky is gambling that by threatening the use of force he will prevent a catastrophe, but there is every reason to believe his threats increase the chances of a similarly devastating catastrophe.
I applaud you for writing this post.