I accept that I don’t know the actual procedure for firing a nuclear weapon. And no one in the West knows what North Korea’s nuclear weapons cybersecurity is like, and ChatGPT tells me it’s connected to digital networks. So there’s real uncertainty here, and I wouldn’t dismiss outright the possibility that nuclear weapons would be more likely to be hacked if a superintelligence existed. Based on what I know, I’d guess maybe a 10-20% chance that hacking nuclear weapons is possible.
And I agree that it may be impossible to create Drexlerian-style nanotech. Maybe a 0.5% chance an ASI could do something like that?
But I don’t think the debate here is about any particular scenario that I came up with.
I think if I tried hard I could come up with about 20 scenarios in which an artificial superintelligence might be able to destroy humanity (if you really want me to, I can try to list them). I’d guess those scenarios would each have an average chance of actually working of around 1-2%, so maybe around a 10% chance that at least one of them would work.
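To sanity-check that ballpark, here’s a minimal sketch, assuming purely for illustration that the 20 scenarios are independent, each with a 1-2% chance of working (the function name and numbers are just illustrative, not anything from the debate itself). Under independence the combined figure comes out above 10%; treating the scenarios as partially correlated, as they presumably are, pulls it back down toward the 10% I gave above.

```python
# Rough sanity check on the "20 scenarios at 1-2% each" arithmetic.
# Assumes, purely for illustration, that the scenarios are independent;
# in reality they would share failure modes, so the true combined
# probability would sit below these numbers.

def p_at_least_one_works(p_each: float, n_scenarios: int = 20) -> float:
    """Probability that at least one of n independent scenarios works."""
    return 1 - (1 - p_each) ** n_scenarios

for p in (0.01, 0.015, 0.02):
    print(f"p_each = {p:.1%}: P(at least one works) ~ {p_at_least_one_works(p):.0%}")

# p_each = 1.0%: P(at least one works) ~ 18%
# p_each = 1.5%: P(at least one works) ~ 26%
# p_each = 2.0%: P(at least one works) ~ 33%
```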
But are you saying that the chance of an ASI being able to kill us is 0%? In which case every conceivable scenario (including any plan an ASI could come up with) would have to have a 0% chance of working? I just don’t find that plausible; human civilisation isn’t that robust. Surely there’s at least a 10% chance that one of those plans could work, right? In which case significant effort on AI safety to mitigate this risk is warranted.