Just because something is difficult doesn’t mean it isn’t worth trying to do, or at least trying to learn more about so you have some sense of what to do. Calling something “unknowable”—when the penalty for not knowing it is “civilization might end with unknown probability”—is a claim that should be challenged vociferously, because if it turns out to be wrong in any aspect, that’s very important for us to know.
> I cannot imagine future humans being so stupid as to have AI connected to the internet and a robot army able to be hijacked by said AI at the same time.
I’d recommend reading more about how people worried about AI conceive of the risk; I’ve heard zero people in all of EA say that this scenario is what worries them. There are many places you could start: Stuart Russell’s “Human Compatible” is a good book, but there’s also the free Wait But Why series on superintelligence (plus Luke Muehlhauser’s blog post correcting some errors in that series).
There are many good reasons to think that AI risk may be fairly low (this is an ongoing debate in EA), but before you say one side is wrong, you have to understand what they really believe.