More simplistic ones. Machines are getting smarter and more complex and have the potential to surpass humans in intelligence, in the sense of being able to do the things we can do or harder things we haven’t cracked yet, all the while having a vast advantage in computing power and speed. Stories we invent about how machines can get out of control are often weird and require them to ‘think outside the box’ and reason about themselves—but since we ourselves can do it, there’s no reason a machine couldn’t. All of this, together with the perils of maximization.
The thing is, every part of this might or might not happen. Machine intelligence may remain too narrow to do any of this. Or it may never decide to break out of its cage. Or we may find ways to contain it before any of this happens. Given the current state of AI, I strongly doubt any of this will happen soon.
Mostly agree, though maybe not with the last sentence on certain readings (i.e., I'm "only" 95% confident we won't have human-like agents by 2032, not 99.9% confident). But strong agreement that the basic argument, "hey, intelligent agents could be dangerous; humans are," is much more convincing than detailed AI doomer stuff.