What are your reasons for being worried?
More simplistic ones. Machines are getting smarter and more complex, and they have the potential to surpass humans in intelligence, in the sense of being able to do the things we can do, or harder things we haven't cracked yet, all while having a vast advantage in computing power and speed. The stories we invent about how machines could get out of control are often weird and require them to "think outside the box" and reason about themselves; but since we ourselves can do those things, there's no reason a machine couldn't. All of this, combined with the perils of maximization (push hard enough on any proxy objective and you get outcomes nobody asked for).
The thing is, every part of this might or might not happen. Machine intelligence may remain too narrow to do any of this. Or it may never decide to break out of its cage. Or we may find ways to contain it by the time any of this happens. Given the current state of AI, I strongly doubt any of it will happen soon.
Mostly agree, though maybe not with the last sentence on certain readings (i.e. I'm "only" 95% confident we won't have human-like agents by 2032, not 99.9% confident). But strong agreement that the basic "hey, intelligent agents could be dangerous, humans are" argument is much more convincing than detailed AI doomer stuff.