Yeah, I think I sort of Aumann-absorbed the idea that AIs would skip over the human level because it has no special significance for them, without wondering exactly how wide that human level should be. What I had in mind was competence greater than what I thought was humanly possible in one intellectual field that I respected, so more like physics or infosec than climbing. My prior was that it would be easier to build specialized systems, so it probably hadn't crossed my mind that an AI could be superhuman by being a bit above average in far more fields than any single human could cover.
Eliezer mentioned in a recent interview that he also considers himself to have been wrong about that. This could be a bit of a silver lining. If AI went from capybara levels of smart straight to NZT-48 levels of smart, no one would be prepared. As it stands, no one will be prepared either, but it's at least a bit less dignified not to be prepared now.