Mostly agree, though maybe not with the last sentence on certain readings (i.e. I'm "only" 95% confident we won't have human-like agents by 2032, not 99.9% confident). But strong agreement on the basic point that "hey, intelligent agents could be dangerous — humans are" is much more convincing than the detailed AI doomer arguments.