I agree that the extent to which individual humans are rational agents is often overstated. Nevertheless, there are many examples of humans who spend decades striving towards distant and abstract goals, who learn whatever skills and perform whatever tasks are required to reach them, and who strategically plan around or manipulate the actions of other people. If AGI is anywhere near as agentlike as humans in the sense of possessing the long-term goal-directedness I just described, that’s cause for significant concern.
A lifetime spent learning to be a 9th Dan master at Go, perhaps? Built on the back of thousands of years of human knowledge and wisdom? Demolished in hours… I still look at the game and it looks incredibly abstract!
Don’t get me wrong, I am really concerned. I just consider the danger much closer than others do, but also more soluble if we look at the right problem and ask the right questions.