> Why wouldn't AIs be good at doing these things? [...]
> More broadly, if you object to the implication "superintelligence implies ability to dominate the world", then just take whatever mental property P you think does allow an agent to dominate the world; I suspect both Toby and I would agree with "there is a non-trivial chance that future AI systems will be superhuman at P and so would be able to dominate the world".
I think this is a key point, and that three related/supporting points can also be made:
1. The timeline surveys discussed in this essay relate to when AI will be "able to accomplish every task better and more cheaply than human workers", and some of the tasks/jobs asked about would seem to rely on a wide range of skills, including social skills.
2. Intelligence is already often defined in a very expansive way that would likely include all mental properties that may be relevant to world-dominance. E.g., "Intelligence measures an agent's ability to achieve goals in a wide range of environments" (Legg and Hutter; a rough formal version is sketched after this list). That definition is from AI researchers, and I'd guess that AI researchers often have something like that (rather than something like performance on IQ tests) in mind as what they're ultimately aiming for or predicting arrival dates of (though I don't have insider knowledge on AI researchers' views).

   That said, I do think intelligence is also often used in a narrower way, and so I do value the OP having helped highlight that it's worth being more specific about what abilities are being discussed.
3. I'd guess that there are economic incentives to create AI systems that are very strong in whatever mental abilities are important for achieving goals. If it turns out that a narrow sense of intelligence is not sufficient for that, it doesn't seem likely that people will settle for AI systems that are very strong in only that dimension.
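To be concrete about how expansive the Legg and Hutter definition is, here is a rough sketch of the formal "universal intelligence" measure from their paper (my paraphrase, not something stated in the essay): an agent's intelligence is its reward-weighted performance across all computable environments, with simpler environments weighted more heavily.

$$
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
$$

Here $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (so simpler environments get more weight), and $V_\mu^\pi$ is the expected total reward agent $\pi$ achieves in $\mu$. The "wide range of environments" in the informal definition corresponds to summing over all of $E$, which is why this sense of intelligence plausibly covers essentially any mental ability that helps an agent achieve goals.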
Good points!