> … Why wouldn't AIs be good at doing these things? [...]
>
> More broadly, if you object to the implication "superintelligence implies ability to dominate the world", then just take whatever mental property P you think does allow an agent to dominate the world; I suspect both Toby and I would agree with "there is a non-trivial chance that future AI systems will be superhuman at P and so would be able to dominate the world".
I think this is a key point, and that three related/supporting points can also be made:
1. The timeline surveys discussed in this essay related to when AI will be "able to accomplish every task better and more cheaply than human workers", and some of the tasks/jobs asked about would seem to rely on a wide range of skills, including social skills.
2. Intelligence is already often defined in a very expansive way that would likely include all mental properties that may be relevant to world-dominance. E.g., "Intelligence measures an agent's ability to achieve goals in a wide range of environments" (Legg and Hutter; their formal version of this definition is sketched just after this list). That definition is from AI researchers, and I'd guess that AI researchers often have something like that (rather than something like performance on IQ tests) in mind as what they're ultimately aiming for or predicting arrival dates of (though I don't have insider knowledge on AI researchers' views). That said, I do think intelligence is also often used in a narrower way, and so I do value the OP having helped highlight that it's worth being more specific about what abilities are being discussed.
3. I'd guess that there are economic incentives to create AI systems that are very strong in whatever mental abilities are important for achieving goals. If it turns out that a narrow sense of intelligence is not sufficient for that, it doesn't seem likely that people will settle for AI systems that are very strong in only that dimension.
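For what it's worth, the Legg and Hutter quote in point 2 has a formal counterpart in their paper "Universal Intelligence: A Definition of Machine Intelligence" (2007), which makes the "wide range of environments" idea precise. As I understand it (so treat this as a sketch of their measure rather than a definitive statement), the universal intelligence of an agent $\pi$ is:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where $E$ is the set of all computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward $\pi$ achieves in $\mu$. The $2^{-K(\mu)}$ weighting means simpler environments count for more, but every computable environment contributes, which is part of why the definition is so expansive: any mental property that helps achieve goals in some class of environments raises the score.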
Good points!