… Why wouldn’t AIs be good at doing these things? [...]
More broadly, if you object to the implication “superintelligence implies ability to dominate the world”, then just take whatever mental property P you think does allow an agent to dominate the world; I suspect both Toby and I would agree with “there is a non-trivial chance that future AI systems will be superhuman at P and so would be able to dominate the world”.
I think this is a key point, and that three related/supporting points can also be made:
The timeline surveys discussed in this essay relate to when AI will be “able to accomplish every task better and more cheaply than human workers”, and some of the tasks/jobs asked about would seem to rely on a wide range of skills, including social skills.
Intelligence is already often defined in a very expansive way that would likely include all mental properties that may be relevant to world-dominance. E.g., “Intelligence measures an agent’s ability to achieve goals in a wide range of environments” (Legg and Hutter). That definition is from AI researchers, and I’d guess that AI researchers often have something like that (rather than something like performance on IQ tests) in mind as what they’re ultimately aiming for or predicting arrival dates of (though I don’t have insider knowledge on AI researchers’ views).
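For readers who want the formal version of that quoted definition: Legg and Hutter (2007) make it precise as “universal intelligence”, roughly an agent’s reward-weighted performance across all computable environments, with simpler environments weighted more heavily. This is a sketch following their notation, not something from the essay itself:

```latex
\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected total reward agent $\pi$ achieves in $\mu$. The relevant point here: nothing in this definition restricts “intelligence” to IQ-test-like tasks; any mental ability that helps achieve goals in some environment counts.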
That said, I do think intelligence is also often used in a narrower way, and so I do value the OP having helped highlight that it’s worth being more specific about what abilities are being discussed.
I’d guess that there are economic incentives to create AI systems that are very strong in whatever mental abilities are important for achieving goals. If it turns out that a narrow sense of intelligence is not sufficient for that, it doesn’t seem likely that people will settle for AI systems that are very strong in only that dimension.
Good points!