Thanks for the clarification, Erich! Strongly upvoted.
Let me see if I can rephrase your argument.
I think your rephrasing was great.
Now I'm a bit unsure about whether you're saying that you find it extremely unlikely that any AI will be vastly better in the areas I mentioned than all humans, or that you find it extremely unlikely that any AI will be vastly better than all humans and all other AIs in those areas.
The latter.
If you mean 1-4 to suggest that no AI will be better than all humans and other AIs, I'm not sure whether 4 follows from 1-3, but I think it seems plausible at least. But if this is what you mean, I'm not sure what your original comment ("Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.") was meant to say in response to my original comment, which was meant as pushback against the view that AGI would be bad at taking over the planet since it wouldn't be intended for that purpose.
I think a single AI agent would have to be better than the vast majority of agents (including both human and AI agents) to gain control over the world, which I consider extremely unlikely given gains from specialisation.
If you mean 1-4 to suggest that no AI will be better than all humans, I don't think the analogy holds, because the underlying factor (IQ versus AI scale/algorithms) is different. Like, it seems possible that even unspecialized AIs could just sweep past the most intelligent and specialized humans, given enough time.
I agree.
I'd be curious to hear if you have thoughts about which specific abilities you expect an AGI would need to have to take control over humanity that it's unlikely to actually possess?
I believe the probability of a rogue (human or AI) agent gaining control over the world mostly depends on its level of capabilities relative to those of the other agents, not on the absolute level of capabilities of the rogue agent. So I mostly worry about concentration of capabilities rather than increases in capabilities per se. In theory, the capabilities of a given group of (human or AI) agents could increase a lot in a short period of time such that capabilities become so concentrated that the group would be in a position to gain control over the world. However, I think this is very unlikely in practice. I guess the annual probability of human extinction over the next 10 years is around 10^-6.