Hi Erich,
Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.
Yes, that's true. Can you spell out for me what you think that implies in a little more detail?
For an agent to conquer the world, I think it would have to be close to the best across all those areas, but I think this is super unlikely based on it being super unlikely for a human to be close to the best across all those areas.
For an agent to conquer the world, I think it would have to be close to the best across all those areas
That seems right.
I think this is super unlikely based on it being super unlikely for a human to be close to the best across all those areas
I'm not sure that follows? I would expect improvements on these types of tasks to be highly correlated in general-purpose AIs. I think we've seen that with GPT-3 to GPT-4, for example: GPT-4 got better pretty much across the board (excluding the tasks that neither of them can do, and the tasks that GPT-3 could already do perfectly). That is not the case for a human, who will typically improve in just one domain or a few domains from one year to the next, depending on where they focus their effort.
I would expect improvements on these types of tasks to be highly correlated in general-purpose AIs.
Higher IQ in humans is correlated with better performance in all sorts of tasks too, but the probability of finding a single human performing better than 99.9 % of (human or AI) workers in each of the areas you mentioned is still astronomically low. So I do not expect a single AI system to become better than 99.9 % of (human or AI) workers in each of the areas you mentioned. It can still be the case that the AI systems share a baseline common architecture, in the same way that humans share the same underlying biology, but I predict the top performers in each area will still be specialised systems.
I think we've seen that with GPT-3 to GPT-4, for example: GPT-4 got better pretty much across the board (excluding the tasks that neither of them can do, and the tasks that GPT-3 could already do perfectly). That is not the case for a human, who will typically improve in just one domain or a few domains from one year to the next, depending on where they focus their effort.
Going from GPT-3 to GPT-4 seems more analogous to a human going from 10 to 20 years old. There are improvements across the board during this phase, but specialisation still matters among adults. Likewise, I assume specialisation will matter among frontier AI systems (although I am quite open to a single future AI system being better than all humans at any task). GPT-4 is still far from being better than 99.9 % of (human or AI) workers in the areas you mentioned.
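(For intuition, here is a minimal sketch of the kind of calculation behind this point; it is not part of the original exchange. It assumes abilities in a handful of hypothetical domains are Gaussian and share a single IQ-like common factor that induces a pairwise correlation rho, and asks how often one agent clears the 99.9th percentile in every domain at once. The choice of 6 domains and the rho values are illustrative assumptions, not figures from the discussion.)

```python
# Illustrative one-factor model (assumed for this sketch, not from the thread):
# ability_i = sqrt(rho) * g + sqrt(1 - rho) * e_i, with g, e_i ~ N(0, 1),
# so any two domains have correlation rho. We compute the probability that a
# single agent is in the top 0.1% of every one of k domains simultaneously.
import numpy as np
from scipy.stats import norm

def p_top_in_all(k=6, rho=0.6, top_frac=0.001, n_grid=4001):
    """P(an agent is in the top `top_frac` of all k domains) under the
    one-factor Gaussian model described above."""
    z = norm.ppf(1 - top_frac)            # per-domain 99.9th-percentile threshold
    g = np.linspace(-10.0, 10.0, n_grid)  # grid over the common factor
    # Conditional on g, the k domain scores are independent, so the joint
    # tail probability is the k-th power of one conditional tail probability.
    cond_tail = norm.sf((z - np.sqrt(rho) * g) / np.sqrt(1.0 - rho))
    integrand = norm.pdf(g) * cond_tail ** k
    return float(np.sum(integrand) * (g[1] - g[0]))  # simple numerical integration

for rho in (0.0, 0.4, 0.8):
    print(f"rho = {rho:.1f}: P(top 0.1% in all 6 domains) = {p_top_in_all(rho=rho):.1e}")
```

With no correlation the joint probability is 0.001^6; even with a strong common factor it stays far below the single-domain 0.1% cutoff, and it shrinks further as the correlation weakens or the number of domains grows, which is the gap the specialisation argument points at.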
Let me see if I can rephrase your argument, because I'm not sure I get it. As I understand it, you're saying:
In humans, higher IQ means better performance across a variety of tasks. This is analogous to AI, where more compute/parameters/data etc. means better performance across a variety of tasks.
AI systems tend to share a common underlying architecture, just as humans share the same basic biology.
For humans, when IQ increases, there are improvements across the board, but still specialization, meaning no single human (the one with the highest IQ) will be better than all other humans at all of those things.
By analogy: For AIs, when they're scaled up, there are improvements across the board, but (likely) still specialization, meaning no single AI (the one with the most compute/parameters/data/etc.) will be better than all other AIs at all of those things.
Now I'm a bit unsure about whether you're saying that you find it extremely unlikely that any AI will be vastly better in the areas I mentioned than all humans, or that you find it extremely unlikely that any AI will be vastly better than all humans and all other AIs in those areas.
If you mean 1-4 to suggest that no AI will be better than all humans and other AIs, I'm not sure whether 4 follows from 1-3, but it seems plausible at least. But if this is what you mean, I'm not sure what your original comment ("Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.") was meant to say in response to my original comment, which was meant as pushback against the view that AGI would be bad at taking over the planet since it wouldn't be intended for that purpose.
If you mean 1-4 to suggest that no AI will be better than all humans, I don't think the analogy holds, because the underlying factor (IQ versus AI scale/algorithms) is different. Like, it seems possible that even unspecialized AIs could just sweep past the most intelligent and specialized humans, given enough time.
Thanks for the clarification, Erich! Strongly upvoted.
Let me see if I can rephrase your argument
I think your rephrasing was great.
Now I'm a bit unsure about whether you're saying that you find it extremely unlikely that any AI will be vastly better in the areas I mentioned than all humans, or that you find it extremely unlikely that any AI will be vastly better than all humans and all other AIs in those areas.
The latter.
If you mean 1-4 to suggest that no AI will be better than all humans and other AIs, I'm not sure whether 4 follows from 1-3, but it seems plausible at least. But if this is what you mean, I'm not sure what your original comment ("Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.") was meant to say in response to my original comment, which was meant as pushback against the view that AGI would be bad at taking over the planet since it wouldn't be intended for that purpose.
I think a single AI agent would have to be better than the vast majority of agents (including both human and AI agents) to gain control over the world, which I consider extremely unlikely given gains from specialisation.
If you mean 1-4 to suggest that no AI will be better than all humans, I don't think the analogy holds, because the underlying factor (IQ versus AI scale/algorithms) is different. Like, it seems possible that even unspecialized AIs could just sweep past the most intelligent and specialized humans, given enough time.
I agree.
I'd be curious to hear if you have thoughts about which specific abilities you expect an AGI would need to have to take control over humanity that it's unlikely to actually possess?
I believe the probability of a rogue (human or AI) agent gaining control over the world mostly depends on its level of capabilities relative to those of the other agents, not on the absolute level of capabilities of the rogue agent. So I mostly worry about concentration of capabilities rather than increases in capabilities per se. In theory, the capabilities of a given group of (human or AI) agents could increase a lot in a short period of time such that capabilities become so concentrated that the group would be in a position to gain control over the world. However, I think this is very unlikely in practice. I guess the annual probability of human extinction over the next 10 years is around 10^-6.
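(A quick arithmetic aside, not part of the original comment: treating the years as independent with a constant annual risk p, an annual probability of 10^-6 compounds to roughly 10^-5 over a decade, since 1 - (1 - p)^10 ≈ 10p for small p.)

```python
# Cumulative risk over a decade under an assumed constant, independent annual risk.
p_annual = 1e-6
p_decade = 1 - (1 - p_annual) ** 10
print(f"{p_decade:.2e}")  # ≈ 1.00e-05, i.e. about 10 * p_annual
```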