Matthew, I have to take issue with your numbers.

I believe the chance of a worldwide AI pause is under 1 percent.
In fact I think it is a flat zero. The reason is simple.
The reason a world government can’t happen is that certain parties will disagree with it. The obvious ones are China and Russia, but there are others as well.
Those parties have vast nuclear arsenals and the ability, at any time of their choosing, to turn the keys and kill essentially the entire urban population of the Western world.
You would need to invade them to destroy their chip fabs.
They have explicitly stated that, were they facing an invasion, they would turn the keys.
China in particular is expanding its nuclear arsenal right now.
Now yes, right now the West has a stranglehold on IC fabrication technology, probably a comfortable 5-10 year lead. That won’t last through an indefinite AI ban: a model just slightly stronger than what is banned could let the party holding it develop its technology faster, and so on in a runaway feedback loop. China has also said nothing publicly in support of a ban, and has recently stated that it intends to replicate the capacity of the human brain.
I haven’t even addressed the market dynamics on the Western side. Where does the money to lobby for AI bans come from? The money for lobbying against bans comes out of the hundreds of billions of dollars flooding into AI right now.
It is possible that AI bans will be an orphan issue like animal rights, which neither major political party supports.
Can you please expand on your reasoning? How do you get from a flat 0 (race to AGI) to 10-50 percent? What causes the probability shift? There is no scientific or empirical evidence for AGI dangers at this time, just a bunch of convincing arguments without proof.
> Can you please expand on your reasoning? How do you get from a flat 0 (race to AGI) to 10-50 percent? What causes the probability shift?
Sure. I think there are natural reasons for people to fear AI. It will probably take their job, and therefore their ability to earn income through work. There is also a sizable portion of intellectuals who think that AI will probably lead to human extinction if we do not take drastic measures, and these intellectuals influence policy.
Humans tend to be fairly risk-averse about many powerful new technologies. For example, many politicians are currently seeking to strictly regulate tech companies out of traditional concerns regarding the internet and computers, which I personally find kind of baffling. AIs will also be fairly alien, and they seem likely to take over management of the world if we let them have that kind of control.
Environmentalists might fear that uncontrolled AI growth will lead to an environmental catastrophe. Cultural conservatives could fear the decay of traditional values in a post-AGI world. We could go through a list of popular ideologies and find similar reasons for fear in most of them.
It doesn’t seem surprising, given all these factors, that people will want to put a long pause on AI, even given the incentives to race to the finish line. The status quo is well-guarded, albeit against a formidable foe. If that reasoning doesn’t get you above a 10% chance of a >10-year AI delay, then I’m honestly a bit surprised.
The 0 is because it’s a worldwide AI pause: EU AND UK AND China AND Russia AND Israel AND Saudi Arabia AND USA AND Canada AND Japan AND Taiwan.
Those are all the parties that would be capable of competing even in the face of sanctions. Russia maybe doesn’t belong on the list, but if the AI pause has no effective controls (someone sells inference and training accelerators, and Russia can buy them), then there is no pause.
Let’s see: 10 parties. If each independently decides on AI pausing with a 20 percent chance, the probability that all of them do is 0.2^10, a number that’s basically 0.
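As a quick sanity check on that multiplication (a minimal sketch assuming the ten decisions really are independent, an assumption disputed later in the thread):

```python
# Chance that all 10 parties choose to pause, if each does so
# independently with probability 0.2. Both numbers are illustrative.
p_pause = 0.2
n_parties = 10

p_all_pause = p_pause ** n_parties
print(f"{p_all_pause:.3e}")  # 1.024e-07, about 1 in 10 million
```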
Another issue: you might think “peer pressure” would couple the decisions together. Except... think about the gain from defecting. It rises with the number of AI pausers. If you are the only defector, you take the planet and have a high chance of winning.
The only thing an AI pauser who finds out too late can do is threaten to nuke; their conventional military would be slaughtered by drone swarms. But the parties I mentioned either have nuclear arsenals now or could build one in 1-2 years (Saudi Arabia couldn’t, but the others could). And that’s without the help of AGI to mass-produce the missiles.
So in this scenario the pauser parties face a choice between “surrender to the new government and hope it’s not that bad” and “death of the entire nation”. (Or they anticipate facing this choice and defect, which is what the superpowers will do.)
Do “worldwide AI pause” and “game-winning defector advantage” change your estimate from 10-40 percent?
My other comment: even if you focus on just the USA and just the interest groups you mentioned, what about money? Annual AI investment for 2023 is at least 100 billion USD, and maybe over 200 billion if you simply project from Nvidia’s revenue increases. Just 1 percent of that, a billion dollars or more, is a lot of lobbying. (Source: Stanford estimates 2022 investment at 91 billion. There has been a step-function increase since the late-2022 release of good LLMs; I am not sure of the full 2023 totals, but Nvidia’s quarterly revenue has doubled.)
Where can the pausers scrape together a few billion? In US politics, a side needs financing to get a voice.
For example, animal rights is not supported in a meaningful way by any mainstream party here...
Drone swarms do take time to build. Also, nuclear war is “only” going to kill a large percentage of your country’s citizens; if you’re sufficiently convinced that any monkey getting the banana means Doom, then even nuclear war is worth it.
I think getting the great powers on-side is plausible; the Western and Chinese alliance systems already cover the majority. Do I think a full stop can be implemented without some kind of war? Probably not. But not necessarily WWIII (though IMO that would still be worth it).
> Let’s see: 10 parties. If each independently decides on AI pausing with a 20 percent chance, the probability that all of them do is 0.2^10, a number that’s basically 0.
I don’t think you should treat these probabilities as independent. I think the intuition that a global pause is plausible comes from these states’ interest in a moratorium being highly correlated, because the reasons for wanting a pause are based on facts about the world that everyone has access to (e.g. AI is difficult to control) and motivations that are fairly general (e.g. powerful, difficult-to-control influences in the world are bad from most people’s perspective, and the other things that Matthew mentioned).
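To make the two positions concrete, here is a minimal sketch of a common-cause model (the structure and the rho parameter are my own illustration, not something either commenter computed): if a single shared event, say a publicly visible AI disaster, can drive every party’s decision at once, the joint probability is dominated by that one event rather than by the tenth power.

```python
# Toy common-cause model: with probability rho, a single shared event
# makes every party decide the same way, so all pause together with
# probability p; otherwise the ten decisions are independent.
# All parameter values are made up for illustration.
p = 0.2    # each party's standalone chance of choosing a pause
n = 10     # number of parties that must all agree
rho = 0.5  # weight on the common-cause scenario

p_independent = p ** n
p_correlated = rho * p + (1 - rho) * p ** n

print(f"independent: {p_independent:.2e}")  # 1.02e-07
print(f"correlated:  {p_correlated:.2e}")   # 1.00e-01, dominated by rho * p
```

Under even moderate correlation, the single shared term swamps the independent one, which is exactly what the exchange below is arguing about.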
See the next sentence I wrote. They aren’t fully independent, but the kinds of groups Matthew mentions (people concerned about their jobs, etc.) are not the same percentage in every country. They have to be the majority in all 10.
And in some of those countries, the population effectively gets no vote. So the ‘central committee’ or whatever governing structure they use also has to find its own reason not to build AGI, and it has to be a different reason, because such a committee faces different incentives.
And then there’s the defector’s prize. There’s no real benefit to racing for AGI if you’re far behind: you won’t win the race, and you should just license the tech when it’s out. Focus on your core competencies so you have the money to do that.
Note also that we can simply look at which way the wind is blowing today. What are China, Israel, Russia, and the other parties saying? They are saying they will build AGI at the earliest opportunity.
What is the probability that, without direct evidence of the danger of AGI (gained by building one), they will all change their minds?
Matthew is badly miscalibrated. The chance that they all change their minds, each for their own different reasons, is near 0. There is no example in human history where this has ever happened.
Humans are tool users, and you’re expecting them to leave a powerful tool fallow after 70 years of exponential progress spent developing it. That’s not going to happen. (If you knew it would always blow up, like an antimatter bomb, that would be a different situation, but current AI systems approaching human level don’t have that property, and larger multimodal ones likely won’t either.)