The near-zero estimate is because it's a worldwide AI pause: it requires the EU AND the UK AND China AND Russia AND Israel AND Saudi Arabia AND the USA AND Canada AND Japan AND Taiwan.
That names all the parties capable of competing even in the face of sanctions. Russia maybe doesn't belong on the list, but if the AI pause had no effective controls (someone sells inference and training accelerators, and Russia buys them), then there is no pause.
Let's see, 10 parties. If each of them independently decides on an AI pause with 20 percent probability, the chance they all do is 0.2^10, a number that's basically 0.
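The independence arithmetic above can be checked directly (a sketch; the 20 percent figure and the count of 10 parties are the comment's assumptions):

```python
# Probability that all 10 parties independently choose to pause,
# each with an assumed 20% chance per party.
p_pause = 0.2
n_parties = 10
p_all_pause = p_pause ** n_parties
print(p_all_pause)  # ≈ 1.024e-07, i.e. roughly one in ten million
```

So under strict independence, the joint probability is about one chance in ten million, which is the "basically 0" claimed.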
Another issue: you might think "peer pressure" would couple the decisions together. Except... think about the gain from defecting. It rises with the number of AI pausers. If you are the only defector, you take the planet and have a high chance of winning.
The only thing an AI pauser can do if they find out too late is threaten to nuke; their conventional military would be slaughtered by drone swarms. But the parties I listed all either have nuclear arsenals now or could build one in 1-2 years (Saudi Arabia can't; the others can). And that's without the help of AGI to mass-produce the missiles.
So in this scenario the pauser parties face a choice between "surrender to the new government and hope it's not that bad" and "death of the entire nation". (Or they anticipate facing this choice and defect, which is what superpowers will do.)
Does "worldwide AI pause" plus "game-winning defector advantage" change your estimate from 10-40 percent?
My other comment: even if you focus on just the USA and just the interest groups you mentioned, what about money? Annual AI investment for 2023 is at least $100 billion, and may be over $200 billion if you simply project from Nvidia's revenue increases. Just 1 percent of that money is a lot of lobbying. (Source: Stanford estimates 2022 investment at $91 billion. There has been a step-function increase since the release of good LLMs at the end of 2022; I'm not sure of the full 2023 totals, but Nvidia's quarterly revenue has doubled.)
Where can the pausers scrape together a few billion? US politics is somewhat financing-dependent for a side to get a voice.
For example, the animal rights cause is not supported in a meaningful way by any mainstream party here...
Drone swarms take time to build. Also, nuclear war is "only" going to kill a large percentage of your country's citizens; if you're sufficiently convinced that any monkey getting the banana means Doom, then even nuclear war is worth it.
I think getting the great powers on-side is plausible; the Western and Chinese alliance systems already cover the majority. Do I think a full stop can be implemented without some kind of war? Probably not. But not necessarily WWIII (though IMO that would still be worth it).
Let's see, 10 parties. If each of them independently decides on an AI pause with 20 percent probability, the chance they all do is 0.2^10, a number that's basically 0.
I don't think you should treat these probabilities as independent. The intuition that a global pause is plausible comes from these states' interest in a moratorium being highly correlated: the reasons for wanting a pause are based on facts about the world that everyone has access to (e.g. AI is difficult to control) and on motivations that are fairly general (e.g. powerful, difficult-to-control influences in the world are bad from most people's perspective, plus the other things that Matthew mentioned).
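The correlation point can be made concrete with a toy two-regime model (illustrative numbers of my choosing, not from the thread): suppose that with some probability the evidence of AI danger becomes common knowledge, and every state is then likely to pause; otherwise each is unlikely to. The per-state marginal probability can still be 20 percent while the joint probability is nothing like 0.2^10:

```python
# Toy model: decisions correlated through a shared "danger is obvious" regime.
q = 0.15        # assumed chance the shared-evidence regime occurs
p_hi = 0.9      # per-state pause probability in that regime
p_lo = 0.0765   # chosen so the marginal per-state probability is ~20%

marginal = q * p_hi + (1 - q) * p_lo          # per-state pause probability
joint_correlated = q * p_hi**10 + (1 - q) * p_lo**10
joint_independent = marginal**10              # the 0.2^10 calculation

print(marginal)           # ≈ 0.20
print(joint_correlated)   # ≈ 0.052 (about 1 in 19)
print(joint_independent)  # ≈ 1.0e-07
```

Under this correlation structure the joint probability of all 10 pausing is roughly half a million times larger than the independent calculation suggests, even though each state's marginal probability is unchanged.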
See the next sentence I wrote. They aren't independent, but the kinds of groups Matthew mentions (people concerned about their jobs, etc.) are not the same percentage of the population in every country. They have to be the majority in all 10.
And in some of those countries the population effectively gets no vote. So the 'central committee' or whatever governing structure they use also has to decide on a reason not to build AGI, and it has to be a different reason, because such a committee faces different incentives.
And then there's the defector's prize. There's no real benefit to racing for AGI if you're far behind: you won't win the race, and you should just license the tech when it's out. Focus on your competencies so you have the money to do that.
Note also that we can simply look at which way the wind is blowing today. What are China and Israel and Russia and the other parties saying? They are saying they will build an AGI at the earliest opportunity.
What is the probability that, without direct evidence of the danger of AGI (from actually building one), they will all change their minds?
Matthew is badly miscalibrated. The chance is near 0 that they all change their minds, each for a different reason. There is no example in human history of this ever happening.
Humans are tool users, and you're expecting them to leave a powerful tool fallow after 70 years of exponential progress to develop it. That's not going to happen. (If you thought it would always blow up, like an antimatter bomb, that would be a different situation, but current AI systems approaching human level don't have that property, and larger multimodal ones likely won't either.)