Let’s see: 10 parties. If each of them independently decides to pause AI with a 20 percent chance, the probability that all of them do is 0.2^10 ≈ 1.0 × 10^-7, roughly one in ten million.
I don’t think you should treat these probabilities as independent. The intuition that a global pause is plausible comes from these states’ interests in a moratorium being highly correlated: the reasons for wanting a pause rest on facts about the world that everyone has access to (e.g. AI is difficult to control) and on motivations that are fairly general (e.g. powerful, difficult-to-control influences in the world are bad from most people’s perspective, plus the other things that Matthew mentioned).
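To make that concrete, here is a minimal sketch; the 0.2, 0.9, and 0.05 figures below are purely illustrative assumptions, not numbers either of us has claimed. Under independence the joint probability of all ten pausing is astronomically small, but with a shared driver (such as common evidence that AI is hard to control) it can land in the percent range.

```python
# Illustrative sketch: P(all 10 parties pause) under independence
# versus under a simple common-cause model. Numbers are assumptions.

import random

N_PARTIES = 10
N_TRIALS = 1_000_000

# Independent model: each party pauses with probability 0.2 on its own.
p_independent = 0.2 ** N_PARTIES  # = 1.024e-07

# Common-cause model: with probability 0.2 the shared evidence is
# compelling and every party pauses with probability 0.9; otherwise
# each pauses with probability 0.05 for idiosyncratic reasons.
def all_pause():
    compelling = random.random() < 0.2
    p_each = 0.9 if compelling else 0.05
    return all(random.random() < p_each for _ in range(N_PARTIES))

p_correlated = sum(all_pause() for _ in range(N_TRIALS)) / N_TRIALS

print(f"independent: {p_independent:.2e}")  # ~1e-07
print(f"correlated:  {p_correlated:.2e}")   # ~7e-02, since 0.2 * 0.9**10 ≈ 0.07
```

The specific numbers don’t matter; the point is that a shared driver can move the joint probability by several orders of magnitude relative to the independence estimate.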
See the next sentence I wrote. They aren’t independent, but the kinds of groups Matthew mentions (people concerned about their jobs, etc.) are not the same share of the population in every country, and they have to form a majority in all 10.
And then in some of those countries the population effectively gets no vote. So the ‘central committee’ or whatever government structure they use also has to find a reason not to build AGI, and it has to be a different reason, because such a committee faces different incentives.
And then there’s the defector’s prize. There’s no real benefit to racing for AGI if you’re far behind: you won’t win the race, and you should just license the tech when it’s out. Focus on your core competencies so you have the money to do that.
Note also that we can simply look at where the wind is blowing today. What are China, Israel, Russia, and other parties saying? They are saying they are going to build an AGI at the earliest opportunity.
What is the probability that, without direct evidence of the danger of AGI (from actually building one), they will change their minds?
Matthew is badly miscalibrated. The chance that they all change their minds, each for its own different reasons, is near zero. There is no example in human history of this ever happening.
Humans are tool users, and you’re expecting them to leave a powerful tool fallow after spending 70 years of exponential progress developing it. That’s not going to happen. (If you thought it would always blow up, like an antimatter bomb, that would be a different situation, but current AI systems that are approaching human level don’t have that property, and larger multimodal ones likely won’t either.)