To be clear, I have wide uncertainty about the ‘ground truth’ here. From that POV, ‘[People and organisations in] China [have] made several efforts...’ is the ‘clear and honest’ version, while coarse and lossy speech like ‘China has made several efforts...’ is not. I further expect the cost of nuanced speech to be low, while the cost of foregone-conclusion speech (if wrong) is high.
What makes it a foregone conclusion is that race dynamics are powerfully convergent. Actions that would cause a party to definitely lose a race produce feedback. Over time, multiple competing agents will choose winning strategies, and others will copy those, leading to strategy mirroring. Certain forms of strategy (like nationalizing all the AI labs) are also convergent and optimal. And even if a party initially fails to play optimally, it will observe that it is losing and be forced to adopt optimal play in order to lose less.
So my seeming overconfidence is because I am convinced the overall game will force all these disparate, uncoordinated individual events to converge on the outcome it demands.
I wrote ‘perhaps mutually-knowably so’ anticipating this kind of ‘ooh AI big stick’ thing, though I remain uncertain. Do you think harm-excludability seems difficult for AGI? Do you think enough people currently agree, or might come to agree, that it’s less like a nuke and more like a bioweapon?
I expect there are several views, but let’s look at the bioweapon argument for a second.
In what computers can the “escaped” AI exist? There is no biosphere of computers. You need at least (1.6T parameters × 2 bytes per weight / 80 GB per card) × 2 for overhead = 80 H100s to host a GPT-4 instance. The real number is rumored to be about 128. And that’s a subhuman AGI at best, without vision and other critical features.
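For concreteness, here is that arithmetic as a quick sketch; the parameter count, precision, and overhead factor are all rumors or assumptions, not confirmed figures:

```python
# Back-of-envelope VRAM arithmetic. All inputs are assumptions: the ~1.6T
# parameter count is a rumor, and fp16 weights plus a 2x runtime overhead
# for KV cache/activations are rough guesses.
params = 1.6e12          # rumored GPT-4 parameter count
bytes_per_param = 2      # fp16/bf16 weights
vram_per_card_gb = 80    # H100 memory per card
overhead = 2             # KV cache, activations, buffers (rough)

weights_gb = params * bytes_per_param / 1e9       # 3200 GB of raw weights
cards = weights_gb * overhead / vram_per_card_gb  # minimum card count
print(f"{weights_gb:.0f} GB of weights -> ~{cards:.0f} H100s minimum")
```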
How many cards would a dangerous ASI need in order to exist? I won’t go into the derivation here, but I think the number is > 10,000, and they must be in a cluster with high-bandwidth interconnects.
As for the second part, “how are we going to use it as a stick”: simple. If you are unconcerned with the AI “breaking out”, you train and try a lot of techniques, and only use “in production” (industrial automation, killer robots, etc.) the most powerful model you have that is measurably reliable and efficient and doesn’t engage in unwanted behavior.
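A minimal sketch of that gating logic, with hypothetical names and thresholds (nothing here is any lab’s actual pipeline):

```python
# Sketch of "train many, deploy only the proven one". The Candidate fields,
# threshold values, and model names below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    capability: float        # benchmark score, higher is better
    reliability: float       # fraction of eval tasks passed
    unwanted_behaviors: int  # red-team findings during testing

def select_for_production(candidates, min_reliability=0.999, max_findings=0):
    """Pick the most capable model that is measurably reliable and clean."""
    eligible = [c for c in candidates
                if c.reliability >= min_reliability
                and c.unwanted_behaviors <= max_findings]
    return max(eligible, key=lambda c: c.capability) if eligible else None

candidates = [
    Candidate("model-a", capability=0.91, reliability=0.9995, unwanted_behaviors=0),
    Candidate("model-b", capability=0.97, reliability=0.98, unwanted_behaviors=3),
]
# model-a wins: model-b is stronger but fails the reliability/behavior gates.
print(select_for_production(candidates))
```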
None of the bad AIs ever escape the lab; there’s nowhere for them to go.
Note that it might be a different story in 2049; that is roughly when Moore’s law would put a single GPU at the power of 10,000 of today’s. The exponential likely can’t continue that long, exponentials stop, but maybe by then we’d have computers built with computronium printed off a nanoforge.
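As a worked check on the 2049 figure (assuming a ~2-year doubling period and counting from about 2022):

```python
import math

# How many Moore's-law doublings until one GPU equals 10,000 of today's?
# Assumes a ~2-year doubling period, which the text itself doubts will hold.
factor = 10_000
doublings = math.log2(factor)   # ~13.3 doublings
years = 2 * doublings           # ~27 years
print(f"{doublings:.1f} doublings, ~{years:.0f} years -> around {2022 + round(years)}")
```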
But we don’t have any of that, and won’t anytime in the plannable future. We will have AGI systems good enough to do basic tasks, including robotic tasks.