Just in case we’re out of sync, let’s briefly refocus on some object-level details.
China has made several efforts to preserve their chip access, including smuggling, buying chips that are just under the legal limit of performance, and investing in their domestic chip industry.
Are you aware of the following?
the smuggling was done by… smugglers
the buying of chips under the limit was done by multiple suppliers in China
the selling of chips under the limit was done by Nvidia (and perhaps others)
the investment in China’s chip industry was done by the CCP
If not, please digest those nuances (and perhaps I need to make them clearer in my OP!) and consider why I object to the phrasing.
You said,
If ground truth reality is you’re in a race to the nuke, dressing up reality in language that denies this is counterproductive.
This is true only if you have sufficient justification to believe confidently in that particular ‘ground truth reality’, and if the cost of speaking with nuance outweighs the expected cost of inflaming tensions in worlds where you’re wrong.
To be clear, I have wide uncertainty on ‘ground truth’ here. From that POV, ‘[People and organisations in] China [have] made several efforts...’ is the ‘clear and honest’ version, while coarse and lossy speech like ‘China has made several efforts...’ is not. I further expect the cost of nuanced speech is low, while the cost of foregone-conclusion speech (if wrong) is high, which I admit is what gets me exercised about this particular lack of nuance and not so much about others (though also others).
What about you? (I note we’re discussing possible geopolitical futures, right? I don’t think humans can be justifiably very confident about questions like this. I object to the use of ‘ground truth’ here on that basis[1].)
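To make that cost comparison concrete, here’s a toy expected-cost sketch (the numbers are entirely made up for illustration, not claims about the actual probabilities or stakes):

```python
# Toy expected-cost comparison (all numbers invented purely for illustration).
p_wrong = 0.3                  # assumed chance the confident 'race' picture is wrong
cost_nuanced_speech = 1        # small, fixed: a few extra words and caveats
cost_inflamed_if_wrong = 100   # assumed large: needlessly inflamed tensions

expected_cost_nuance = cost_nuanced_speech
expected_cost_foregone_conclusion = p_wrong * cost_inflamed_if_wrong

print(expected_cost_nuance, expected_cost_foregone_conclusion)  # 1 vs 30.0
```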
I’m still interested in whether you think those questions I previously gestured at are cruxes, and whether my attempted ITT was about right. I don’t think there is a ‘MIRI’s take’ in this context.
Did you see my section in the OP about excludability of harms as follows?
Separately, a lack of reliable alignment techniques and performance guarantees makes AI-powered belligerent national interest plays look more like bioweapons than like nukes—i.e. minimally-excludable—and perhaps mutually-knowably so! This presently damps the incentive to go after them.
I wrote ‘perhaps mutually-knowably so’ anticipating this kind of ‘ooh AI big stick’ thing, though I remain uncertain. Do you think harm-excludability seems difficult for AGI? Do you think enough people currently/might agree that it’s not like a nuke and more like a bioweapon?
Do you think humanity is sort of doing middling OK on bio? (i.e. not foregone conclusion biowarfare/disasters?) What about climate? Nukes? Clearly we’re doing quite badly but I don’t think the course of the future is set in stone[1:1] for any of these.
Overall it appears that you’re very (I would say over) confident in this picture. So much so that you take issue with my asking for nuance (of the kind that takes claims from ‘false unless contorted with caveats’ to ‘basically true’). Perhaps on the basis that what we perceive now (lots of actors of various sizes competing and cooperating on various axes, including access to compute) is actually a shadow of what’s unavoidably to come (all-out superpower strife in a race to AGI), and that in the latter world the finer distinctions don’t matter?
I don’t care if you are a physical determinist; we’re finite, tiny computers in a messy world. There might be some ‘ground truth’ about what the future holds, but from our POV it’s stochastic.
To be clear, I have wide uncertainty on ‘ground truth’ here. From that POV, ‘[People and organisations in] China [have] made several efforts...’ is the ‘clear and honest’ version, while coarse and lossy speech like ‘China has made several efforts...’ is not. I further expect the cost of nuanced speech is low, while the cost of foregone-conclusion speech (if wrong) is high,
What makes it a foregone conclusion is that race dynamics are powerfully convergent. Actions that would cause a party to definitely lose the race produce feedback. Over time, multiple competing agents will choose winning strategies, and others will copy those, leading to strategy mirroring. Certain forms of strategy (like nationalizing all the AI labs) are also convergent and optimal. And even if a party fails to play optimally at first, it will observe that it is losing and be forced to switch to optimal play in order to lose less.
So my seeming overconfidence is because I am convinced the overall game will force all these disparate, uncoordinated individual events to converge on the same outcome.
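To make the convergence claim concrete, here’s a toy imitation dynamic (purely illustrative; the strategies and payoffs are invented, not a model of real states):

```python
from itertools import cycle

# Toy imitation dynamic: each round, parties that are behind copy the current
# leader's strategy. Strategy payoffs are invented for illustration only.
STRATEGIES = {"restrain": 1.0, "race": 2.0, "nationalize_and_race": 3.0}

# Start six parties off with a mix of strategies.
agents = dict(zip((f"party_{i}" for i in range(6)), cycle(STRATEGIES)))

for round_number in range(5):
    scores = {name: STRATEGIES[strategy] for name, strategy in agents.items()}
    leader = max(scores, key=scores.get)
    for name in agents:
        if scores[name] < scores[leader]:
            agents[name] = agents[leader]   # laggards mirror the winning strategy
    print(round_number, set(agents.values()))
# Within a round or two every party is playing the same, most aggressive strategy.
```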
I wrote ‘perhaps mutually-knowably so’ anticipating this kind of ‘ooh AI big stick’ thing, though I remain uncertain. Do you think harm-excludability seems difficult for AGI? Do you think enough people currently/might agree that it’s not like a nuke and more like a bioweapon?
I expect there are several views, but let’s look at the bioweapon argument for a second.
In what computers can the “escaped” AI exist? There is no biosphere of computers. You need at least (1600 GB × 2 / 80) × 2 = 80 H100s to host a GPT-4 instance. The real number is rumored to be about 128. And that’s at best a subhuman AGI, without vision and other critical features.
How many cards will a dangerous ASI need to exist? I won’t go into the derivation here, but I think the number is > 10,000, and they must be in a cluster with high-bandwidth interconnects.
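For concreteness, the back-of-envelope arithmetic behind the 80-card figure; the 1600 GB weight footprint and the two ×2 factors are the rumored/assumed numbers above (my reading of them as memory overhead and serving margin is a guess), not confirmed GPT-4 specs:

```python
# Back-of-the-envelope GPU count (illustrative; all inputs are the assumed
# numbers from the text above, not confirmed specifications).
weights_gb = 1600        # rumored total weight footprint of a GPT-4-class model
overhead_factor = 2      # assumed x2 for activations / KV cache / working memory
h100_memory_gb = 80      # HBM capacity of one H100
margin_factor = 2        # assumed x2 serving / parallelism margin

min_h100s = (weights_gb * overhead_factor / h100_memory_gb) * margin_factor
print(min_h100s)   # 80.0 -- versus a rumored ~128 in practice
```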
As for the second part, “how are we going to use it as a stick”: simple. If you are unconcerned with the AI “breaking out”, you train and try a lot of techniques, and only use “in production” (industrial automation, killer robots, etc.) the most powerful model you have that is measurably reliable and efficient and doesn’t engage in unwanted behavior.
None of the bad AIs ever escape the lab, there’s nowhere for them to go.
Note that it might be a different story around 2049, which is roughly when Moore’s law would put a single GPU at the power of 10,000 of today’s. It likely can’t continue that long (exponentials stop), though perhaps it could with computers built from computronium printed off a nanoforge.
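For reference, the arithmetic behind that 2049 figure (my own check, assuming a classic ~2-year Moore’s-law doubling starting from roughly today):

```python
import math

# How long until one GPU matches 10,000 of today's, at one doubling every ~2 years?
target_factor = 10_000
doublings_needed = math.log2(target_factor)             # ~13.3 doublings
years_needed = doublings_needed * 2                     # ~27 years
print(round(doublings_needed, 1), round(years_needed))  # 13.3, 27 -> roughly 2050
```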
But we don’t have any of that, and won’t anytime in the plannable future. We will have AGI systems good enough to do basic tasks, including robotic tasks.