this tendency leads to analysis that assumes more coordination among governments, companies, and individuals in other countries than is warranted. When people talk about “the US” taking some action… more likely to be aware of the nuance this ignores… less likely to consider such nuances when people talk about “China” doing something
This seems exactly right and is what I’m frustrated by. Though, going further than you give credit (or un-credit) for: I frequently come across writing or talk about “US success in AI”, “US leading in AI”, “China catching up to the US”, etc., which are all almost nonsense as far as I’m concerned. What do those statements even mean? In good faith I hope for someone to describe what these sorts of claims mean in a way which clicks for me, but I have come to expect that there probably isn’t one.
Do people actually think that Google+OpenAI+Anthropic (for sake of argument) are the US? Do they think the US government/military can/will appropriate those staff/artefacts/resources at some point? Are they referring to integration of contemporary ML/DS into the economy? The military? Or impacts on other indicators[1]? What do people mean by “China” here: CCP, Alibaba, Tencent, …? If people mean these things, they should say those things, or otherwise say what they do mean. Otherwise I think people motte-and-bailey themselves (and others) into some really strange understandings. There’s not some linear scoreboard which “US” and “China” have points on but people behave/talk like they actually think in those terms.
your claim that governments don’t influence AI development [via semiconductor progress] is too strong
Thanks, this would indeed be too strong :) but it’s not what I mean. (Also thank you for the example bullets below that, for me and for other readers.)
I don’t mean to imply they have no influence on AI development and deployment[2]. What I meant by ‘not currently meaningful players in AI development and deployment’ was that, to date, governments have had little to no say in the course or nature of AI development. Rather, they have been mostly passive or unwitting passengers, with recent interventions (to date) comprising coarse economy-level lever-pulls, like your examples of regulation on chip production and sales. Can you think of a better compression of this than what I wrote? ‘currently mainly passive except for coarse interventions at the economy-level’?
early demand for semiconductors was driven by the US government’s military and space program
The key difference between e.g. space-race or nuclear/ICBM etc. and AI is that in those cases, governments were appropriately thought of as somewhat-coherently instigating, steering and directing, and could be described as being key players with a real competition between them. With AI, none of those things are (currently) true. So ideally we would use different language to describe the different situations (especially because the misleading use of language is inflammatory).
I get exercised about this overall issue because on one model, this sort of failure of imagination and the confusion it gives rise to is exactly what leads to escalation and conflict, which I sense you agree on. We do not want sloppy foregone-conclusion thinking leading to WWIII with AI and nukes.
Ironically for a piece on bringing clarity through nuance, I evidently wasn’t clear enough about where I was drawing the boundaries in my initial post…
Do people actually think that Google+OpenAI+Anthropic (for sake of argument) are the US? Do they think the US government/military can/will appropriate those staff/artefacts/resources at some point?
I’m pretty sure what most (educated) people think is they are part of the US (in the sense that they are “US entities”, among other things), that they will pay taxes in the US, will hire more people in the US than China (at least relative to if they were Chinese entities), will create other economic and technological spillover effects in greater amount in the US than in China (similar to how the US’s early lead on the internet did), will enhance the US’s national glory and morale, will provide strategically valuable assets to the US and deny these assets to China (at least in a time of conflict), will more likely embody US culture and norms than Chinese culture and norms, and will be subject to US regulation much more than Chinese regulation.
Most people don’t expect these companies will be nationalized (though that does remain a possibility, and presumably more so if they were Chinese companies than US companies, due to the differing economic and political systems), but there are plenty of other ways that people expect the companies to advantage their host country[’s government, population, economy, etc].
Do people actually think that Google+OpenAI+Anthropic (for sake of argument) are the US? Do they think the US government/military can/will appropriate those staff/artefacts/resources at some point? Are they referring to integration of contemporary ML/DS into the economy? The military? Or impacts on other indicators
Yes.
In the end, all the answers to your questions are yes.
The critical thing to realize is that until basically EOY 2022, AI didn’t exist. It was narrow and expensive and essentially non-general—a cool party trick, but the cost to build a model for anything and get to useful performance levels was high. Self-driving cars were endlessly delayed; recommender systems worked, but their neural-network techniques for correlating user data with preferences were only a little better than older, cheaper methods; for most other purposes AI was just a tech demo.
You need to think in terms of “what does it mean that AI works now, and how are decisions going to be different?”. With that said, governments won’t nationalize AI companies until they develop much stronger models.
Imagine the Manhattan project never happened, but GE and a few other US companies kept tinkering with fission. Eventually they would have built critical devices, and EOY 2022 is the “Chicago pile” moment—there’s a nuclear reactor, and we can plot out the yield for a nuke, but the devices have not yet been built.
Around the time GE is building nuclear bombs for military demos, at some point the US government has to nationalize it all. It’s too dangerous.
As for the rest of your post, I don’t see how “not framing a competition as a competition” is very useful. It’s not the media. We live on a finite planet with finite resources, and the only reason there are different countries is that the most powerful winners have not found a big enough club to conquer everyone else.
You know nations used to be way smaller, right? Why do you think they are so large now? At each stage of history, someone found a way to depose all the other feudal kings and lords.
AGI may be that club, and whoever builds it fastest and bestest may in fact just be able to crush everyone. Even if they can’t, each superpower has to assume that they can.
Interesting. I’d love to know if you think the crux schema I outlined is indeed important? I mean this:
How quickly/totally/coherently could US gov/CCP capture AI talent/artefacts/compute within its jurisdiction and redirect them toward excludable destructive ends? Under what circumstances would they want/be able to do that?
Correct me at any point if I misinterpret: I read that, on the basis of answers to something a bit like these, you think an international competition/race is all but inevitable? Presumably that registers as terrifically dangerous for you such that mitigating it would be a high priority if tractable? But you deny the tractability of mitigating it so consider concerns like mine regarding clear use of language to be… wasteful? Distracting? Counterproductive?
Your alternative history with fission is helpful and thought-provoking—and plausible. I don’t think it’s the inevitable way things would play out, though. For example, if concerns about atmospheric ignition, nuclear winter, and other risks were raised in a climate of less international distrust it’s at least plausible to me that coordination to avoid those risks could have been achieved. (Of course, with the benefit of hindsight we know that atmospheric ignition was not a risk, but we still don’t know about nuclear winter.)
Are we in a climate of less international distrust than they were? I think so, at least a little. Careless talk can inflame escalation, so this variable really matters, not only as an input to our actions but as an output of them.
You know nations used to be way smaller, right. Why do you think they are so large now?
I have a passable grasp of world history and prehistory (though I will probably always lament my lack of knowledge). Do you remember the international trading companies in the age of sail? The age of European empires? They’re gone now. Possible counterpoints to part of the worldview you’re espousing.
Correct me at any point if I misinterpret: I read that, on the basis of answers to something a bit like these, you think an international competition/race is all but inevitable? Presumably that registers as terrifically dangerous for you such that mitigating it would be a high priority if tractable? But you deny the tractability of mitigating it so consider concerns like mine regarding clear use of language to be… wasteful? Distracting? Counterproductive?
If ground truth reality is that you’re in a race to the nuke, dressing up reality in language that denies this is counterproductive. Technically such language could be used, but it would be a form of misinformation. If the nuclear arms race had been public and two-party, both sides might have chosen a propaganda strategy of pretending they weren’t actually working round the clock.
Personally I like clear and honest information, but that’s just me. I think MIRI’s take is closer to ground truth.
I have a passable grasp of world history and prehistory (though I will probably always lament my lack of knowledge). Do you remember the international trading companies in the age of sail? The age of European empires? They’re gone now. Possible counterpoints to part of the worldview you’re espousing.
There’s a variation on the EMH here. If empire building were profitable, the most powerful empire would own most of the planet and would soon grab the rest, because it would have a runaway level of resources. Apparently colonialism was not profitable enough: the overhead of administering all these far-flung countries didn’t accrue enough revenue to the European powers to be worth continuing. Otherwise, they would have done so.
Moral outrage alone wouldn’t have caused every European power to give up; what would have happened is that a less outraged power would simply have seized the territories given up.
One way profitability can be reduced is asymmetric warfare. For example, the US occupation of Iraq was a huge financial loss and always would have been: the country was costing more in soldiers, equipment, and health benefits to occupy than its GDP.
Maybe it was the AK-47 or something (a powerful tool for asymmetric warfare), I don’t know.
Just in case we’re out of sync, let’s briefly refocus on some object-level details.
China has made several efforts to preserve their chip access, including smuggling, buying chips that are just under the legal limit of performance, and investing in their domestic chip industry.
Are you aware of the following?
the smuggling was done by… smugglers
the buying of chips under the limit was done by multiple suppliers in China
the selling of chips under the limit was done by Nvidia (and perhaps others)
the investment in China’s chip industry was done by the CCP
If not, please digest those nuances (and perhaps I need to make them clearer in my OP!) and consider why I object to the phrasing.
You said,
If ground truth reality is you’re in a race to the nuke, dressing up reality in language that denies this is counterproductive.
This is true only if you have sufficient justification to believe confidently in that particular ‘ground truth reality’, and if the cost of speaking with nuance outweighs the expected cost of inflaming tensions in worlds where you’re wrong.
To be clear, I have wide uncertainty on ‘ground truth’ here. From that POV, ‘[People and organisations in] China [have] made several efforts...’ is the ‘clear and honest’ version, while coarse and lossy speech like ‘China has made several efforts...’ is not. I further expect the cost of nuanced speech is low, while the cost of foregone-conclusion speech (if wrong) is high, which I admit is what gets me exercised about this particular lack of nuance and not so much about others (though also others).
What about you? (I note we’re discussing possible geopolitical futures, right? I don’t think humans can be justifiably very confident about questions like this. I object to the use of ‘ground truth’ here on that basis[1].)
I’m still interested in whether you think those questions I previously gestured at are cruxes, and whether my attempted ITT was about right. I don’t think there is a ‘MIRI’s take’ in this context.
Did you see my section in the OP about excludability of harms as follows?
Separately, a lack of reliable alignment techniques and performance guarantees makes AI-powered belligerent national interest plays look more like bioweapons than like nukes—i.e. minimally-excludable—and perhaps mutually-knowably so! This presently damps the incentive to go after them.
I wrote ‘perhaps mutually-knowably so’ anticipating this kind of ‘ooh AI big stick’ thing, though I remain uncertain. Do you think harm-excludability seems difficult for AGI? Do you think enough people currently/might agree that it’s not like a nuke and more like a bioweapon?
Do you think humanity is sort of doing middling OK on bio? (i.e. not foregone conclusion biowarfare/disasters?) What about climate? Nukes? Clearly we’re doing quite badly but I don’t think the course of the future is set in stone[1:1] for any of these.
Overall it appears that you’re very (I would say over-) confident in this picture, to the extent that you take issue with my asking for nuance (of the kind that takes claims from false-unless-contorted-with-caveats to basically true). Perhaps this is on the basis that what we perceive now (lots of actors of various sizes competing and cooperating on various axes, including access to compute) is actually a shadow of what’s unavoidably to come (all-out superpower strife in a race to AGI), and in the latter world the finer distinctions don’t matter?
I don’t care if you are a physical determinist, we’re finite, tiny computers in a messy world. There might be some ‘ground truth’ about what the future holds, but from our POV it’s stochastic.
To be clear, I have wide uncertainty on ‘ground truth’ here. From that POV, ‘[People and organisations in] China [have] made several efforts...’ is the ‘clear and honest’ version, while coarse and lossy speech like ‘China has made several efforts...’ is not. I further expect the cost of nuanced speech is low, while the cost of foregone-conclusion speech (if wrong) is high,
What makes it a foregone conclusion is that race dynamics are powerfully convergent. Actions that would cause a party to definitely lose a race generate feedback. Over time, multiple competing agents will choose winning strategies, and others will copy those, leading to strategy mirroring. Certain forms of strategy (like nationalizing all the AI labs) are also convergent and optimal. And even if a party fails to play optimally at first, it will observe that it is losing and be forced to adopt optimal play in order to lose less.
So my seeming overconfidence is because I am convinced the overall game will force all these disparate uncoordinated individual events to converge on what it must.
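For what it’s worth, the strategy-mirroring dynamic claimed here can be illustrated with a deliberately crude toy model, in which every agent copies whichever strategy earned the highest average payoff in the previous round. The payoff numbers are pure illustration (they assume racing pairwise dominates restraint); this is not a model of real geopolitics:

```python
# Toy imitation dynamics: each round, every agent adopts whichever
# strategy earned the highest average payoff in the previous round.
# Illustrative payoffs in which racing pairwise dominates restraint:
PAYOFF = {
    ("race", "race"): 1,
    ("race", "restrain"): 3,
    ("restrain", "race"): 0,
    ("restrain", "restrain"): 2,
}

def step(strategies):
    """Return the next round's strategies under copy-the-winner dynamics."""
    avg = {}
    for s in set(strategies):
        # Average payoff of strategy s against the whole population.
        avg[s] = sum(PAYOFF[(s, t)] for t in strategies) / len(strategies)
    best = max(avg, key=avg.get)
    return [best] * len(strategies)

pop = ["restrain"] * 4 + ["race"]
for _ in range(3):
    pop = step(pop)
print(pop)  # every agent ends up racing
```

Under these assumed payoffs, a single defector is enough to tip a mostly-restrained population into everyone mirroring the racing strategy, which is the convergence being asserted.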
I wrote ‘perhaps mutually-knowably so’ anticipating this kind of ‘ooh AI big stick’ thing, though I remain uncertain. Do you think harm-excludability seems difficult for AGI? Do you think enough people currently/might agree that it’s not like a nuke and more like a bioweapon?
I expect there are several views, but let’s look at the bioweapon argument for a second.
In what computers can the “escaped” AI exist? There is no biosphere of computers. You need at least (1600 GB × 2 / 80 × 2) = 80 H100s to host a GPT-4 instance. The real number is rumored to be about 128. And that’s a subhuman AGI at best, without vision and other critical features.
How many cards will a dangerous ASI need to exist? I won’t go into the derivation here, but I think the number is >10,000, and they must be in a cluster with high-bandwidth interconnects.
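The arithmetic quoted above can be sketched as a quick back-of-envelope calculation. The parameter count, bytes per weight, and 2× overhead factor are the rumored/assumed figures from the comment, not confirmed specs:

```python
# Back-of-envelope GPU count for hosting a large model, using the
# figures quoted above: ~1.6T parameters (rumored) at 2 bytes each,
# with 80 GB of memory per H100. All inputs are assumptions.

params = 1.6e12          # rumored parameter count
bytes_per_param = 2      # fp16/bf16 weights
gpu_mem_bytes = 80e9     # H100 memory capacity

weight_bytes = params * bytes_per_param      # 3.2 TB of weights
min_gpus = weight_bytes / gpu_mem_bytes      # 40 cards for weights alone
with_overhead = min_gpus * 2                 # x2 for activations/KV cache
print(min_gpus, with_overhead)               # 40 cards bare, 80 with overhead
```

This reproduces the “80 H100s” floor; the rumored real figure of ~128 would then reflect additional serving overhead beyond this crude ×2.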
As for the second part, “how are we going to use it as a stick?”: simple. If you are unconcerned with the AI “breaking out”, you train and try a lot of techniques, and only use “in production” (industrial automation, killer robots, etc.) the most powerful model you have that is measurably reliable and efficient and doesn’t engage in unwanted behavior.
None of the bad AIs ever escape the lab, there’s nowhere for them to go.
Note that it might be a different story in 2049; that is when Moore’s law would put a single GPU at the power of 10,000 of them. It likely can’t continue that long (exponentials stop), but maybe with computers built from computronium printed off a nanoforge.
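As a sanity check on the 2049 figure: a 10,000× gain at a classic two-year doubling period (an assumption; real hardware scaling has already slowed) works out to roughly 27 years from 2022:

```python
import math

# How long until Moore's-law doublings yield a 10,000x gain?
# Assumes a 2-year doubling period starting from end of 2022.
target_gain = 10_000
doubling_years = 2

doublings = math.log2(target_gain)    # ~13.3 doublings needed
years = doublings * doubling_years    # ~26.6 years
print(2022 + years)                   # lands around 2049
```

So the 2049 date follows directly from the stated assumptions, and moves later if the doubling period is longer than two years.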
But we don’t have any of that, and won’t anytime in the plannable future. We will have AGI systems good enough to do basic tasks, including robotic tasks.
Thanks for this thoughtful response!

[1] What indicators? Education, unemployment, privacy, health, productivity, democracy, inequality, …?