I suspect the primary reasons you want to break up DeepMind from Google are to:
Increase its autonomy, reducing pressure from Google to race
Reduce DeepMind’s access to capital and compute, reducing its competitiveness
Perhaps that goes without saying, but I think it’s worth explicitly mentioning. In a world without AI risk, I don’t believe you would be citing various consumer harms to argue for a break up.
The traditional argument for breaking up companies and preventing mergers is to reduce the company’s market power, increasing consumer surplus. In this case, the implicit reason for breaking up DeepMind is to decrease its competitiveness, thus reducing consumer surplus.
I think it’s perfectly fine to argue for this, I just really want us to be explicit about it.
Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn’t read this post as x-risk motivated (though I admit I was confused about its primary motivation).
I read it as aiming to reduce AI risk by increasing the cost of scaling.
I also don’t see how breaking DeepMind off from Google would increase competitive dynamics. Google, Microsoft, Amazon and other Big Tech partners are likely to push their subsidiaries to race even faster, since they are likely to have much less conscientiousness about AI risk than the companies building AI. Coordination between DeepMind and e.g. OpenAI seems much easier than coordination between Google and Microsoft.
Less than a year ago DeepMind and Google Brain were two separate organizations (both making cutting-edge contributions to AI development). My guess is that if you broke DeepMind off from Google, you would pretty quickly get competition between DeepMind and Google Brain again (and more broadly make slowing things down a more multilateral problem).
But more concretely, antitrust action makes all kinds of coordination harder. After an antitrust action that destroyed billions of dollars in economic value, the ability to get people in the same room and even consider coordinating goes down a lot, since coordinating itself might invite further antitrust action.
AI labs tend to partner with Big Tech for money, data, compute, scale etc. (e.g. Google/DeepMind, Microsoft/OpenAI, and Amazon/Anthropic). Presumably to compete better? If they’re already competing hard now, then it seems unlikely that they’ll coordinate much on slowing down in the future.
Also, it seems like a function of timelines: antitrust advocates argue that breaking up firms or preventing mergers slows an industry down in the short run but speeds it up in the long run by increasing competition. But if competition is usually already healthy, as libertarians often argue, then antitrust interventions might slow industries down in the long run too.
I also think that it’s far from given that the option which would minimise consumer harm from monopoly would also minimise pressure to race.
An AI research institute spun off by the regulator, under pressure to generate business models to stay viable, is plausibly a lot more inclined to ‘race’ than an AI research institute swimming in ad money, which can earn its keep by incrementally improving search, ads and phone UX, and generate good PR with its more abstract research along the way. Monopolies are often complacent about exploiting their research findings, and Google’s corporate culture has historically not been particularly compatible with launching the sort of military or enterprise tooling that represents the most obviously risky use of ‘AI’.
There are of course arguments the other way (Google has a lot more money and data than putative spinouts), but people need to predict what a divested DeepMind would actually do before concluding that breaking up Google is a safety win.
I only said we should look into this more and review the pros and cons from different angles (e.g. not only consumer harms). As you say, the standard argument is that breaking up monopolists like Google increases consumer surplus, and this might also apply here.
But I’m not sure to what extent this increases or decreases AI risks and/or race dynamics, in the short and long run, whether within the West or between countries. This approach might be more elegant than Pausing AI, which definitely reduces consumer surplus.