A tax, not a ban
In which JP tries to learn about AI governance by writing up a take. Take tl;dr: Overhang concerns and the desire to avoid catch-up effects seem super real. But that need not imply speeding ahead towards our doom. Why not try to slow everything down uniformly? Please tell me why I'm wrong.
After the FLI letter, the debate in EA has coalesced into "6 month pause" vs "shut it all down" vs some complicated shrug. I'm broadly sympathetic to concerns (1, 2) that a moratorium might make the hardware overhang worse, or make competitive dynamics worse.
To put it provocatively (at least around these parts), it seems like there's something to the OpenAI "Planning for AGI and beyond" justification for their behavior. I think that sudden discontinuous jumps are bad.
Ok, but it remains true that OpenAI has burned a whole bunch of timelines, and that's bad. It seems to me that speeding up algorithmic improvements is incredibly dubious. And the large economic incentives they've created for AI chips seem really bad.[1]
So, how do we balance these things? Proposal: "we" reduce the economic incentive to speed ahead with AI. If we're successful, we could slow down both hardware progress and algorithmic progress, for OpenAI and all its competitors.
How would this work? This could be the weak point of my analysis, but you could put a tax on "AI products". This would be terrible and distortionary, but it would probably be effective at hitting the most centrally AGI-focused companies. You could also put a tax on GPUs.
Note: one way to think about this is as a Pigouvian tax.
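To spell that out, here is a minimal textbook sketch of the Pigouvian logic (my notation, not from the post): let q be AI-product output, p its price, c(q) the producer's private cost, and D(q) the external risk cost borne by everyone else.

```latex
\begin{align*}
&\text{private optimum: } && \max_q \; pq - c(q) && \Rightarrow \; p = c'(q)\\
&\text{social optimum: } && \max_q \; pq - c(q) - D(q) && \Rightarrow \; p = c'(q) + D'(q)\\
&\text{with tax } t = D'(q^{*})\text{: } && \max_q \; pq - c(q) - tq && \Rightarrow \; p = c'(q) + t
\end{align*}
```

Setting the per-unit tax equal to the marginal external damage makes the producer's problem coincide with society's, which is the sense in which a tax (unlike a ban) lets beneficial uses continue while pricing in the risk.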
Counterarguments
China. Yep. This is a counterargument against a moratorium as well. I think I'm willing to bite this bullet.
Something like: we really need the cooperation of chip companies, and if we tax their chips they'll sell to China immediately. This is the major reason why I'm more optimistic about taxing "AI products" than taxing GPUs, even though GPUs would be the easier thing to tax.
"IDK, JP, a moratorium would be easier to coordinate around, is more likely to actually happen, and isn't that bad. We should put our wood behind that arrow." Seems plausible, but I really don't think a moratorium is a long-term solution, and I tentatively think the tax is.
***
Again, I will be very appreciative of different takes, etc., here.
See also: Notes on Potential Future AI Tax Policy. This post was written before reading that one. Sadly, that post spends too much time in the weeds arguing against a specific implementation which Zvi apparently really doesn't like, and not enough time discussing the overall dynamics, IMO.
See also here (which I haven't entirely reread) for a good related discussion.
I agree increasing the cost of compute or decreasing the benefits of compute would slow dangerous AI.
I claim taxing AI products isn't great because I think the labs that might make world-ending models aren't super sensitive to revenue; I wouldn't expect a tax to change their behavior much. (Epistemic status: weak sense; stating an empirical crux.)
Relatedly, taxing AI products might differentially slow safe cool stuff like robots and autonomous vehicles and image generation. Ideally we'd only target LLMs or something.
Clarification: I think "hardware overhang" in this context means "the amount by which labs could quickly increase training compute (because they were artificially restrained for a while by regulation but can quickly catch up to where they would have been)"? There is no standard definition, and the AI Impacts definition you linked to seems inappropriate here (and almost always useless; it was a useful concept before the "training is expensive; running is cheap" era).
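To make that reading concrete, here is a toy model (my own illustration, with made-up numbers): if the training compute a lab could muster doubles every d months, then a pause of T months leaves a catch-up factor of roughly

```latex
\text{overhang factor} \approx 2^{T/d}
```

e.g., with a doubling time of d = 12 months, a T = 6 month pause would let a lab jump its training compute by about 2^{0.5} ≈ 1.4x all at once on resumption, rather than getting there gradually.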