A tax, not a ban
In which JP tries to learn about AI governance by writing up a take. Take tl;dr: Overhang concerns and the desire to avoid catch-up effects seem super real. But that need not imply speeding ahead towards our doom. Why not try to slow everything down uniformly? Please tell me why I'm wrong.
After the FLI letter, the debate in EA has coalesced into ā6 month pauseā vs āshut it all downā vs some complicated shrug. Iām broadly sympathetic to concerns (1, 2) that a moratorium might make the hardware overhang worse, or make competitive dynamics worse.
To put it provocatively (at least around these parts), it seems like thereās something to the OpenAI āPlanning for AGI and beyondā justification for their behavior. I think that sudden discontinuous jumps are bad.
Ok, but it remains true that OpenAI has burned a whole bunch of timelines, and that's bad. It seems to me that speeding up algorithmic improvements is incredibly dubious. And the large economic incentives they've created for AI chips seem really bad.[1]
So, how do we balance these things? Proposal: "we" reduce the economic incentive to speed ahead with AI. If we're successful, we could slow down hardware and algorithmic progress for OpenAI and all its competitors.
How would this work? This could be the weak point of my analysis, but you could put a tax on "AI products". This would be terrible and distortionary, but it would probably be effective at slashing revenue for the companies most centrally pursuing AGI. You could also put a tax on GPUs.
Note: a way to think about this is a Pigouvian tax.
Counterarguments
China. Yep. This is a counterargument against a moratorium as well. I think Iām willing to bite this bullet.
Something like: we really need the cooperation of chip companies. If we tax their chips they'll sell to China immediately. This is the major reason why I'm more optimistic about taxing "AI products" than GPUs, even though GPUs would be easier to tax.
IDK, JP, a moratorium would be easier to coordinate around, is more likely to actually happen, and isn't that bad. We should put our wood behind that arrow. Seems plausible, but I really don't think a moratorium is a long-term solution, and I tentatively think the tax thing is.
***
Again, will be very appreciative of different takes, etc here.
See also: Notes on Potential Future AI Tax Policy. This post was written before reading that one. Sadly that post spends too much time in the weeds arguing against a specific implementation which Zvi apparently really doesnāt like, and not enough time discussing overall dynamics, IMO.
See also here (which I haven't entirely reread) for a good related discussion.
I agree increasing the cost of compute or decreasing the benefits of compute would slow dangerous AI.
I claim taxing AI products isnāt great because I think the labs that might make world-ending models arenāt super sensitive to revenueāI wouldnāt expect a tax to change their behavior much. (Epistemic status: weak sense; stating an empirical crux.)
Relatedly, taxing AI products might differentially slow safe cool stuff like robots and autonomous vehicles and image generation. Ideally weād only target LLMs or something.
Clarification: I think "hardware overhang" in this context means "amount labs could quickly increase training compute (because they were artificially restrained for a while by regulation but can quickly catch up to where they would have been)"? There is no standard definition, and the AI Impacts definition you linked to seems inappropriate here (and almost always useless: it was a useful concept before the "training is expensive; running is cheap" era).