I agree increasing the cost of compute or decreasing the benefits of compute would slow dangerous AI.
I claim taxing AI products isn’t great because I think the labs that might make world-ending models aren’t super sensitive to revenue—I wouldn’t expect a tax to change their behavior much. (Epistemic status: weak sense; stating an empirical crux.)
Relatedly, taxing AI products might differentially slow safe cool stuff like robots and autonomous vehicles and image generation. Ideally we’d only target LLMs or something.
Clarification: I think “hardware overhang” in this context means “the amount by which labs could quickly increase training compute (because they were artificially restrained for a while by regulation but can quickly catch up to where they would have been)”? There is no standard definition, and the AI Impacts definition you linked to seems inappropriate here (and almost always useless—it was a useful concept before the “training is expensive; running is cheap” era).