I agree, and I do not think that either slowing AI down or speeding it up is desirable, for reasons related to Rohin Shah's comment on LW about why slowing down AI progress is undesirable:
1. It makes it easier for a future misaligned AI to take over by increasing overhangs, both via compute progress and algorithmic efficiency progress. (This is basically the same sort of argument as "Every 18 months, the minimum IQ necessary to destroy the world drops by one point.")
2. Such strategies are likely to disproportionately penalize safety-conscious actors.
(As a concrete example of (2): if you build public support, maybe the public calls for compute restrictions on AGI companies, and this ends up binding the companies with AGI safety teams but not the various AI companies that are skeptical of "AGI" and "AI x-risk" and say they are just building powerful AI tools, without calling it AGI.)
For me personally there's a third reason, which is that (to a first approximation) I have a limited amount of resources, and it seems better to spend them on the "use good alignment techniques" plan rather than the "try to not build AGI" plan. But that's specific to me.