It’s the first chapter in a new guide about how to help make AI go well (aimed at new audiences).
I think it’s generally important for people who want to help to understand the strategic picture.
Plus, in my experience, the thing most likely to make people take AI risk more seriously is coming to believe that powerful AI might happen soon.
I appreciate that talking about this could also wake more people up to AGI, but I expect the guide overall will proportionally boost the safety talent pool a lot more than the speeding-up-AI talent pool.
(And long term I think it’s also better to be open about my actual thinking rather than try to control the messaging to that degree, and a big part of the case in favour, in my mind, is that it might happen soon.)
Minor, but I actually think DeepSeek was pretty on trend for algorithmic efficiency (as explained in the post). The main surprise was that it was a Chinese company near the forefront of algorithmic efficiency (though here, several months before that, I suggest that the Chinese are close to the frontier there).