Yes, I think “AI as normal technology” is probably a misnomer, or at least very liable to misinterpretation. Perhaps this later post by the authors is helpful: they clarify that by “normal” they don’t mean “mundane or predictable”.
But I’m not sure a world where human CEOs defer a lot of decisions, including high-level strategy, to AI requires something approximating AGI. Couldn’t we also see this happen in a world with very narrow but intelligent “Tool AI” systems? In other words, CEOs could be deferring many decisions “to AI”, but to many different AI systems, each with relatively narrow competencies. This might depend on whether you think “high-level strategy” is a narrow or a general skill.
From the Asterisk interview you linked, it doesn’t sound like Arvind expects AI to remain narrow and tool-like forever, just that reaching AGI will take longer than most people expect, and will happen only after AIs are used extensively in the real world. He admits he would significantly change his assessment if we saw a fairly general-purpose personal assistant work out of the box in 2025-26.
Thank you! Appreciate the feedback :)