Yes, I think “AI as normal technology” is probably a misnomer, or at least very liable to be misinterpreted. Perhaps this later post by the authors is helpful: they clarify they don’t mean “mundane or predictable” when they say “normal”.
But I’m not sure a world where human CEOs defer a lot of decisions, including high-level strategy, to AI requires something that is approximately AGI. Couldn’t we also see this happen in a world with very narrow but intelligent “Tool AI” systems? In other words, CEOs could be deferring a lot of decisions “to AI”, but to many different AI systems, each of which has relatively narrow competencies. This might depend on your view of how narrow or general a skill “high-level strategy” is.
From the Asterisk interview you linked, it doesn’t sound like Arvind expects AI to remain narrow and tool-like forever. He just expects reaching AGI to take longer than most people predict, and to happen only after AIs have been used extensively in the real world. He admits he would significantly change his assessment if we saw a fairly general-purpose personal assistant work out of the box in 2025-26.
Thank you for weighing in! I appreciate your perspective.
“Normal technology” really invokes a sense of, well, normal technology: smartphones, the Internet, apps, autocorrect, Google, recommender algorithms on YouTube and Netflix, that sort of thing.
You raised an interesting question about tool AI vs. agent AI, but then you also (rather helpfully!) answered your own question. Arvind seems to be imagining a steady, continuous path to agentic AGI, or to something with very AGI-like capabilities, such as the ability to autonomously run a whole company without human input. His expected pace is relatively slow compared to, say, Metaculus, though not what I’d necessarily call “slow” without qualification.
I had been imagining that the “normal technology” AIs he describes in the not-too-distant future would be agentic and mostly autonomous, only asking for human feedback occasionally. But I don’t know for sure what Arvind has in mind.
Really, it would be nice if Arvind could weigh in and paint a more vivid picture of what he’s imagining. The more I get into these kinds of discussions, the more I realize people imagine completely different things based on the same descriptions of hypothetical future AI systems. I think we have work to do getting clear on what we’re actually talking about as these discussions continue.