Setting aside general arguments about companies' conflicts of interest regarding AI projections, I want to note that the revenue projections of these companies are not straight-line extrapolations of current trends.
Different sources suggest OpenAI does not expect to be profitable until 2029, and its revenue projection for 2029 is around $100-120 billion. Similarly, Anthropic expects $34.5 billion in revenue in 2027. These are very significant numbers, but for comparison Microsoft has an annual revenue of $250 billion. When I see headlines like "AGI by 2027", I expect something far scarier than $34.5 billion in annual revenue. Of course one can argue that business deployment of AI takes time, that companies can't capture all the value they produce, and so on. Nonetheless, I think these numbers are helpful for keeping things in perspective.
Over the next few years, I expect AI revenues to continue increasing 2-4x per year, as they have recently, which gets you to those kinds of numbers in 2027.
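To sanity-check the compounding, here's a quick sketch. The ~$4B starting revenue for 2025 is an illustrative assumption, not a reported figure:

```python
def project_revenue(start_b: float, growth: float, years: int) -> float:
    """Compound annual revenue (in $B) by `growth` per year for `years` years."""
    return start_b * growth ** years

# Assumed starting point: ~$4B annual revenue in 2025 (illustrative only).
for growth in (2, 3, 4):
    end = project_revenue(4, growth, years=2)
    print(f"{growth}x/year -> 2027: ~${end:.0f}B")
```

At 3x per year, the assumed starting point lands at ~$36B for 2027, in the same ballpark as Anthropic's projected $34.5 billion.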
There won't be widespread automation; rather, AI will make money in a few key areas with few barriers to adoption, especially programming.
You could then reach an inflection point where AI starts to help with AI research. AI inference gets mostly devoted to that task for a while. Major progress is made, perhaps reaching AGI, without further external deployment.
Revenues would then explode after that point, but OpenAI aren't going to put that in their investor deck right now.
You could also see an acceleration in revenues when agents start to work. And in general I expect revenues to strongly lag capabilities. (Revenue also depends on the gap between the leading model and the best free model.)
Overall I see the near-term revenue figures as consistent with an AGI-soon scenario. I agree $100bn in 2029 is harder to square, but I think that's in part because OpenAI thinks investors won't believe higher figures.
So, OpenAI is telling the truth when it says AGI will come soon and lying when it says AGI will not come soon?
Sam Altman's most recent timeline is "thousands of days", which is so vague. 2,000 days (the minimum "thousands of days" could mean) is 5.5 years. 9,000 days (the point past which you might expect him to just say "ten thousand days") is 24.7 years. So, 5-25 years?
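The day-count arithmetic checks out (using 365 days per year):

```python
DAYS_PER_YEAR = 365

# Convert the bounds of "thousands of days" into years.
for days in (2_000, 9_000):
    print(f"{days} days is about {days / DAYS_PER_YEAR:.1f} years")
```

Which prints roughly 5.5 years for 2,000 days and 24.7 years for 9,000 days.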
I don't especially trust OpenAI's statements on either front.
The framing of the piece is "the companies are making these claims, let's dig into the evidence for ourselves", not "let's believe the companies".
(I think the companies are most worth listening to when it comes to specific capabilities that will arrive in the next 2-3 years.)