In March 2025, Dario Amodei, the CEO of Anthropic, predicted that 90% of code would be written by AI as early as June 2025 and no later than September 2025. This turned out to be dead wrong.
Amodei claims that 90% of code at Anthropic (and some companies they work with) is being written by AI
Turns out not even 90% of code at Anthropic is being written by AI, if you listen to Dario Amodei’s full remarks. This strikes me as a pretty dishonest thing for him to say, especially since I don’t know if we would have even got that important clarification if his interlocutor (Marc Benioff, the CEO of Salesforce) hadn’t pressed him on it.
His prediction was about all code, not Anthropic’s code, so his prediction is still false. The article incorrectly states, in the italicized section under the title (I believe it’s called the deck), that the prediction was about Anthropic’s code. But this is what he said in March 2025:
I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code
There was no qualifier that this was only about Anthropic’s code. It’s about all code.
I’ll be blunt: I think Dario saying “Some people think that prediction is wrong” is dishonest. If you make a prediction and it’s wrong, you should just admit that it’s wrong.
Relevant part of the interview:

But now, getting to the job side of this, I do have a fair amount of concern about this. On one hand, I think comparative advantage is a very powerful tool. If I look at coding, programming, which is one area where AI is making the most progress, what we are finding is we are not far from the world—I think we’ll be there in three to six months—where AI is writing 90 percent of the code. And then in twelve months, we may be in a world where AI is writing essentially all of the code. But the programmer still needs to specify what the conditions of what you’re doing are, what the overall app you’re trying to make is, what the overall design decision is. How do we collaborate with other code that’s been written? How do we have some common sense on whether this is a secure design or an insecure design? So as long as there are these small pieces that a programmer, a human programmer, needs to do, the AI isn’t good at, I think human productivity will actually be enhanced. But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then we will eventually reach the point where the AIs can do everything that humans can. And I think that will happen in every industry.
For what it’s worth, at the time I thought he was talking about code at Anthropic, and another commenter agreed. The “we are finding” indicates to me that it’s at Anthropic. Claude 4.5 Sonnet disagrees with me and says that it can be read as being about the entire world.
(I really hope you’re right and the entire AI industry goes up in flames next year.)
To me, that quote really sounds like it’s about code in general, not code at Anthropic.
Dario’s own interpretation of the prediction, even now that it’s turned out false, seems to be about code in general, based on this defense:
I made this prediction that, you know, in six months, 90% of code would be written by AI models. Some people think that prediction is wrong, but within Anthropic and within a number of companies that we work with, that is absolutely true now.
If the prediction was just about Anthropic’s code, you’d think he would just say:
I made this prediction that in six months 90% of Anthropic’s code would be written by AI, and within Anthropic that is absolutely true now.
What he actually said comes across as a defense of a prediction he knows was at least partially falsified or is at least in doubt. If he just meant 90% of Anthropic’s code would be written by AI, he could just say he was unambiguously right and there’s no doubt about it.
Edit:
To address the part of your comment that changed after you edited it, in my interpretation, “we are finding” just means “we are learning” or “we are gaining information that” and is general enough that it doesn’t by itself support any particular interpretation. For example, he could have said:
...what we are finding is we are not far from the world—I think we’ll be there in three to six months—where AI is writing 90 percent of grant applications.
I wouldn’t interpret this to mean that Anthropic is writing any grant applications at all. My interpretation wouldn’t be different with or without the “what we are finding” part. If he just said, “I think we are not far from the world...”, to me, that would mean exactly the same thing.
Not sure if you caught it, but there was a detailed critique and discussion of this 90% figure on LessWrong.
Thanks! :)