This is clarifying context, thanks. It’s a common strategy for tech start-ups to run in the red for years while they build a moat around themselves (particularly through network effects). Amazon built a moat by drawing vendors and buyers onto its platform while reducing logistics costs, and Uber by drawing taxi drivers and riders onto its platform. Tesla started out with a technological edge.
Currently, I don’t see a strong case that OpenAI and Anthropic are building up a moat.
–> Do you have any moats in mind that I missed? Curious.
Network effects aren’t much of a moat here, since their users mostly use the tools by themselves (though their prompts are used to improve the tools; I’m not sure how much). It doesn’t seem a big deal for most users to switch to a competing chat tool or image-generation tool, say. Potentially, current ChatGPT or Claude users can later move to new model-based tools that are profitable for those AI companies. But as it stands, OpenAI and Anthropic are losing money on existing users on one end, while being under threat of losing users to cheap model alternatives on the other. It’s not clear that the head start they got from releasing increasingly extractive, general-use models will make them the ‘winners’. Maybe their researchers will be the ones to come up with new capability breakthroughs that can somehow be used to maintain an industry edge (incl. in e.g. military applications). But over the last two years, the gap in user functionality between newer versions of Claude and ChatGPT and cheaper competing models (like Meta’s and DeepSeek’s) has been closing. OpenAI sank hundreds of millions of dollars over 18 months into a model that was not worth calling GPT-5, and meanwhile other players caught up to the model functionality of GPT-4.
OpenAI seems reflective of an industry where investment far outstrips user demand, as happened during the dotcom bubble.
This is not to say that there could not be large-model developers with at least tens of billions of dollars in yearly profit within the next decade. That is what current investments and continued R&D are aimed towards. It seems the default scenario. Personally, I’ll work hard to prevent that scenario, since at that point restricting the development of increasingly unscoped (and harmful) models will basically be intractable.
I think there are serious risks for LLM developers (e.g. a better DeepSeek could be released at any point), but also some serious opportunities:
1. The game is still early. It’s hard to say what moats might exist 5 years from now. This is a chaotic field.
2. The ChatGPT/Claude teams put a lot of attention into their frontends, API support, documentation, monitoring, moderation, and lots of surrounding tooling. It’s a ton of work to build a production-grade service, beyond just having one good LLM.
3. There’s always the chance of something like a Decisive Strategic Advantage later.
Personally, if I were an investor, both companies would seem promising to me. Both are very risky, with high chances of total failure depending on how things play out. But that’s common for startups. I’d bet that there’s a good chance that moats will emerge later.