a bet on OpenAI having better models in the future
OpenAI models will improve, and offerings from competitors will also improve. But will OpenAI’s offerings consistently maintain a lead over competitors?
Here is an animation I found of LLM leaderboard rankings over time. It seems like OpenAI has consistently been in the lead, but its lead tends to be pretty narrow. They might even lose their lead in the future, given the recent talent exodus. [Edit: On the other hand, it’s possible their best models are not publicly available.]
If switching costs were zero, it’s easy for me to imagine businesses becoming price-sensitive. Imagine calling a wrapper API which automatically selects the cheapest LLM that (a) passes your test suite and (b) has a sufficiently low rate of confabulations/misbehavior/etc.
Given the choice of an expensive LLM with 112 IQ, and a cheap LLM with 110 IQ, a rational business might only pay for the 112 IQ LLM if they really need those additional 2 IQ points. Perhaps only a small fraction of business applications will fall in the narrow range where they can be done with 112 IQ but not 110 IQ. For other applications, you get commoditization.
A wrapper API might also employ some sort of router model that tries to figure out if it’s worth paying extra for 2 more IQ points on a query-specific basis. For example, initially route to the cheapest LLM, and prompt that LLM really well, so it’s good at complaining if it can’t do the task. If it complains, retry with a more powerful LLM.
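The cheap-first cascade above might look roughly like this. The models are pure-Python stubs rather than real API calls, and the refusal marker is an assumed convention you would have to establish via prompting.

```python
# Hypothetical sketch of the cascade: try the cheap model first, prompted
# to emit a refusal marker on tasks it can't handle, and escalate only then.
# Both "models" here are stubs for illustration.

REFUSAL_MARKER = "CANNOT_COMPLETE"

def cheap_model(task: str) -> str:
    # Stub: pretend the cheap model only handles short tasks, and that
    # prompting has made it reliably emit the refusal marker otherwise.
    if len(task) > 40:
        return REFUSAL_MARKER
    return f"cheap answer to: {task}"

def expensive_model(task: str) -> str:
    return f"expensive answer to: {task}"

def route(task: str) -> tuple[str, str]:
    """Return (model used, answer), paying for the expensive model only
    when the cheap model signals it can't do the task."""
    answer = cheap_model(task)
    if REFUSAL_MARKER in answer:
        return "expensive", expensive_model(task)
    return "cheap", answer

print(route("add 2+2"))
print(route("write a formally verified distributed consensus implementation"))
```

The economics depend entirely on how often the cheap model's self-reported refusals are trustworthy, which is itself something you would measure with the same test suite.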
If the wrapper API were good enough, and everyone were using it, I could imagine a situation where even if your models consistently maintain a narrow lead, you barely eke out extra profits.
It’s possible that https://openrouter.ai/ is already pretty close to what I’m describing. Maybe working there would be a good EA job?
I don’t think OpenAI’s near-term ability to make money (e.g. because of the quality of its models) is particularly relevant to its valuation right now. It’s possible it won’t be in the lead in the future, but I think OpenAI investors are betting on worlds where OpenAI does clearly “win”, and the stickiness of its customers in other worlds doesn’t really affect the valuation much.
So I don’t agree that working on this would be useful compared with things that contribute to safety more directly.
How much do you think customers having zero friction in switching away from OpenAI would reduce its valuation? I think it wouldn’t change it much: less than 10%.
(Also note that OpenAI’s competitors are incentivised to make switching cheap, e.g. Anthropic’s API is very similar to OpenAI’s for this reason.)
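To illustrate how small the friction already is: the two providers' chat request shapes are nearly identical. The payloads below follow the public OpenAI and Anthropic chat APIs as of this writing, but treat the details (model names, required fields) as approximate; nothing is actually sent over the network.

```python
# Rough illustration of how little a switch can involve at the API level.
# Request shapes follow the providers' public chat APIs as of this writing;
# details may have changed. No requests are sent, we just build payloads.

prompt = "Summarize this contract in one paragraph."

openai_style = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": prompt}],
}

anthropic_style = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,  # required by Anthropic's API, optional in OpenAI's
    "messages": [{"role": "user", "content": prompt}],
}

# The core `messages` structure is identical, which for a simple
# integration is most of the switching-cost story.
assert openai_style["messages"] == anthropic_style["messages"]
```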
> I don’t think OpenAI’s near-term ability to make money (e.g. because of the quality of its models) is particularly relevant to its valuation right now. It’s possible it won’t be in the lead in the future, but I think OpenAI investors are betting on worlds where OpenAI does clearly “win”, and the stickiness of its customers in other worlds doesn’t really affect the valuation much.
They’re losing billions every year, and they need a continuous flow of investment to pay the bills. Even if current OpenAI investors are focused on an extreme upside scenario, that doesn’t mean they want unlimited exposure to OpenAI in their portfolio. Eventually OpenAI will find themselves talking to investors who care about moats, industry structure, profit and loss, etc.
The very fact that OpenAI has been throwing around revenue projections for the next 5 years suggests that investors care about those numbers.
I also think the extreme upside is not that compelling for OpenAI, due to their weird legal structure with capped profit and so on?
On the EA Forum it’s common to think in terms of clear “wins”, but it’s unclear to me that typical AI investors are thinking this way. E.g. if they were, I would expect them to be more concerned about doom, and OpenAI’s profit cap.
Dario Amodei’s recent post was rather far out, and even in his fairly wild scenario, no clear “win” was implied or required. There’s nothing in his post that implies LLM providers must be making outsized profits, in the same way that the fact that we’re having this discussion online doesn’t imply that typical dot-com bubble companies or telecom companies made outsized profits.
> How much do you think customers having zero friction in switching away from OpenAI would reduce its valuation? I think it wouldn’t change it much: less than 10%.
If it becomes common knowledge that LLMs are bad businesses, and investor interest dries up, that could make the difference between OpenAI joining the ranks of FAANG at a $1T+ valuation vs raising a down round.
Markets are ruled by fear and greed. Too much doomer discourse inadvertently fuels “greed” sentiment by focusing on rapid capability gain scenarios. Arguably, doomer messaging to AI investors should be more like: “If OpenAI succeeds, you’ll die. If it fails, you’ll lose your shirt. Not a good bet either way.”
There are liable to be tipping points here: chipping in to keep OpenAI afloat is less attractive if future investors seem less willing to do the same. There’s also the background risk of a recession (due to H5N1, a contested US election, a resumed port strike, etc.) to take into account, which could shift investor sentiment.
> So I don’t agree that working on this would be useful compared with things that contribute to safety more directly.
If you have a good way to contribute to safety, go for it. But efforts to slow AI development haven’t seemed very successful so far, and since I think slowing AI development is important and valuable, it seems worth discussing alternatives to the current strategy. I do think there’s a fair amount of groupthink in EA.