I don’t think OpenAI’s near-term ability to make money (e.g. because of the quality of its models) is particularly relevant to its valuation right now. It’s possible it won’t be in the lead in the future, but I think OpenAI investors are betting on worlds where OpenAI does clearly “win”, and the stickiness of its customers in other worlds doesn’t really affect the valuation much.
So I don’t agree that working on this would be useful compared with things that contribute to safety more directly.
How much do you think customers having zero friction in switching away from OpenAI would reduce its valuation? I think it wouldn’t change it much; less than 10%.
(Also note that OpenAI’s competitors are incentivised to make switching cheap, e.g. Anthropic’s API is very similar to OpenAI’s for this reason.)
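For illustration, here’s a minimal sketch of what a basic chat call looks like against each provider’s Python SDK (the model names and prompt are placeholders, and exact parameter details can vary between SDK versions):

```python
# OpenAI's Chat Completions API
from openai import OpenAI

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
openai_resp = openai_client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": "Summarise this contract."}],
)
print(openai_resp.choices[0].message.content)

# Anthropic's Messages API, with a near-identical request shape
import anthropic

anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
anthropic_resp = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model name
    max_tokens=1024,  # required by Anthropic's API
    messages=[{"role": "user", "content": "Summarise this contract."}],
)
print(anthropic_resp.content[0].text)
```

The request and response shapes are close enough that migrating a simple integration is mostly a matter of swapping the client and a few field names, which is part of why switching costs stay low.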
I don’t think OpenAI’s near-term ability to make money (e.g. because of the quality of its models) is particularly relevant to its valuation right now. It’s possible it won’t be in the lead in the future, but I think OpenAI investors are betting on worlds where OpenAI does clearly “win”, and the stickiness of its customers in other worlds doesn’t really affect the valuation much.
They’re losing billions every year, and they need a continuous flow of investment to pay the bills. Even if current OpenAI investors are focused on an extreme upside scenario, that doesn’t mean they want unlimited exposure to OpenAI in their portfolio. Eventually OpenAI will find themselves talking to investors who care about moats, industry structure, profit and loss, etc.
The very fact that OpenAI has been throwing around revenue projections for the next 5 years suggests that investors care about those numbers.
I also think the extreme upside isn’t that compelling in OpenAI’s case, given its weird capped-profit legal structure and so on?
On the EA Forum it’s common to think in terms of clear “wins”, but it’s unclear to me that typical AI investors are thinking this way. E.g. if they were, I would expect them to be more concerned about doom, and OpenAI’s profit cap.
Dario Amodei’s recent post was rather far out, and even in his fairly wild scenario no clear “win” was implied or required. There’s nothing in his post that implies LLM providers must be making outsized profits, in the same way that the fact that we’re having this discussion online doesn’t imply that typical dot-com bubble companies or telecom companies made outsized profits.
How much do you think customers having zero friction in switching away from OpenAI would reduce its valuation? I think it wouldn’t change it much; less than 10%.
If it becomes common knowledge that LLMs are bad businesses, and investor interest dries up, that could make the difference between OpenAI joining the ranks of FAANG at a $1T+ valuation vs raising a down round.
Markets are ruled by fear and greed. Too much doomer discourse inadvertently fuels “greed” sentiment by focusing on rapid capability gain scenarios. Arguably, doomer messaging to AI investors should be more like: “If OpenAI succeeds, you’ll die. If it fails, you’ll lose your shirt. Not a good bet either way.”
There are liable to be tipping points here: chipping in to keep OpenAI afloat looks less attractive if future investors seem less willing to do the same. There’s also the background risk of a recession (triggered by H5N1, a contested US election, a resumed port strike, etc.) to take into account, which could shift investor sentiment.
So I don’t agree that working on this would be useful compared with things that contribute to safety more directly.
If you have a good way to contribute to safety, go for it. So far efforts to slow AI development haven’t seemed very successful, and I think slowing AI development is an important and valuable thing to do. So it seems worth discussing alternatives to the current strategy there. I do think there’s a fair amount of groupthink in EA.