I gave some reasons why I don’t think AI companies will want to externally deploy their best models (e.g. less benefit from user growth), so maybe you disagree with that, or do you disagree with (1), (2), or (3)?
I understand that there are some reasons that companies might do this. On (1)/(2)/(3): I’m really unsure about the details of (2). If capabilities accelerate, but predictably and slowly, I assume this wouldn’t feel very discontinuous.
Also, there’s a major difference between AIs getting better and them becoming more useful. Often there are diminishing returns to intelligence.
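To make the diminishing-returns point a bit more concrete, here’s a toy sketch (the logarithmic form is just an illustrative assumption on my part, not a claim about the actual shape of the curve): if usefulness scales like $U(c) = \log(1 + c)$ in capability $c$, then marginal usefulness is $\frac{dU}{dc} = \frac{1}{1 + c}$, which shrinks as capability grows. So going from $c = 10$ to $c = 20$ buys much less added usefulness than going from $c = 0$ to $c = 10$, even though the raw capability gain is the same.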
> I do think that more than one actor (e.g. 3 actors) may be trying to IE at the same time, but I’m not sure why this is implied by my post. I think my model isn’t especially sensitive to single vs multiple competing IEs, but possible you’re seeing something I’m not.
Sorry, I may have misunderstood that. But if there are only one or two potential actors, that does seem to make the situation far easier. Like, it could be fairly clear to many international actors that there are 1-2 firms that might be making major breakthroughs. In that case, we might just need to worry about policing those firms. This seems fairly possible to me (if we can be somewhat competent).
Do you expect competition to increase dramatically from where we are right now? If not, then I think current levels of competition empirically do lead to people investing a lot in AI development, so I’m not sure I quite follow your line of reasoning.
I’d expect the market caps of these companies to be far higher if it were clear that there would be less competition later, and I’d correspondingly expect these companies to do (even more) R&D.
I’m fairly sure investors are quite nervous about whether LLM companies can actually secure monopoly positions.
Right now, I don’t think it’s clear to anyone where OpenAI/Anthropic will really make money 5+ years from now. It seems like slightly worse AIs are often both cheap / open-source and good enough. I think both companies are very promising; it’s just that their future market value is very unclear.
I’ve heard that part of the Chinese strategy is, “Don’t worry too much about being on the absolute frontier, because it’s far cheaper to just copy from 1-2 steps behind.”
I wasn’t saying that “competition would greatly decrease the value of the marginal intelligence gain” in the sense of “things will get worse from where we are now”, but in the sense of “things are generally worse than they would be without such competition”.