Very quickly:
1. I think that “secret intelligence explosions” are something that would be quite scary and disruptive. So they seem worth spending some attention on, as this writing does.
2. I feel like there are some key assumptions here. I’d naively guess:
a. An intelligence explosion like you’re describing doesn’t seem very likely to me. It seems to imply a discontinuous jump (as opposed to regular acceleration), and also implies that this resulting intelligence would have profound market value, such that the investments would have some steeply increased ROI at this point.
b. This model also implies that it might be feasible for multiple actors, particularly isolated ones, to make an “intelligence explosion.” I’d naively expect there to be a ton of competition in this area, and I’d expect that competition would greatly decrease the value of the marginal intelligence gain (i.e. cheap LLMs can do much of the work that expensive LLMs do). I’d naively expect that if there are any discontinuous gains to be made, they’ll be made by the largest actors.
I agree that abstract arguments about the costs/benefits of privacy don’t make either case certain. But I think the empirical evidence so far points fairly strongly against secrecy. That said, of course I expect that the situation could change quickly as major circumstances change.
> a. An intelligence explosion like you’re describing doesn’t seem very likely to me. It seems to imply a discontinuous jump (as opposed to regular acceleration), and also implies that this resulting intelligence would have profound market value, such that the investments would have some steeply increased ROI at this point.
I’m not exactly sure what you mean by discontinuous jump. I expect the usefulness of AI systems to be pretty “continuous” inside AI companies and “discontinuous” outside AI companies. If you think that:
1. model release cadence will stay similar,
2. but capabilities will accelerate,
3. then you should also expect external AI progress to be more “discontinuous” than it currently is.
I gave some reasons why I don’t think AI companies will want to externally deploy their best models (like less benefit from user growth), so maybe you disagree with that, or do you disagree with 1, 2, or 3?
> b. This model also implies that it might be feasible for multiple actors, particularly isolated ones, to make an “intelligence explosion.” I’d naively expect there to be a ton of competition in this area, and I’d expect that competition would greatly decrease the value of the marginal intelligence gain (i.e. cheap LLMs can do much of the work that expensive LLMs do). I’d naively expect that if there are any discontinuous gains to be made, they’ll be made by the largest actors.
I do think that more than one actor (e.g. 3 actors) may be trying to do an IE at the same time, but I’m not sure why this is implied by my post. I think my model isn’t especially sensitive to single vs. multiple competing IEs, but it’s possible you’re seeing something I’m not. I don’t really follow:
> competition would greatly decrease the value of the marginal intelligence gain (i.e. cheap LLMs can do much of the work that expensive LLMs do)
Do you expect competition to increase dramatically from where we are right now? If not, then I think the current level of competition empirically does lead to people investing a lot in AI development, so I’m not sure I quite follow your line of reasoning.
> I gave some reasons why I don’t think AI companies will want to externally deploy their best models (like less benefit from user growth), so maybe you disagree with that, or do you disagree with 1, 2, or 3?
I understand that there are some reasons that companies might do this. On 1/2/3, I’m really unsure about the details of (2). If capabilities accelerate, but predictably and slowly, I assume this wouldn’t feel very discontinuous.
Also, there’s a major difference between AIs getting better and them becoming more useful. Often there are diminishing returns to intelligence.
> I do think that more than one actor (e.g. 3 actors) may be trying to do an IE at the same time, but I’m not sure why this is implied by my post. I think my model isn’t especially sensitive to single vs. multiple competing IEs, but it’s possible you’re seeing something I’m not.
Sorry, I may have misunderstood that. But if there are only one or two potential actors, that does seem to make the situation far easier. Like, it could be fairly clear to many international actors that there are 1-2 firms that might be making major breakthroughs. In that case, we might just need to worry about policing these firms. This seems fairly possible to me (if we can be somewhat competent).
> Do you expect competition to increase dramatically from where we are right now? If not, then I think the current level of competition empirically does lead to people investing a lot in AI development, so I’m not sure I quite follow your line of reasoning.
I’d expect that the market caps of these companies would be far higher if it were clear that there would be less competition later, and I’d correspondingly expect these companies to do (even more) R&D.
I’m quite sure investors are nervous about the monopoly prospects of LLM companies.
Right now, I don’t think it’s clear to anyone where OpenAI/Anthropic will really make money 5+ years from now. It seems like [slightly worse AIs] are often both cheap/open-source and good enough. I think both companies are very promising; it’s just that their future market value is very unclear.
I’ve heard that part of the Chinese strategy is, “Don’t worry too much about being on the absolute frontier, because it’s far cheaper to just copy from 1-2 steps behind.”
I wasn’t saying that “competition would greatly decrease the value of the marginal intelligence gain” in the sense of “things will get worse from where we are now”, but in the sense of “things are generally worse than they would be without such competition”.