I currently lead EA funds.
Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.
Unless explicitly stated otherwise, opinions are my own, not my employer’s.
a. An intelligence explosion like you’re describing doesn’t seem very likely to me. It seems to imply a discontinuous jump (as opposed to regular acceleration), and also implies that this resulting intelligence would have profound market value, such that the investments would have some steeply increased ROI at this point.
I’m not exactly sure what you mean by a discontinuous jump. I expect the usefulness of AI systems to be fairly “continuous” inside AI companies and “discontinuous” outside them. If you think that:
1. model release cadence will stay similar
2. but capabilities will accelerate,
3. then you should also expect external AI progress to be more “discontinuous” than it currently is.
I gave some reasons why I don’t think AI companies will want to externally deploy their best models (e.g. less benefit from user growth). Maybe you disagree with that, or do you disagree with 1, 2, or 3?
b. This model also implies that it might be feasible for multiple actors, particularly isolated ones, to make an “intelligent explosion.” I’d naively expect there to be a ton of competition in this area, and I’d expect that competition would greatly decrease the value of the marginal intelligence gain (i.e. cheap LLMs can do much of the work that expensive LLMs do). I’d naively expect that if there are any discontinuous gains to be made, they’ll be made by the largest actors.
I do think that more than one actor (e.g. three actors) may be attempting an intelligence explosion at the same time, but I’m not sure why my post implies this. I don’t think my model is especially sensitive to a single IE vs. multiple competing IEs, though it’s possible you’re seeing something I’m not. I don’t really follow:
“competition would greatly decrease the value of the marginal intelligence gain (i.e. cheap LLMs can do much of the work that expensive LLMs do)”
Do you expect competition to increase dramatically from where it is right now? If not, then current levels of competition empirically do lead to substantial investment in AI development, so I’m not sure I follow your line of reasoning.