Thanks for posting this, Ted, it’s definitely made me think more about the potential barriers and the proper way to combine probability estimates.
One thing I was hoping you could clarify: in some of your comments and estimates, you seem to be suggesting that it’s decently plausible(?)[1] we will “have AGI” by 2043; it just won’t lead to transformative AGI before 2043, because progress in robotics, semiconductors, and energy scaling will be too slow by then. However, it seems to me that once we have (expensive/physically-limited) AGI, it should be able to significantly help with those other things, at least over the span of 10 years. So my main question is: does your model attach significantly higher probabilities to transformative AGI by 2053? Is it just that 2043 sits right near the base of a rise in the cumulative probability curve?
I wasn’t clear on whether this is just 60%, or 60% * 40%, or something else. If you could clarify, that would be helpful!
Agree that:
The odds of AGI by 2043 are much, much higher than transformative AGI by 2043
AGI will rapidly accelerate progress toward transformative AGI
The odds of transformative AGI by 2053 are higher than by 2043
We didn’t explicitly forecast 2053 in the paper, just 2043 (0.4%) and 2100 (41%). If I had to guess without much thought I might go with 3%. It’s a huge advantage to get 10 extra years to build fabs, make algorithms efficient, collect vast training sets, train from slow/expensive real-world feedback, and recover from rare setbacks.
My mental model is some kind of S curve: progress in the short term is extremely unlikely, progress in the medium term is more likely, and after a while, the longer it takes to happen, the less likely it is to happen in any given year, since that suggests some ingredient is still missing and hard to get.
I think you may be right that twenty years is before the S of my S curve really kicks in. Twenty just feels so short given everything that needs to be solved and scaled. I’m much more open-minded about forty.
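A minimal sketch of this kind of S-curve view, assuming a logistic cumulative probability. The midpoint and scale below are purely illustrative placeholders, not parameters fit to the paper’s 0.4%/41% forecasts:

```python
import math

def logistic_cdf(year, midpoint, scale):
    """Cumulative probability that the milestone has occurred by `year`,
    under a logistic S-curve with the given midpoint and steepness."""
    return 1.0 / (1.0 + math.exp(-(year - midpoint) / scale))

# Hypothetical parameters: midpoint 2110, scale 15 years.
# The curve stays near zero for the first couple of decades,
# then rises much faster further out.
for year in (2043, 2053, 2100):
    print(year, round(logistic_cdf(year, midpoint=2110, scale=15), 3))
```

Under these made-up parameters, the probability mass by 2043 is tiny, roughly doubles by 2053, and is an order of magnitude larger by 2100, which matches the qualitative shape described above (the specific numbers carry no weight).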
Interesting. Perhaps we have quite different interpretations of what AGI would be able to do with some set of compute/cost and time limitations. I haven’t had the chance yet to read the relevant aspects of your paper (I will try to do so over the weekend), but I suspect that we have very cruxy disagreements about the ability of a high-cost AGI—and perhaps even pre-general AI that can still aid R&D—to help overcome barriers in robotics, semiconductor design, and possibly even aspects of AI algorithm design.
Just to clarify, does your S-curve almost entirely rely on base rates of previous trends in technological development, or do you have a component in your model that says “there’s some X% chance that conditional on the aforementioned progress (60% * 40%) we get intermediate/general AI that causes the chance of sufficiently rapid progress in everything else to be Y%, because AI could actually assist in the R&D and thus could have far greater returns to progress than most other technologies”?
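For what it’s worth, the conditional decomposition in the question can be sketched by simple multiplication. The 60% and 40% are the figures mentioned in this thread (my labels for them are approximate interpretations, not the paper’s exact definitions); the X and Y values are purely hypothetical placeholders:

```python
# Combining conditional probability estimates by multiplication.
# 0.60 and 0.40 are the thread's figures; interpretations are approximate.
p_agi_algorithms = 0.60   # ~P(algorithms sufficient for AGI by 2043)
p_cheap_compute = 0.40    # ~P(compute is cheap enough | algorithms)
p_ai_aids_rnd = 0.80      # hypothetical X: P(AI meaningfully aids R&D | AGI)
p_rapid_progress = 0.50   # hypothetical Y: P(robotics/semis/energy scale fast enough | AI aids R&D)

p_transformative_agi = (p_agi_algorithms * p_cheap_compute
                        * p_ai_aids_rnd * p_rapid_progress)
print(round(p_transformative_agi, 3))  # 0.096
```

The point of the question is whether the last two factors should be much closer to 1 once AGI itself exists, rather than being estimated from historical base rates alone.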
No, it’s not just extrapolating base rates (that would be a big blunder). We assume that the development of proto-AGI or AGI will rapidly accelerate progress and investment, and our conditional forecasts are much more optimistic about progress than they would be otherwise.
However, it’s totally fair to disagree with us on the degree of that acceleration. Even with superhuman AGI, for example, I don’t think we’re moving away from semiconductor transistors in less than 15 years. Of course, it really depends on how superhuman this superhuman intelligence would be. We discuss this more in the essay.