My understanding (and I think everyone else’s) of AI capabilities is largely shaped by how impressive the results of major papers intuitively seem.
I claim that this is not how I think about AI capabilities, and it is not how many AI researchers think about AI capabilities. For a particularly extreme example, the Go-explore paper out of Uber had a nominally very impressive result on Montezuma’s Revenge, but much of the AI community didn’t find it compelling because of the assumptions their algorithm relied on.
I’m not sure I fully understand how the metric would work. For the Atari example, it seems clear to me that we could easily reach it without making a generalizable AI system, or vice versa.
Tbc, I definitely did not intend for that to be an actual metric.
But let’s say that we could come up with a relevant metric. Then I’d agree with Garfinkel, as long as, before the release of “AI and Compute”, people in the community had roughly known the current state of AI relative to that metric and the rate of advance toward it.
I would say that I have a set of intuitions and impressions that function as a very weak prediction of what AI will look like in the future, along the lines of that sort of metric. I trust timelines based on extrapolation of progress using these intuitions more than timelines based solely on compute. To the extent that you hear timeline estimates from people like me who do this sort of “progress extrapolation” but who did not know how compute has been scaling, you would want to lengthen their timeline estimates. I’m not sure how timeline predictions break down on this axis.
I claim that this is not how I think about AI capabilities, and it is not how many AI researchers think about AI capabilities. For a particularly extreme example, the Go-explore paper out of Uber had a nominally very impressive result on Montezuma’s Revenge, but much of the AI community didn’t find it compelling because of the assumptions their algorithm relied on.
Sorry, I meant the results in light of the methods used, the implications for other research, etc. The sentence would better read, “My understanding (and I think everyone else’s) of AI capabilities is largely shaped by how impressive major papers seem.”
Tbc, I definitely did not intend for that to be an actual metric.
Yeah, totally got that—I just think that making a relevant metric would be hard, and we’d have to know a lot that we don’t know now, including whether current ML techniques can ever lead to AGI.
I would say that I have a set of intuitions and impressions that function as a very weak prediction of what AI will look like in the future, along the lines of that sort of metric. I trust timelines based on extrapolation of progress using these intuitions more than timelines based solely on compute.
Interesting. Yeah, I don’t much trust my own intuitions on our current progress. I’d love to have a better understanding of how to evaluate the implications of new developments, but I really can’t do much better than, “GPT-2 impressed me a lot more than AlphaStar.” And just to be 100% clear: I tend to think that the necessary amount of compute is somewhere between 18 and 300 years away. After we reach it, I’m stuck using my intuition to guess when we’ll have the right algorithms to create AGI.
Mostly agree with all of this; some nitpicks: