I claim that this is not how I think about AI capabilities, nor how many AI researchers think about them. For a particularly extreme example, the Go-Explore paper out of Uber had a nominally very impressive result on Montezuma’s Revenge, but much of the AI community didn’t find it compelling because of the assumptions their algorithm relied on.
Sorry, I meant the results in light of which methods were used, their implications for other research, and so on. The sentence would better read, “My understanding (and I think everyone else’s) of AI capabilities is largely shaped by how impressive major papers seem.”
To be clear, I definitely did not intend for that to be an actual metric.
Yeah, totally got that—I just think that making a relevant metric would be hard, and we’d have to know a lot that we don’t know now, including whether current ML techniques can ever lead to AGI.
I would say that I have a set of intuitions and impressions that function as a very weak prediction of what AI will look like in the future, roughly along the lines of that sort of metric. I trust timelines based on extrapolating progress using these intuitions more than timelines based solely on compute.
Interesting. Yeah, I don’t much trust my own intuitions about our current progress. I’d love to have a better understanding of how to evaluate the implications of new developments, but I really can’t do much better than, “GPT-2 impressed me a lot more than AlphaStar.” And just to be 100% clear: I tend to think we’ll reach the necessary amount of compute somewhere in the next 18 to 300 years. After we reach it, I’m stuck using my intuition to guess when we’ll have the right algorithms to create AGI.