Thanks for linking "Line Goes Up? Inherent Limitations of Benchmarks for Evaluating Large Language Models". Also, I agree with:
MacAskill and Moorhouse argue that increases in training compute, inference compute and algorithmic efficiency have been increasing at a rate of 25 times per year, compared to the number of human researchers which increases 0.04 times per year, hence the 500x faster rate of growth. This is an inapt comparison, because in the calculation the capabilities of "AI researchers" are based on their access to compute and other performance improvements, while no such adjustment is made for human researchers, who also have access to more compute and other productivity enhancements each year.
That comparison seems simplistic and inapt for at least a few reasons. That does seem like pretty "trust me bro" justification for the intelligence explosion lol. Granted, I only listened to the accompanying podcast, so I can't speak too much to the paper.
Still, I am of two minds. I still buy into a lot of the premise of "Preparing for the Intelligence Explosion". I find the idea of getting collectively blindsided by rapid, uneven AI progress ~eminently plausible. There didn't even need to be that much of a fig leaf.
Don't get me wrong, I am not personally very confident in "expert level AI researcher for arbitrary domains" w/i the next few decades. Even so, it does seem like the sort of thing worth thinking about and preparing for.
From one perspective, AI coding tools are just recursive self-improvement gradually coming online. I think I understand some of the urgency, but I appreciate the skepticism a lot too.
Preparing for an intelligence explosion is a worthwhile thought experiment at least. It seems probably good to know what we would do in a world with "a lot of powerful AI" given that we are in a world where all sorts of people are trying to research/make/sell ~"a lot of powerful AI". Like just in case, at least.
I think I see multiple sides. Lots to think about.