“we haven’t had examples where a huge amount of cognitive labour has been dumped on a scientific field and we’ve been able to observe how much progress in that field accelerates”
Well, Claude 3.5 and I can think of some examples that contradict that statement. These are Claude’s estimates:
The rise of citizen science and crowdsourcing in certain fields. For instance, projects like Galaxy Zoo in astronomy have allowed large numbers of amateur scientists to contribute to data analysis, accelerating progress in classifying galaxies.
Duration: Ongoing since 2007 (about 17 years)
Degree of influx: Substantial. Galaxy Zoo alone has involved over 250,000 volunteers.
Acceleration: Significant but focused. The original Galaxy Zoo project classified ~900,000 galaxies in less than a year, a task that would have taken years for professional astronomers. However, the acceleration is primarily in data processing rather than theoretical advancements.
Estimated acceleration: 10-20x faster for specific classification tasks, but perhaps only 2-3x acceleration for the field of galaxy morphology as a whole.
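One way to square the task-level and field-level numbers is an Amdahl's-law-style cap: if only part of the field's progress runs through classification, a big task speedup translates to a modest field speedup. A minimal sketch (the fraction `p` is an illustrative assumption, not an estimate from the examples above):

```python
# Amdahl's-law-style estimate: if only a fraction p of a field's progress
# is bottlenecked on a task that gets s times faster, the overall
# acceleration is capped at 1 / ((1 - p) + p / s).

def field_acceleration(p: float, s: float) -> float:
    """Overall speedup when fraction p of the work speeds up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Illustrative (made-up) assumptions: classification is 40-60% of the work
# driving galaxy-morphology research, and crowdsourcing makes it 15x faster.
print(field_acceleration(p=0.4, s=15))  # ~1.6x
print(field_acceleration(p=0.6, s=15))  # ~2.3x -- roughly the 2-3x field estimate
```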
The influx of physicists and mathematicians into quantitative finance and economics in the 1980s and 1990s. This led to rapid developments in financial modeling and econometrics.
Duration: About 20 years (concentrated influx)
Degree of influx: Moderate. Estimated several thousand PhDs over this period.
Acceleration: Substantial. This influx led to the rapid development of complex financial models and the growth of quantitative trading.
Estimated acceleration: 5-10x in areas like options pricing and risk modeling. Overall acceleration in quantitative finance might be around 3-5x.
The growth of computer science departments in universities during the 1960s and 1970s, which led to an acceleration in theoretical computer science and algorithm development.
Duration: About 20 years
Degree of influx: Significant. The number of CS departments and graduates grew rapidly during this period.
Acceleration: Major. This period saw fundamental developments in algorithms, programming languages, and theoretical computer science.
Estimated acceleration: 5-8x in theoretical computer science and algorithm development. The overall field might have seen a 3-4x acceleration.
I think it’s also interesting to see how open source work on language models and academia clearly have thousands of times more contributors, yet seem to make relatively limited progress compared to the top AI labs. The main reason being, presumably, the lack of compute for experiments and training. So that’s one reason to be less concerned about a major influx of cognitive labour with limited compute.
Thanks, this is a great comment.

The first and second examples seem pretty good, and are useful reference points.
The third example doesn’t seem nearly as useful though. What’s particularly unusual about the AGI case is that there are two useful inputs to AI R&D—cognitive labour and compute for experiments—and the former will rise very rapidly while the latter will not. In particular, I imagine CS departments also saw their compute inputs growing over that period. And I imagine some of the developments discussed (e.g. proofs about algorithms) only have cognitive labour as an input.
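To make the two-input point concrete, here is a minimal sketch using a CES production function for research output, with cognitive labour L and compute C as complementary inputs. All parameter values (the share and the elasticity parameter rho) are illustrative assumptions, not estimates from this thread:

```python
# CES production function for research output with two inputs:
# cognitive labour L and compute C. rho < 0 makes the inputs strong
# complements, so flooding one input while the other stays fixed
# yields limited gains. All numbers here are illustrative assumptions.

def research_output(L: float, C: float, share: float = 0.5, rho: float = -1.0) -> float:
    """CES aggregate: (share * L**rho + (1 - share) * C**rho) ** (1 / rho)."""
    return (share * L**rho + (1 - share) * C**rho) ** (1.0 / rho)

baseline = research_output(L=1.0, C=1.0)
labour_flood = research_output(L=1000.0, C=1.0)  # 1000x labour, fixed compute
balanced = research_output(L=1000.0, C=1000.0)   # both inputs scaled together

print(labour_flood / baseline)  # ~2.0x: labour alone hits the compute bottleneck
print(balanced / baseline)      # 1000x: no bottleneck when inputs scale together
```

Under these assumed parameters, a thousandfold influx of cognitive labour with compute held fixed buys only about a 2x speedup, which is the shape of the open-source-vs-labs observation above.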
As for the second example (quant finance), I suppose the ‘data’ input to doing this work stayed constant while the cognitive effort rose, so it works as an example. Though it may be a field with an unusual superabundance of data, unlike ML.
The first example involves a kind of ‘data overhang’ that the cognitive labour quickly eats up. Perhaps in a similar way AGI will “eat up” all the insights that are implicit in existing data from ML experiments.
What I think all the examples currently lack is a measure of how the pace of overall progress changed that isn’t completely made up. It could be interesting to list out the achievements in each time period and ask some experts what they think. There’s an interesting empirical project here, I think.
All the examples also lack anything like the scale at which cognitive labour will increase with AGI. This makes comparison even harder. (Though if we can get 3x speed-ups from mild influxes of cognitive labour, that makes 10x speed-ups more plausible.)
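As a rough consistency check on that parenthetical, one can back out a diminishing-returns exponent from a small observed speedup and extrapolate. The functional form and the assumed 10x influx size below are my own assumptions, just to show the shape of the argument:

```python
import math

# Back-of-envelope: assume acceleration scales as (labour multiplier)**alpha.
# If a ~10x influx of cognitive labour produced a ~3x speedup (an assumed
# reading of the historical examples), the implied exponent is:
alpha = math.log(3) / math.log(10)   # ~0.48

# Labour multiplier needed for a 10x field-wide speedup under this curve:
print(10 ** (1 / alpha))             # ~125x

# An AGI-scale 1000x influx would then give:
print(1000 ** alpha)                 # ~27x speedup (if the curve held, which it may not)
```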
I tried to edit the paragraph (though LW won’t let me) to:
I think we don’t know which perspective is right; we haven’t had many examples where a huge amount of cognitive labour has been dumped on a scientific field while other inputs to progress have remained constant, and we’ve accurately measured how much overall progress in that field accelerates. (Edit: though this comment suggests some interesting examples.)