I think this kind of research will help inform people about the economic impacts of AI, but I don’t think the primary benefits will be for forecasters per se. Instead, I’d expect policymakers, academics, journalists, investors, and others who value academic prestige and work within established disciplines to be the main audiences for research like this.
I don’t think most expert AI forecasters would really value this paper. They’re generally already highly informed about AI progress, and might have read relatively niche research on the topic, like Ajeya Cotra and Tom Davidson’s work at OpenPhil. The methodology in this paper might seem obvious to them (“of course firms will automate when it’s cost effective!”), and its conclusions wouldn’t be strong or comprehensive enough to change their views.
It’s more plausible that future work building on this paper would inform forecasters. As you mentioned above, this work is only about computer vision systems, so it would be useful to see the methodology applied to LLMs and other kinds of AI. This paper also draws on a relatively limited dataset, so it’d be good to see the methodology tested against more empirical evidence. Right now, I think most AI forecasters rely on either macro-level models like Davidson’s or simple intuitions like “we’ll get explosive growth when we have automated remote workers.” This line of research could eventually lead to a much more detailed economic model of AI automation, which I could imagine becoming a key source of information for forecasters.
But expert forecasters are only one group of people whose expectations about the future matter. I’d expect this research to be more valuable for other kinds of people whose opinions about AI development also matter, such as:
Economists (Korinek, Trammell, Brynjolfsson, Chad Jones, Daniel Rock)
Policymakers (Researchers at policy think tanks and staffers in political institutions who spend a large share of their time thinking about AI)
Other educated people who influence public debates, such as journalists or investors
Media coverage of this paper suggests it may be influential among those audiences.
Thanks again for the comment.
You think that the primary value of the paper is in its help with forecasting, right?
In that case, do you think it would be fair to ask expert forecasters if this paper is useful or not?