Thanks for the comment @aogara <3. I agree this paper seems very good from an academic point of view.
My main question: how does this research help in preventing existential risks from AI?
Other questions:
What are the practical implications of this paper?
What insights does this model provide regarding text-based task automation using LLMs?
Looking at one of the main computer vision tasks, self-driving cars: what insights does their model provide? (Tesla is probably ~3 years away from self-driving cars, and this won’t require any hardware update, so there would be no additional hardware cost.)
Mainly I think this paper will help inform people about the potential economic implications of AI development. These implications are important for people to understand because they contribute to AI x-risks. For example, explosive economic growth could lead to many new scientific innovations in a short period of time, with incredible upside but also serious risks, and perhaps warranting more centralized control over AI during that critical period. Another example would be automation: if most economic productivity comes from AI systems rather than human labor or other forms of capital, this will dramatically change the global balance of power and contribute to many existential risks.
Thanks again for the comment.
You think the primary value of the paper lies in helping with forecasting, right?
In that case, do you think it would be fair to ask expert forecasters if this paper is useful or not?
I think this kind of research will help inform people about the economic impacts of AI, but I don’t think the primary benefits will be for forecasters per se. Instead, I’d expect policymakers, academics, journalists, investors, and other groups of people who value academic prestige and working within established disciplines to be the main groups that would learn from research like this.
I don’t think most expert AI forecasters would really value this paper. They’re generally already highly informed about AI progress, and might have read relatively niche research on the topic, like Ajeya Cotra and Tom Davidson’s work at OpenPhil. The methodology in this paper might seem obvious to them (“of course firms will automate when it’s cost effective!”), and its conclusions wouldn’t be strong or comprehensive enough to change their views.
It’s more plausible that future work building on this paper would inform forecasters. As you mentioned above, this work only covers computer vision systems, so it would be useful to see the methodology applied to LLMs and other kinds of AI. The paper also relies on a relatively limited dataset, so it would be good to see the approach tested against broader empirical evidence. Right now, I think most AI forecasters rely on either macro-level models like Davidson’s or simple intuitions like “we’ll get explosive growth when we have automated remote workers.” This line of research could eventually lead to a much more detailed economic model of AI automation, which I could imagine becoming a key source of information for forecasters.
But expert forecasters are only one group of people whose expectations about the future matter. I’d expect this research to be more valuable for other kinds of people whose opinions about AI development also matter, such as:
Economists (Korinek, Trammell, Brynjolfsson, Chad Jones, Daniel Rock)
Policymakers (Researchers at policy think tanks and staffers in political institutions who spend a large share of their time thinking about AI)
Other educated people who influence public debates, such as journalists or investors
Media coverage of this paper suggests it may be influential among those audiences.