Possible research/forecasting questions to understand the economic value of AGI research
A common narrative about AI research is that we are on a path to AGI: society will be motivated to keep trying to create increasingly general AI systems, culminating in AGI. Since this is a core assumption of the AGI risk hypothesis, I think it's very important to understand whether this is actually the case.
Some people have predicted that AI research funding will dry up someday as the costs start to outweigh the benefits, resulting in an "AI winter." Jeff Bigham wrote in 2019 that the AI field will experience an "AI autumn," in which the AI research community will shift its focus from trying to develop human-level AI capabilities to developing socially valuable applications of narrow AI.
My view is that an AI winter is unlikely to happen anytime soon (10%), an AI autumn is likely to happen eventually (70%), and continued investment in AGI research all the way to AGI is somewhat unlikely (20%). But I think we can try to understand and predict these outcomes better. Here are some ideas for possibly testable research questions:
What will be the ROI on:
Microsoft's $1 billion investment in and partnership with OpenAI?
Microsoft's GPT-3 licensing deal with OpenAI?
Google/Alphabet's acquisition of DeepMind? (A rough sketch of the ROI arithmetic appears after this list of questions.)
How much money will OpenAI make by licensing GPT-3?
How long will it take for the technology behind GPT-2 and GPT-3 (roughly, getting generic language models to perform other language tasks without task-specific training) to become economically competitive, compared with how long similar technologies took after they were invented?
How long will it take for DeepMind and OpenAI to break even?
How do the growth rates of DeepMind's and OpenAI's revenues and expenses compare to those of other corporate research labs throughout history?
Will Alphabet downsize or shut down DeepMind?
Will Microsoft scale back or end its partnership with OpenAI?
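For the ROI questions above, here is a minimal sketch of the underlying arithmetic in Python, using entirely made-up placeholder figures (I have no real numbers for any of these deals, and the roi function is just illustrative); the point is only that the question reduces to estimating total cost and total realized value:

    # Return on investment: net gain divided by cost.
    # All figures used below are hypothetical placeholders, not real data.
    def roi(total_value_realized: float, total_cost: float) -> float:
        return (total_value_realized - total_cost) / total_cost

    # Hypothetical example: if a $1B investment eventually produced $1.3B in
    # value, then ROI = (1.3e9 - 1.0e9) / 1.0e9 = 0.3, i.e. roughly 30%.
    print(roi(total_value_realized=1.3e9, total_cost=1.0e9))  # ~0.3

The hard part is the numerator: estimating the value actually realized, which for these deals is mostly not publicly disclosed.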
Notes:
I don't know of any labs other than DeepMind and OpenAI that are actively trying to create AGI.
I have no experience with financial analysis, so I don't know if these questions are the kind that a financial analyst would actually be able to answer. They could be nonsensical for all I know.
Have you seen the Metaculus "Forecasting AI Progress" tournament?
Nope!