Thank you very much for the review and aggregation of all these forecasts! Very nice!
I just have one point to add:
As the first aggregate prediction, you mention AI Impacts' 2023 survey of machine learning researchers. Your post gives the impression that it produced an aggregate forecast of 50% by 2047 for human-level AI. I think this is at least imprecise, if not incorrect.
AI Impacts asked about the timing of human-level performance by asking some participants about how soon they expect “high-level machine intelligence” (HLMI) and asking others about how soon they expect “full automation of labor” (FAOL). The resulting aggregate forecast gave a 50% chance of HLMI by 2047 and a 50% chance of FAOL by 2116. In your post, you ignore that AI Impacts uses two different concepts for human-level AI and just report the aggregate forecast for HLMI under the headline of human-level AI.
I think this is unfortunate because the difference matters. One of your main points is that experts think human-level AI is likely to arrive in your lifetime. However, most of us will probably not be alive in 2116.
Great point, Gregor! Tom Adamczewski has done an analysis that combines the answers to the questions about tasks and occupations. Here is the mainline graph.
Tom aggregates the results from the different questions in the most agnostic way possible, which I think is the best one can do.
I achieve this by simply including answers to both questions prior to aggregation, i.e. no special form of aggregation is used for aggregating tasks (HLMI) and occupations (FAOL). Since more respondents were asked about tasks than occupations, I achieve equal weight by resampling from the occupations (FAOL) responses.
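The pooling-with-resampling step Tom describes can be sketched as follows. The per-respondent numbers below are made up purely for illustration (the survey's actual response data is not reproduced here); the point is only the mechanics: upsample the smaller FAOL group to the size of the HLMI group so the two questions carry equal weight, then pool before aggregating.

```python
import random
import statistics

random.seed(0)  # for reproducibility of the resample

# Hypothetical per-respondent forecasts (year by which each respondent
# assigns a 50% chance), standing in for the survey's two question groups.
hlmi_samples = [2040, 2045, 2050, 2060, 2070, 2080]  # tasks: more respondents
faol_samples = [2090, 2110, 2130]                    # occupations: fewer respondents

# Resample (with replacement) the smaller FAOL group up to the size of
# the HLMI group, so each question contributes equal weight to the pool.
faol_resampled = random.choices(faol_samples, k=len(hlmi_samples))

# Pool both groups with no further special aggregation across questions.
pooled = sorted(hlmi_samples + faol_resampled)

print(statistics.median(pooled))  # aggregate median of the pooled forecasts
```

With real data each respondent contributes a full distribution rather than a single year, but the equal-weighting logic is the same.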
Here is how Tom suggests people describe the results.
Experts were asked when it will be feasible to automate all tasks or occupations. The median expert thinks this is 20% likely by 2048, and 80% likely by 2103. There was substantial disagreement among experts. For automation by 2048, the middle half of experts assigned it a probability between 1% and 60% (meaning ¼ assigned it a chance lower than 1%, and ¼ gave a chance higher than 60%). For automation by 2103, the middle half of experts' forecasts ranged from a 25% chance to a 100% chance.
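The "median expert" and "middle half" language above is just the median and interquartile range of the per-expert probabilities. A minimal sketch, using made-up probabilities (not the survey's data):

```python
import statistics

# Hypothetical per-expert probabilities of full automation by some year,
# sorted ascending. The survey reports the median and the quartiles of
# such a list of expert-assigned probabilities.
probs = sorted([0.0, 0.01, 0.05, 0.10, 0.20, 0.30, 0.45, 0.60, 0.75, 0.90])
n = len(probs)

median = statistics.median(probs)  # "the median expert thinks..."
q1 = probs[n // 4]        # roughly: ¼ of experts assigned a chance below this
q3 = probs[(3 * n) // 4]  # roughly: ¼ of experts assigned a chance above this

print(median)  # 0.25
print(q1, q3)  # the "middle half" of experts fall between these
```

The quartile indexing here is a rough positional cut; real analyses typically use an interpolated quantile method, but the interpretation is the same.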
Thanks! We’ve edited the text to include both the FAOL estimate that you mention and the combined estimate that Vasco mentions in the other reply. (The changes might not show up on site immediately, but will soon.) To the extent that people think FAOL will take longer than HLMI because of obstacles to AI doing jobs that are unrelated to its general capability, I think the estimate for HLMI is closer to an estimate of when we’ll have human-level AI than the estimate for FAOL is. But I don’t know if that’s the right interpretation, and you’re definitely right that it’s fairer to include the whole picture. I agree that there’s some tension between us saying “experts think human-level AI is likely to arrive in your lifetime” and this survey result, but I still think that sentence is true on the whole, so we’ll think about whether to add more detail about that.