Disclaimer: Be careful about definitions when interpreting Metaculus questions. Their resolution criteria for AGI do not align with my own (e.g. an AI meeting the named criteria would still not replace all human tasks). Also, recent developments have brought an inflow of additional forecasters, which should be factored in.
I've listed some of my current sources below. I hope this helps!
Metaculus forecasts:
Date of general AI: 25th percentile: 2032, median: 2045, 75th percentile: 2067
Date Weakly General AI is Publicly Known: 25th percentile: 2026, median: 2032, 75th percentile: 2044
Date AIs Capable of Developing AI Software: 25th percentile: 2026, median: 2032, 75th percentile: 2045
Other:
AI Progress Essay Contest (lists relevant questions)
More x-risk relevant prediction market question suggestions to look out for
General accuracy evaluation of recently resolved Metaculus community predictions on AI timelines by Marius Hobbahn
Shane Legg (DeepMind co-founder): 50% by 2030; some chance within the next 10-30 years
From DeepMind: The road to AGI (S2, Ep5)
Demis Hassabis: 10-20 years from now
From DeepMind: Promise of AI with Demis Hassabis (Ep9)
Eliezer & Paul’s IMO challenge bet: “Paul at <8%, Eliezer at >16% for AI made before the IMO is able to get a gold (under time controls etc. of grand challenge) in one of 2022-2025. Separately, we have Paul at <4% of an AI able to solve the “hardest” problem under the same conditions.”
Eliezer:
“My probability is at least 16% [on the IMO grand challenge falling], though I’d have to think more and Look into Things, and maybe ask for such sad little metrics as are available before I was confident saying how much more.”
Paul Christiano:
“I’d put 4% on “For the 2022, 2023, 2024, or 2025 IMO an AI built before the IMO is able to solve the single hardest problem”
I think the IMO challenge would be significant direct evidence that powerful AI would be sooner, or at least would be technologically possible sooner. I think this would be fairly significant evidence, perhaps pushing my 2040 TAI probability up from 25% to 40% or something like that.”
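As a rough gloss on the size of that update (my arithmetic, not Paul's): in odds form, moving from 25% to 40% corresponds to an implied Bayes factor of about 2 for the IMO evidence.

```latex
\frac{0.25}{0.75} = \frac{1}{3}
\;\longrightarrow\;
\frac{0.40}{0.60} = \frac{2}{3},
\qquad
\text{implied Bayes factor} = \frac{2/3}{1/3} = 2
```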
Ajeya’s “When required computation may be affordable” (from ACT):
Ajeya modeled predicted annual investment in giant AI projects (an upward-sloping curve) against the likely cost of training a human-level AI (a downward-sloping curve).
Eventually these curves meet; the crossing point marks the year the first human-level AI becomes affordable to train (see the toy sketch after the list of anchor weights below).
You can play around with the spreadsheet here.
Ajeya's values:
20% neural net, short horizon
30% neural net, medium horizon
15% neural net, long horizon
5% human lifetime as training data
10% evolutionary history as training data
10% genome as parameter number
(These weights sum to 90%; the remainder is the chance that none of the anchors applies.)
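To make the curve-crossing mechanic concrete, here is a toy sketch in Python. It is not Ajeya's actual model: the starting budget, the growth and decline rates, and the per-anchor training costs below are made-up placeholders, and her spreadsheet works with full probability distributions rather than point estimates.

```python
import math

def crossing_year(budget_0, budget_growth, cost_0, cost_decline, year_0=2022):
    """Year when a growing annual training budget first meets a falling
    training cost: solve budget_0 * g**t == cost_0 / d**t for t, which
    gives t = log(cost_0 / budget_0) / log(g * d)."""
    t = math.log(cost_0 / budget_0) / math.log(budget_growth * cost_decline)
    return year_0 + t

# Anchor weights from the list above; the dollar costs are invented
# placeholders, not Ajeya's estimates.
anchors = {
    "neural net, short horizon":  (0.20, 1e9),
    "neural net, medium horizon": (0.30, 1e12),
    "neural net, long horizon":   (0.15, 1e15),
    "human lifetime":             (0.05, 1e8),
    "evolutionary history":       (0.10, 1e18),
    "genome":                     (0.10, 1e10),
}

budget_0 = 1e7       # placeholder: largest annual training spend today ($)
budget_growth = 1.3  # placeholder: budgets grow 30% per year
cost_decline = 1.25  # placeholder: cost of fixed capability falls 20%/year

for name, (weight, cost_0) in anchors.items():
    year = crossing_year(budget_0, budget_growth, cost_0, cost_decline)
    print(f"{name}: weight {weight:.0%}, curves cross around {year:.0f}")
```

In the real model each anchor yields a distribution over crossing years, and the weights above mix those distributions into the overall forecast.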
Just for fun: Annual Reddit singularity predictions
Thanks for putting in the effort. This is helpful information. I've got a few clarifying questions, though please don't feel obliged to answer them if you don't have the time or don't have a sense of the answer. You've already helped a lot, and I can search for answers elsewhere if need be.
To summarize:
a. The low/near end of the predicted distribution for AI reaching or nearing general intelligence is roughly 5 years out.
b. The median prediction for when super-human AGI, i.e., ‘transformative AI’ or ‘superintelligence,’ will be achieved is 10-20 years out.
c. The average 75th-percentile point across the predicted timelines is over 20 years out.
Is that correct?
The meaning of weakly general AI isn’t clarified on the corresponding Metaculus webpage. I’m guessing its intended meaning is something like: “AI competitive with performance near the recorded human peak across the full range of cognitive tasks, but not performing all such tasks at super-human levels.” Is that accurate?
How much confidence can we have that the timelines converged on by many long-termists in effective altruism are accurately reflected by:
Different kinds of aggregated forecast models from Metaculus or other prediction/forecasting platforms?
Prominent experts and professionals in the field, such as Legg, Hassabis, Yudkowsky, Christiano or Cotra?