Thanks for putting in the effort. This is helpful information. I’ve got a few clarifying questions, though please don’t feel obliged to answer them if you don’t have the time or don’t have a sense of the answer. You’ve already helped a lot, and I can search for answers elsewhere if need be.
To summarize:
a. The low/near end of the predicted distribution for when AI reaches or nears general intelligence is roughly 5 years out.
b. The median prediction for when super-human AGI (i.e., ‘transformative AI’ or ‘superintelligence’) will be achieved is 10-20 years out.
c. On average, the 75th percentile of the predicted timeline distributions falls more than 20 years out.
Is that correct?
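(For my own understanding, here is a minimal sketch of how I’m reading those three summary points, as percentiles of a single aggregated timeline distribution. The numbers are made up purely for illustration, not actual Metaculus data, and I’m assuming the “low/near end” corresponds to something like the 10th percentile.)

```python
import numpy as np

# Hypothetical samples from an aggregated community forecast,
# expressed as "years from now until the milestone resolves".
# These values are invented for illustration only.
years_until_milestone = np.array([4, 6, 8, 11, 13, 15, 18, 22, 27, 35])

low_end = np.percentile(years_until_milestone, 10)  # rough "low/near end" (point a)
median = np.percentile(years_until_milestone, 50)   # median prediction (point b)
upper = np.percentile(years_until_milestone, 75)    # 75th percentile (point c)

print(f"~10th percentile: {low_end:.0f} years out")
print(f"median:           {median:.0f} years out")
print(f"75th percentile:  {upper:.0f} years out")
```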
The meaning of “weakly general AI” isn’t clarified on the corresponding Metaculus page. I’m guessing its intended meaning is something like: “AI competitive with peak recorded human performance across the full range of cognitive tasks, though not necessarily performing all of them at super-human levels.” Is that accurate?
How much confidence can we have that the timelines converged on by many long-termists in effective altruism are accurately reflected by:
a. Different kinds of aggregated forecast models from Metaculus or other prediction/forecasting platforms?
b. Prominent experts and professionals in the field, such as Legg, Hassabis, Yudkowsky, Christiano, or Cotra?