Report on Semi-informative Priors for AI timelines (Open Philanthropy)

This is a linkpost for https://www.openphilanthropy.org/blog/report-semi-informative-priors

I’ve cross-posted the introduction so people can see what it’s about. Happy to respond to questions and comments here!

One of Open Phil’s major focus areas is technical research and policy work aimed at reducing potential risks from advanced AI.

To inform this work, I have written a report developing one approach to forecasting when artificial general intelligence (AGI) will be developed. By AGI, I mean computer program(s) that can perform virtually any cognitive task as well as any human, for no more money than it would cost for a human to do it. The field of AI is largely understood to have begun at Dartmouth in 1956, and since its inception one of its central aims has been to develop AGI.[1]

How should we forecast when powerful AI systems will be developed? One approach is to construct a detailed estimate of the development requirements and when they will be met, drawing heavily on evidence from AI R&D. My colleague Ajeya Cotra has developed a framework along these lines.

We think it’s useful to approach the problem from multiple angles, and so my report takes a different perspective. It doesn’t take into account the achievements of AI R&D and instead makes a forecast based on analogous historical examples.

In brief:

  • My framework estimates pr(AGI by year X): the probability we should assign to AGI being developed by the end of year X. (A toy baseline calculation of this kind is sketched just after this list.)

  • I use the framework to make low-end and high-end estimates of pr(AGI by year X), as well as a central estimate.

  • pr(AGI by 2100) ranges from 5% to 35%, with my central estimate around 20%.

  • pr(AGI by 2036) ranges from 1% to 18%, with my central estimate around 8%.

    • The probabilities for the next few decades are elevated by the current rapid growth in the number of AI researchers and in the computation used in AI R&D.

  • These probabilities should be treated with caution, for two reasons:

    • The framework ignores some of our evidence about when AGI will happen. It restricts itself to outside view considerations—those relating to how long analogous developments have taken in the past. It ignores evidence about how good current AI systems are compared to AGI, and how quickly the field of AI is progressing. It does not attempt to give all-things-considered probabilities.

    • The predictions of the framework depend on a number of highly subjective judgement calls. There aren’t clear historical analogies to the development of AGI, and interpreting the evidence we do have is difficult. Other authors would have made different judgements and arrived at somewhat different probabilities. Nonetheless, I believe thinking about these issues has made my probabilities more reasonable.

  • We have made an interactive tool where people can specify their own inputs to the framework and see the resulting pr(AGI by year X).
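To give a flavour of this kind of calculation, here is a toy baseline (not the model used in the report): Laplace's rule of succession, treating each calendar year since 1956 as one failed "trial" at building AGI. The report's framework generalizes this rule, e.g. by allowing a lower first-trial probability and a different number of virtual successes, and by defining trials in terms of researcher-years or compute rather than calendar years, so its estimates differ from this baseline's.

```python
# Toy Laplace-rule baseline for pr(AGI by year X) -- an illustration,
# not the report's actual model, which generalizes this rule.

def pr_agi_by(year_x: int, start_year: int = 1956, current_year: int = 2021) -> float:
    """P(AGI by end of year_x), treating each calendar year since
    start_year as one failed trial under Laplace's rule of succession."""
    n = current_year - start_year  # observed failed trials so far
    k = year_x - current_year      # future trials up to year X
    # With a uniform prior on the per-trial success probability,
    # P(all k future trials fail | n failures) = (n + 1) / (n + k + 1).
    return 1 - (n + 1) / (n + k + 1)

print(f"pr(AGI by 2036) = {pr_agi_by(2036):.0%}")  # ~19%
print(f"pr(AGI by 2100) = {pr_agi_by(2100):.0%}")  # ~54%
```

This uninformative baseline gives noticeably higher probabilities than my central estimates above, which is one motivation for moving to semi-informative priors.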

The structure of the rest of this post is as follows:

  • First I explain what kinds of evidence my framework does and does not take into account.

  • Then I explain where my results come from on a high level, without getting into the maths (more here).

  • I give some other high-level takeaways from the report (more here).

  • I describe my framework in greater detail, including the specific assumptions used to derive the results (more here).

  • Finally, I link to reviews of the report by three academics (more here).

(Read the rest of this post.)