I’m curating this post, but encourage people to look at the others if they’re interested.
Things I really appreciate about this post:
I think discussions of different models or forecasts and how they interact happen in lots of different places, and syntheses of these forecasts and models are really useful.
The full report includes a tool that lets you give more or less weight to different forecasts and see the resulting weighted-average forecast.
I really appreciate the summaries of the different approaches (also in the full report), and that these summaries flag potential weaknesses (like the fact that the AI Impacts survey had a 17% response rate).
This is a useful insight:
“The inside-view models we reviewed predicted shorter timelines (e.g. bioanchors has a median of 2052) while the outside-view models predicted longer timelines (e.g. semi-informative priors has a median over 2100). The judgment-based forecasts are skewed towards agreement with the inside-view models, and are often more aggressive (e.g. Samotsvety assigned a median of 2043)”
The visualization (although it took me a little while to parse it; I think it might be useful to e.g. also provide simplified visuals that show fewer approaches)
Other notes:
I do wish it were easier to tell how independent these different approaches/models are. I like the way model-based forecasts and judgment-based forecasts are separated, which already helps (I assume that e.g. the Metaculus estimate incorporates others' forecasts and the models' outputs).
I think some of the conversations people have about timelines focus too much on what the timelines look like and less on “what does this mean for how we should act.” I don’t think this is a weakness of this lit review — this lit review is very useful and does what it sets out to do (aggregate different forecasts and explain different approaches to forecasting transformative AI) — but I wanted to flag this.
Some excellent content on AI timelines and takeoff scenarios has come out recently:
This literature review
Tom Davidson’s What a compute-centric framework says about AI takeoff speeds—draft report (somewhat more technical)
[Our World in Data] AI timelines: What do experts in artificial intelligence expect for the future? (Roser, 2023) — link-posted by _will_
And more (see here)
Thank you Lizka, this is really good feedback.