Other answers have made what I think of as the key points. I’ll try to add value by pointing to some resources on this matter that others haven’t already mentioned. Note that:
Some of these sources suggest AGI is on the horizon, some suggest it isn’t, and some just discuss the matter.
The question of AGI timelines (things like “time until AGI”) is related to, but distinct from, the question of “discontinuity”/“takeoff speed”/“foom” (I mention the last of those terms only for historical reasons; I think it’s unnecessarily unprofessional). Both questions are relevant when determining strategies for handling AI risk. It would probably be good if the distinction were made explicit more often. The sources I’ll mention may sometimes be more about discontinuity-type questions than about AGI timelines.
With those caveats in mind, here are some sources:
My current framework for thinking about AGI timelines (and the subsequent posts in the series)—zhukeepa, 2020
Double Cruxing the AI Foom debate—agilecaveman, 2018
Quick Nate/Eliezer comments on discontinuity—2018
Arguments about fast takeoff—Paul Christiano, 2018
Likelihood of discontinuous progress around the development of AGI—AI Impacts, 2018
There’s No Fire Alarm for Artificial General Intelligence—Eliezer Yudkowsky, 2017 (I haven’t yet read this one)
The Hanson-Yudkowsky AI-Foom Debate—various works from 2008-2013 (I haven’t yet read most of this)
I’ve also made a collection of (so far) around 30 “works that highlight disagreements, cruxes, debates, assumptions, etc. about the importance of AI safety/alignment, about which risks are most likely, about which strategies to prioritise, etc.” Most aren’t primarily focused on timelines, but many relate to that matter.
Oh, also, on the more general question of what to actually do, given a particular belief about AGI timelines (or other existential risk timelines), this technical report by Owen Cotton-Barratt is interesting. One quote:
There are two major factors which seem to push towards preferring more work which focuses on scenarios where AI comes soon. The first is nearsightedness: we simply have a better idea of what will be useful in these scenarios. The second is diminishing marginal returns: the expected effect of an extra year of work on a problem tends to decline when it is being added to a larger total. And because there is a much larger time horizon in which to solve it (and in a wealthier world), the problem of AI safety when AI comes later may receive many times as much work as the problem of AI safety for AI that comes soon. On the other hand one more factor preferring work on scenarios where AI comes later is the ability to pursue more leveraged strategies which eschew object-level work today in favour of generating (hopefully) more object-level work later.