Other answers have made what I think of as the key points. I'll try to add to them by pointing to some resources I've found on this matter which weren't already mentioned by others. Note that:
Some of these sources suggest AGI is on the horizon, some suggest it isn't, and some just discuss the matter.
The question of AGI timelines (things like "time until AGI") is related to, but distinct from, the question of "discontinuity"/"takeoff speed"/"foom" (I mention the last of those terms only for historical reasons; I think it's unnecessarily unprofessional). Both questions are relevant when determining strategies for handling AI risk. It would probably be good if the distinction were more often made explicit. The sources I'll mention may sometimes be more about discontinuity-type questions than about AGI timelines.
With those caveats in mind, here are some sources:
My current framework for thinking about AGI timelines (and the subsequent posts in the series) - zhukeepa, 2020
Double Cruxing the AI Foom debate - agilecaveman, 2018
Quick Nate/Eliezer comments on discontinuity - 2018
Arguments about fast takeoff - Paul Christiano, 2018
Likelihood of discontinuous progress around the development of AGI - AI Impacts, 2018
There's No Fire Alarm for Artificial General Intelligence - Eliezer Yudkowsky, 2017 (I haven't yet read this one)
The Hanson-Yudkowsky AI-Foom Debate - various works from 2008-2013 (I haven't yet read most of this)
I've also made a collection of (so far) around 30 "works that highlight disagreements, cruxes, debates, assumptions, etc. about the importance of AI safety/alignment, about which risks are most likely, about which strategies to prioritise, etc." Most aren't primarily focused on timelines, but many relate to that matter.
Oh, also, on the more general question of what to actually do, given a particular belief about AGI timelines (or other existential risk timelines), this technical report by Owen Cotton-Barratt is interesting. One quote:

There are two major factors which seem to push towards preferring more work which focuses on scenarios where AI comes soon. The first is nearsightedness: we simply have a better idea of what will be useful in these scenarios. The second is diminishing marginal returns: the expected effect of an extra year of work on a problem tends to decline when it is being added to a larger total. And because there is a much larger time horizon in which to solve it (and in a wealthier world), the problem of AI safety when AI comes later may receive many times as much work as the problem of AI safety for AI that comes soon. On the other hand one more factor preferring work on scenarios where AI comes later is the ability to pursue more leveraged strategies which eschew object-level work today in favour of generating (hopefully) more object-level work later.
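To make the diminishing-marginal-returns point a bit more concrete, here's a toy sketch (my own illustration, not from the report) which just assumes the value of total work on the problem grows logarithmically: the same extra year of work is worth much more in the "AI comes soon" scenario, where far less total work will have been done.

```python
import math

def marginal_value(existing_work_years: float, extra_years: float = 1.0) -> float:
    """Value of adding `extra_years` of work, assuming total value ~ log(1 + W).

    The log form is just a stand-in for any diminishing-returns curve.
    """
    return math.log(1 + existing_work_years + extra_years) - math.log(1 + existing_work_years)

# Hypothetical totals: much less work gets done before "AI comes soon"
# than before "AI comes later" (longer horizon, wealthier world).
print(marginal_value(10))   # ~0.087 -- an extra year in the "soon" scenario
print(marginal_value(200))  # ~0.005 -- an extra year in the "later" scenario
```

Of course, the real comparison also has to weight each scenario by its probability and factor in the leverage consideration mentioned at the end of the quote; this only illustrates the shape of the returns curve.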