Thank you, this is helpful!
I just wanted to add BlueDot Impact's "AI Governance Fast Track Course" to the list of AI governance courses. It's a distilled version of their 12-week course; I've just taken it, coming from a background in law, and I can highly recommend it.
If you, dear reader of this comment, have any questions about it, or about BlueDot Impact's "AI Governance 12-week Course" (which I will take beginning next week), I'm happy to try to answer them from a participant's perspective.
Thank you very much for the review and aggregation of all these forecasts! Very nice!
I just have one point to add:
As the first aggregate prediction, you mention AI Impacts' 2023 survey of machine learning researchers. Your post gives the impression that it produced an aggregate forecast of 50% by 2047 for human-level AI. I think this is at least imprecise, if not incorrect.
AI Impacts elicited timelines for human-level performance in two ways: it asked some participants how soon they expect "high-level machine intelligence" (HLMI) and asked others how soon they expect "full automation of labor" (FAOL). The resulting aggregate forecasts gave a 50% chance of HLMI by 2047 and a 50% chance of FAOL by 2116. Your post ignores that AI Impacts uses two different concepts for human-level AI and reports only the HLMI aggregate under the headline of human-level AI.
I think this is unfortunate because the difference matters. One of your main points is the claim that experts think human-level AI is likely to arrive in your lifetime. However, most of us will probably not be alive in 2116.
The "maximize expected choiceworthiness" approach has also been called the "expected moral value" (EMV) approach to axiological uncertainty in Greaves and Ord, "Moral uncertainty about population axiology" (2017).
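To make the EMV idea concrete (in my own notation, not the paper's): if an agent has credences $c_1, \dots, c_n$ in moral theories $T_1, \dots, T_n$, and theory $T_i$ assigns choiceworthiness $CW_i(A)$ to an option $A$, then

$$\mathrm{EMV}(A) = \sum_{i=1}^{n} c_i \cdot CW_i(A),$$

and the approach recommends choosing the option with the highest EMV.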
In their paper (pp. 2-3), they also briefly discuss different approaches to moral uncertainty (just like this article). In addition to the "My Favourite Theory" approach, which relates to confidence, they also describe a similar approach on which an agent chooses not according to their credences but according to their all-out beliefs.