In particular, it seems like some of your estimates make more sense to me if I read them as saying “Well, there will likely exist some task that AI systems can’t do.” But I think such claims aren’t very relevant for transformative AI, which would in turn lead to AGI.
By the same token, if the AIs were looking at humans, they might say “Well, there will exist some tasks that humans can’t do,” and of course they’d be right, but the relevant thing is the single non-cherry-picked variable of overall economic impact. The AIs would be wrong to conclude that humans have slow economic growth because we can’t do some tasks that AIs are great at, and the humans would be wrong to conclude that AIs will have slow economic growth because they can’t do some tasks we are great at. The exact comparison is only relevant for assessing things like complementarity, which makes large impacts happen strictly more quickly than they would otherwise.
(This might be related to me disliking the AGI framing, though, and then it’s kind of on OpenPhil for asking about it. They could also have asked about timelines to 100,000x electricity production and I’d be making broadly the same arguments, so in some sense it must be me who is missing the point.)
Yep. We’re using the main definition supplied by Open Philanthropy, which I’ll paraphrase as “nearly all human work at human cost or less by 2043.”
If the definition were more liberal, e.g., AGI as smart as humans, or AI causing world GDP to rise by >100%, we would have forecasted higher probabilities. We expect AI to get wildly more powerful over the next decades and wildly change the face of human life and work. The public is absolutely unprepared. We are very bullish on AI progress, and we think AI safety is an important, tractable, and neglected problem. Creating new entities with the potential to be more powerful than humanity is a scary, scary thing.