This link could also be useful for learning how Yudkowsky & Hanson think about the issue: https://intelligence.org/ai-foom-debate
Essentially, Yudkowsky is very worried about AGI (‘we’re dead in 20-30 years’ worried) because he expects AI progress to accelerate rapidly once AI itself starts contributing to further AI development. Hanson was (is?) considerably less worried.