Thanks for the thoughtful comment!
Re point 1: I agree that the likelihood and expected impact of transformative AI exist on a spectrum. I didn’t mean to imply certainty about timelines; I simply chose not to argue for specific timelines in this post.
Regarding the specific points: they seem plausible, but they rest mostly on base rates and social dynamics. I think many people’s views, especially among those working on AI, have shifted from being shaped primarily by abstract arguments to being informed by observable trends in AI capabilities and investment.
Thanks for the comment! I might be missing something, but GPT-style chatbots are built on large language models, which play a key role in scaling toward AGI. I do think extrapolating progress from them is valuable, but I also agree that tying discussions of future AI systems too closely to current models’ capabilities can be misleading.
That said, my post intentionally rests on a more limited claim: that AI will transform the world in significant ways relatively soon. This claim seems both more likely to be true and more foreseeable. In contrast, assumptions about a world ‘incredibly radically’ transformed by superintelligence are less likely and less foreseeable. There are many arguments for why you should work on AI safety, and I agree with a lot of them. I’m mainly trying to reach the EAs who buy into the limited claim but currently act as if they don’t.
Regarding the example: it would likely be a mistake to focus only on current AI capabilities for education. However, it could be important to seriously evaluate scenarios like ‘AI teachers better than every human teacher, soon’.