I think of AGI (and human-level intelligence) as the cloud, and superintelligence as being above the cloud. They are useful concepts, despite their vagueness. But they’re markedly less useful when you get close to them. [...]
For my purposes, I think the key threshold is when the system is capable enough to cause dramatic, civilisational changes. For example, the point where AI could take over from humanity if misaligned, or has made 50% of people permanently unemployable, or has doubled the global rate of technological progress. I focus on this threshold because I think it matters most for planning our strategies and careers.
I think the example milestones you mention differ significantly from one another, and each is vague in its own way, compounding rather than resolving the definitional problems you raised earlier in the essay.
For example, I don’t know how to operationalize the point where “AI could take over from humanity”, and I suspect people will disagree for years about whether that threshold has been reached, much as they have debated for years whether we have already achieved AGI. Similarly, it is unclear what it means for 50% of people to be “permanently unemployable” as opposed to merely unemployed.
If your goal is to ground the debate about timelines in something measurable and uncontroversial, it is worth thinking more carefully about milestones that actually serve that purpose. Otherwise, you will likely find that these milestones, too, become markedly less useful as we get close to them.
For what it’s worth, this isn’t my view. I think AlphaFold will have a much smaller effect on human health and wellbeing than general-purpose digital agents that can substitute for human workers across a variety of jobs.
Medical progress—and economic progress more generally—relies on building out extensive infrastructure for the discovery, development, manufacturing, distribution and delivery of innovations. For example, more spending on medical R&D in 1925 would not have led to widespread MRI machines, because creating MRI machines required building complementary industries, such as large-scale helium liquefaction plants, that would not have arisen through R&D alone. For similar reasons, I predict that better medical AI alone would not be sufficient to reverse aging, cure cancer, or prevent Alzheimer’s.
In fact, I think the issue here is more fundamental than it might appear: the very reason EAs are worried about general-purpose digital AI agents is that these agents would be the most useful for accelerating technological progress. Their utility is precisely what makes them risky; you cannot eliminate the danger without also making them less useful. The two are intrinsically linked.