For what it’s worth, this isn’t my view. I think AlphaFold will have a much smaller effect on human health and wellbeing than general-purpose digital agents that can substitute for human workers across a variety of jobs.
Medical progress—and economic progress more generally—relies on building out extensive infrastructure for the discovery, development, manufacturing, distribution and delivery of innovations. For example, more spending on medical R&D in 1925 would not have led to widespread MRI machines, because creating MRI machines required building complementary industries, such as large-scale helium liquefaction plants, that would not have arisen through R&D alone. For similar reasons, I predict that better medical AI alone would not be sufficient to reverse aging, cure cancer, or prevent Alzheimer’s.
In fact, I think the issue here is more fundamental than you might think: the very reason EAs are worried about general-purpose digital AI agents is that these agents would be the most useful for accelerating technological progress. Their usefulness is precisely what makes them risky; you can't eliminate the danger without also making them less useful. The two are intrinsically linked.