Taken literally, “accelerationist” implies that you think the technology isn’t currently progressing fast enough, and that steps should be taken to make it go faster. This seems a bit odd, because one of your key arguments (which I actually agree with) is that we learn to adapt to technology as it rolls out. But obviously it’s harder to adapt to rapid change than to gradual progress.
How fast do you think AI progress should be going, and what changes should be made to get there?
Not trying to answer on the author’s behalf, but it seems relatively clear to me that differential development is possible here. So far, most AI-driven advances in science seem to have come from specialized biological applications like AlphaFold, which are distinct from the LLMs that have created most of the problems, both in the eyes of “doomers” and in the eyes of people warning about current, non-extinction dangers. So in principle, the development of beneficial tools could be accelerated while the development of LLMs is slowed down.
For what it’s worth, this isn’t my view. I think AlphaFold will have a much smaller effect on human health and wellbeing than general-purpose digital agents that can substitute for human workers across a variety of jobs.
Medical progress—and economic progress more generally—relies on building out extensive infrastructure for the discovery, development, manufacturing, distribution and delivery of innovations. For example, more spending on medical R&D in 1925 would not have led to widespread MRI machines, because creating MRI machines required building complementary industries, such as large-scale helium liquefaction plants, that would not have arisen through R&D alone. For similar reasons, I predict that better medical AI alone would not be sufficient to reverse aging, cure cancer, or prevent Alzheimer’s.
In fact, I think the issue here is more fundamental than it might appear: EAs worry about general-purpose digital AI agents precisely because those agents would be the most useful for accelerating technological progress. Their utility is exactly what makes them risky; you can’t eliminate the danger without also making them less useful. The two things are intrinsically linked.