Taken literally, “accelerationist” implies that you think the technology isn’t currently progressing fast enough, and that steps should be taken to make it go faster. This seems a bit odd, because one of your key arguments (which I actually agree with) is that we learn to adapt to technology as it rolls out. But it’s obviously harder to adapt to very rapid change than to gradual progress.
How fast do you think AI progress should be going, and what changes should be made to get there?
Not trying to answer on the author’s behalf, but it seems relatively clear to me that differential development is possible here. So far, most AI-driven advances in science seem to have come from biological applications like AlphaFold, which are distinct from the LLMs that have created most of the problems, both in the eyes of “doomers” and in the eyes of people warning about current non-extinction dangers. So in theory, the development of beneficial tools can be accelerated while the development of LLMs is slowed down.