That sounds plausible. I do think of ACX as much more 'accelerationist' than the doomer circles, for lack of a better term. Here's a more recent post from October 2023 informing that impression; the excerpt below probably does a better job than I can of adding nuance to Scott's position.
Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. I don't spend much time worrying about any of these, because I think they'll take a few generations to reach crisis level, and I expect technology to flip the gameboard well before then. But if we ban all gameboard-flipping technologies (the only other one I know is genetic enhancement, which is even more bannable), then we do end up with bioweapon catastrophe or social collapse. I've said before I think there's a ~20% chance of AI destroying the world. But if we don't get AI, I think there's a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela. That doesn't mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much. But it's something on my mind.
https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate