Huh, fwiw this is not my anecdotal experience. I would suggest that this is because I spend more time around doomers than you and doomers are very influenced by Yudkowsky's "don't fight over which monkey gets to eat the poison banana first" framing, but that seems contradicted by your example being ACX, who is also quite doomer-adjacent.
That sounds plausible. I do think of ACX as much more "accelerationist" than the doomer circles, for lack of a better term. Here's a more recent post from October 2023 informing that impression; the excerpt below probably does a better job than I can of adding nuance to Scott's position.
Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. I don't spend much time worrying about any of these, because I think they'll take a few generations to reach crisis level, and I expect technology to flip the gameboard well before then. But if we ban all gameboard-flipping technologies (the only other one I know is genetic enhancement, which is even more bannable), then we do end up with bioweapon catastrophe or social collapse. I've said before I think there's a ~20% chance of AI destroying the world. But if we don't get AI, I think there's a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela. That doesn't mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much. But it's something on my mind.
https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate