For Pause AI or Stop AI to succeed, pausing / stopping needs to be a viable solution. I think some people working on AI capabilities who believe in existential risk may (perhaps?) be motivated by the thought that the risk of civilisational collapse is high without AI, so it's worth taking the risk of misaligned AI to prevent that outcome.
If this really is cruxy for some people, it's possible this doesn't get noticed because people take it as a background assumption and don't tend to discuss it directly, so they don't realize how much they disagree and how crucial that disagreement is.
Credit to AGB for (in this comment) reminding me where to find the Scott Alexander remarks that pushed me a lot in this direction:
Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. I don't spend much time worrying about any of these, because I think they'll take a few generations to reach crisis level, and I expect technology to flip the gameboard well before then. But if we ban all gameboard-flipping technologies (the only other one I know is genetic enhancement, which is even more bannable), then we do end up with bioweapon catastrophe or social collapse. I've said before I think there's a ~20% chance of AI destroying the world. But if we don't get AI, I think there's a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela. That doesn't mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much. But it's something on my mind.
(emphasis mine)
My original shortform tried to be measured / neutral, but I also want to say that I found this passage very alarming when I first read it, and it's wildly more pessimistic than I am by default. If this is true, it's really important to know; but I made my shortform because if it's false, that's really important to know too. I hope we can look into it in a way that moves the needle on our best understanding.