For Pause AI or Stop AI to succeed, pausing or stopping needs to be a viable solution. I think some people working on AI capabilities who believe in existential risk may be motivated, at least in part, by the thought that the risk of civilisational collapse is high without AI, so it's worth taking the risk of misaligned AI to prevent that outcome.
If this really is cruxy for some people, it might go unnoticed because they treat it as a background assumption and don't discuss it directly, so they don't realize how much they disagree, or how crucial that disagreement is.
Credit to AGB for (in this comment) reminding me where to find the Scott Alexander remarks that pushed me a lot in this direction:
Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. I don’t spend much time worrying about any of these, because I think they’ll take a few generations to reach crisis level, and I expect technology to flip the gameboard well before then. But if we ban all gameboard-flipping technologies (the only other one I know is genetic enhancement, which is even more bannable), then we do end up with bioweapon catastrophe or social collapse. I’ve said before I think there’s a ~20% chance of AI destroying the world. But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela. That doesn’t mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much. But it’s something on my mind.
(emphasis mine)
My original shortform tried to be measured and neutral, but I also want to say I found this passage very alarming when I first read it, and it's wildly more pessimistic than my default view. If it's true, that's really important to know, but I made my shortform because if it's false, that's really important to know too. I hope we can look into it in a way that moves the needle on our best understanding.