Civilizational collapse would be a historically unprecedented event, and the future is very hard to predict;
I don’t find this reasoning very compelling, mostly on the basis of “this can’t go on”-type logic. Like, we basically know that the next century will be “historically unprecedented”. Indeed, it would be really surprising if the next century were not unprecedented, since humanity has never remotely been in a similar starting position.
We can’t sustain current growth levels, stagnation at any specific point would be quite weird, and sudden collapse would, as you say, also be historically unprecedented. I don’t have any stories about the future that seem plausible to me that are not historically unprecedented, so I don’t really understand how something being unprecedented could establish a prior against it. And there are definitely outside-view stories you can tell under which civilizational collapse would be more business-as-usual than other types of stories.
We are in the middle of a huge exponential growth curve. Any path from here seems quite wild. This means something being wild can’t be the primary reason why something has a low prior.
A better argument is that the wildness of the next century means our models of the future are untrustworthy, which should make us pretty suspicious of any claim that something is the P = 1 - ε outcome without a watertight case for the proposition.
There doesn’t seem to be such a watertight case for AI takeover. Most threat models[1] rest heavily on the assumption that transformative AI will be single-mindedly optimizing for some (misspecified or mislearned) utility function, as opposed to e.g. following a bunch of contextually-activated policies[2]. While this is plausible, and thus warrants significant effort to prevent, it’s far from clear that this is even the most likely outcome “absent highly specific conditions”, never mind a near certainty.
[1] e.g. Cotra and Ngo et al
[2] as proposed e.g. by shard theory
Yep, I think this reasoning is better, and is closer to why I don’t assign 1-ε probability to doom.
The sad thing is that the remaining uncertainty is much harder to work with. Like, I think most of the worlds where we are fine are worlds where I am deeply confused about a lot of stuff: deeply confused about the drivers of civilization, about how to reason well, about what I care about, and about whether AI doom even matters. I find it hard to plan around those worlds.
Is this about GDP growth or something else? Sustaining 2% GDP growth for a century (or a few) seems reasonably plausible?
I agree that one or two centuries is pretty plausible, but I think it starts getting quite wild within a few more. 300 years of 2% growth is ~380x. 400 years of 2% growth is ~2750x.
To get there, you pretty quickly need at least a solar-system-spanning civilization, then quite quickly a galaxy-spanning one, and then you just can’t do it within the rules of known physics at all anymore. I agree that 2 centuries of 2% growth is not totally implausible without anything extremely wild happening, but all of that would of course still involve a huge amount of “historically unprecedented” things happening.
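The compounding figures above are easy to sanity-check: N years of growth at rate r multiplies the starting size by (1 + r)^N. A minimal sketch (the function name is my own, not anything from the thread):

```python
# Total multiplier after `years` of compound growth at annual rate `rate`.
def growth_factor(rate: float, years: int) -> float:
    return (1 + rate) ** years

for years in (100, 200, 300, 400):
    print(f"{years} years at 2%: ~{growth_factor(0.02, years):.0f}x")
```

This reproduces the ~380x figure for 300 years, and shows how steeply the multiplier climbs with each additional century at the same rate.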
Makes sense.
My point with the observation you quoted wasn’t “This would be unprecedented, therefore there’s a very low prior probability.” It was more like: “It’s very hard to justify >90% confidence on anything without some strong base rate to go off of. In this case, we have no base rate to go off of; we’re pretty wildly guessing.” I agree something weird has to happen fairly “soon” by zoomed-out historical standards, but there are many possible candidates for what the weird thing is (I also endorse dsj’s comment below).