Yep, I think this reasoning is better, and is closer to why I don’t assign 1-ε probability to doom.
The sad thing is that the remaining uncertainty is much harder to work with. Most of the worlds where we turn out fine are worlds where I am deeply confused about a lot of stuff: deeply confused about the drivers of civilization, deeply confused about how to reason well, deeply confused about what I care about and whether AI doom even matters. I find it hard to plan around those worlds.