You could make this precise by thinking of attractor states. E.g., if I’d scored less well in any one exam as a kid, or if some polite chit-chat had gone slightly differently, I think the difference gets rounded down to 0, because that doesn’t end up affecting any decisions.
I’m thinking of some kinds of extinction risk as attractor states that we have to exert some pressure to avoid, and so any initial random choice that averts them has to be large rather than butterfly-sized. E.g., unaligned AGI seems like one of those states.
For example, if you went back in time 1000 years and painted someone’s house a different color, my probability distribution for the weather here and now would look like the historical average for weather here, rather than like the weather in the original timeline.
This would surprise me, but I could just have the wrong intuitions here. Even granting it, though, small initial changes would have to snowball fast enough, and far enough, to eventually avert an x-risk.
Hmm. At the very least, if you have some idealized particles bouncing around in a box, minutely changing the direction of one has, as time goes to infinity, the large counterfactual effect of fully randomizing the state of the box (or, if you prefer, something like redrawing the state from the distribution over possible states of the box).
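To make that concrete, here’s a minimal toy sketch of mine (not a claim about any particular physical system, and all the constants are arbitrary). An empty rectangular box isn’t actually chaotic on its own, so the sketch adds a circular scatterer in the middle (a Sinai billiard), which is the standard way to get the exponential divergence I’m gesturing at:

```python
# Toy illustration of sensitive dependence on initial conditions:
# one particle bouncing in a unit box with a circular scatterer in
# the middle (a Sinai billiard). The convex scatterer is what makes
# nearby trajectories diverge exponentially.

import math

BOX = 1.0            # box is [0, BOX] x [0, BOX]
CENTER = (0.5, 0.5)  # scatterer center
RADIUS = 0.2         # scatterer radius
DT = 1e-3            # time step


def step(pos, vel):
    """Advance one particle by DT, reflecting off walls and the scatterer."""
    x, y = pos[0] + vel[0] * DT, pos[1] + vel[1] * DT
    vx, vy = vel

    # Specular reflection off the four walls.
    if x < 0.0 or x > BOX:
        vx, x = -vx, min(max(x, 0.0), BOX)
    if y < 0.0 or y > BOX:
        vy, y = -vy, min(max(y, 0.0), BOX)

    # Specular reflection off the circular scatterer: flip the velocity
    # component along the outward normal and push the particle back out.
    dx, dy = x - CENTER[0], y - CENTER[1]
    dist = math.hypot(dx, dy)
    if dist < RADIUS:
        nx, ny = dx / dist, dy / dist
        dot = vx * nx + vy * ny
        vx, vy = vx - 2 * dot * nx, vy - 2 * dot * ny
        x, y = CENTER[0] + nx * RADIUS, CENTER[1] + ny * RADIUS

    return (x, y), (vx, vy)


# Two copies of the "same" particle, differing only by a tiny angle.
angle, eps = 0.3, 1e-9
p1, v1 = (0.1, 0.1), (math.cos(angle), math.sin(angle))
p2, v2 = (0.1, 0.1), (math.cos(angle + eps), math.sin(angle + eps))

for i in range(1, 50001):
    p1, v1 = step(p1, v1)
    p2, v2 = step(p2, v2)
    if i % 5000 == 0:
        sep = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
        print(f"t = {i * DT:5.1f}  separation = {sep:.3e}")
```

The two copies start with directions differing by 10⁻⁹ radians; the printed separation should grow by orders of magnitude over time until it saturates at roughly the size of the box, i.e. the perturbed trajectory ends up about as far from the original as a freshly redrawn one.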
I’d be surprised if our world were much more stable than that (meaning something like: characterized by attractor states), but this seems like a hard and imprecise empirical question, and I respect that your intuitions differ.
Like, suppose that instead of “preventing extinction”, we were talking about “preventing the Industrial Revolution”. Sure, there are butterfly effects that could avoid that, but it seems weird.