I really like this model and will probably use it to think about hingeyness quite a lot now!
I’ll make an attempt to give my own idea of hingeyness; my guess is that hingeyness is a new enough idea that there isn’t really a correct answer out there yet.
You can think of every choice in this model as changing the distribution of future utilities (not just at the next time step, but summed across all time). Hingier choices are the ones that change this distribution more than other choices do. For example, a choice where one future branch includes −1000 and a bunch of 0s and the other includes 1000 and a bunch of 0s is really hingey, as it changes the portfolio from [−1000, many 0s, 1000] to either [−1000, many 0s] or [many 0s, 1000]. A choice between [many 0s, 10] and [more 0s, another 10] is not hingey at all and has no effect on history. A good rule of thumb is to think of a choice as hingier the more it reduces the range of possible utilities.
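To make the rule of thumb concrete, here is a minimal sketch of one possible formalization. The function names and the range-based metric are my own illustrative assumptions, not anything from the original model: each option at a choice point leads to a set of reachable total utilities, and hingeyness is measured as how much committing to an option shrinks the range still reachable.

```python
def utility_range(outcomes):
    """Spread of the reachable total utilities."""
    return max(outcomes) - min(outcomes)

def hingeyness(options):
    """Worst-case reduction in the reachable utility range
    caused by committing to one option at this choice point."""
    all_outcomes = [u for opt in options for u in opt]
    before = utility_range(all_outcomes)
    return max(before - utility_range(opt) for opt in options)

# The examples from the paragraph above:
very_hingey = hingeyness([[-1000, 0, 0, 0], [0, 0, 0, 1000]])
not_hingey = hingeyness([[0, 0, 10], [0, 0, 0, 10]])
print(very_hingey)  # 1000: either option halves the 2000-wide range
print(not_hingey)   # 0: neither option changes the range at all
```

On this metric the −1000/1000 choice scores highly (either option forecloses one extreme) while the two near-identical portfolios score zero, matching the intuition above.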
As an extreme example, choosing between powerful world governments, where one values utopias and the other values torture, seems very hingey: before, we had a large range of possible futures; after, we’re irreversibly locked into a really positive or really negative state.
I’ll apply this model to some of your questions below.
> All older years have more ripple effects, but does that make 1 more hingey?
I think in the diagram above choice 1 is coincidentally quite hingey, because you choose between a world where you’re guaranteed a utility of 7 or 8 down the line and a world where you get something between 0 and 6. The range of possible outcomes for one option is very different from the range for the other. You can imagine a similar timeline where the hingey moments are somewhere else (I’ve done an ugly drawing of one such world). In that timeline choice 1 doesn’t matter at all in the long run, because the full range of final options is still open to you, but the second-tier choices (and the one labelled 0 in the third tier) matter a lot: once you make them, your range changes in big ways.
> The absolute utility that 1 and 2 could add are the same, but the relative utility is very different. So, what is more important for the hingeyness?
I think neither is the key value here. Hingeyness is about leverage over the whole human trajectory, so the immediate changes in utility are not the only thing we should consider. We care more about how a choice affects aggregate expected utility over all the remaining future states. This is why irreversible choices seem so concerning.
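The aggregate view can be sketched as follows, under my own simplifying assumption that future branches are equally likely: a choice is judged by the expected sum of utilities over its whole remaining subtree, not by its immediate utility change alone. The tree values here are hypothetical.

```python
def expected_future_utility(node):
    """node = (immediate_utility, [child nodes]); leaves have no children.
    Returns immediate utility plus the average of the children's values,
    i.e. the expected total utility of the remaining subtree."""
    utility, children = node
    if not children:
        return utility
    return utility + sum(map(expected_future_utility, children)) / len(children)

# Hypothetical comparison: a small immediate gain that forecloses a
# high-value future vs. a small immediate loss that keeps it open.
grab_now = (5, [(0, []), (0, [])])
keep_open = (-1, [(0, []), (100, [])])
print(expected_future_utility(grab_now))   # 5.0
print(expected_future_utility(keep_open))  # 49.0
```

The option with the worse immediate utility wins on the aggregate measure, which is the sense in which immediate changes alone mislead.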
One last thought here is that hingeyness should probably also include some measure of tractability. It could be that one choice has a large effect on the future but we don’t have much capacity to affect it. For example, suppose we discovered an asteroid heading towards Earth which we couldn’t stop. There’s no point in considering something the hinge of history if we can’t act on it! Currently I don’t think that’s in the model, but maybe you could add it by imposing costs on each choice? My guess is this model could become pretty mathematically rigorous and useful for thinking about hingeyness.
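One hypothetical way the cost idea could work, purely as a sketch (the discounting scheme and names are my own assumptions): attach a cost to acting on each choice, and discount its hingeyness by how affordable acting on it is.

```python
def tractable_hingeyness(raw_hingeyness, cost, budget):
    """Zero out choices we can't afford to influence at all;
    otherwise scale hingeyness by the fraction of budget left over."""
    if cost > budget:
        return 0.0  # the unstoppable-asteroid case: huge effect, no leverage
    return raw_hingeyness * (1 - cost / budget)

# Same large raw effect on the future, very different tractability:
print(tractable_hingeyness(1000, cost=200, budget=100))  # 0.0
print(tractable_hingeyness(1000, cost=20, budget=100))   # 800.0
```

Under this scheme the unstoppable asteroid drops out of consideration entirely, while a cheap-to-influence choice keeps most of its raw hingeyness.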