Thanks, I haven’t fully come to grips with this idea yet; looking forward to the next posts! :)
The way I came to think about this is to visualize the sets of future worlds that I’m choosing between. For example, imagine I’m choosing between two options that I think result in
A) a 10% chance of saving the lives of 100 people, and
B) a 99% chance of saving the life of 1 random person.
Then I would imagine choosing between
A) 100[1] worlds, 10 of which have 100 fewer deaths, and
B) 100 worlds, 99 of which have 1 fewer death.
Then I’d choose A) because of EUM, and think something something multiverse: I imagine that this big set of post-decision worlds actually exists, and that I expect 10% of it to actually be made up of worlds where 100 lives were saved. I should feel equally sad in whichever of these worlds I end up in, because why would it matter which of them I personally end up in? And I should feel a little extra sad if I end up in a world where the 100 lives weren’t saved, because it’s a small update that the fraction of good worlds was smaller than 10%.
Choosing n=100 is theoretically arbitrary, but in practice it’s probably the easiest to think about while still capturing a lot of different worlds. Maybe this relates to the point of Pascal’s mugging: when it’s not even one world in 100, or in 1000, most people should be wary of acting on such speculations, because they won’t be able to evaluate the world’s likelihood or what such a rare world would concretely look like.
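The arithmetic behind this visualization can be sketched in a few lines of Python (the numbers are just the ones from the example above; the "100 worlds" picture is the same probabilities scaled up to a world count):

```python
# Expected lives saved under each option, using the numbers from the example.
p_a, lives_a = 0.10, 100   # option A: 10% chance of saving 100 people
p_b, lives_b = 0.99, 1     # option B: 99% chance of saving 1 person

ev_a = p_a * lives_a  # ~10 expected lives saved
ev_b = p_b * lives_b  # ~0.99 expected lives saved

# The 100-worlds framing is the same arithmetic: out of n = 100
# post-decision worlds, roughly p * n are the "good" ones.
n = 100
good_worlds_a = round(p_a * n)  # 10 worlds with 100 fewer deaths
good_worlds_b = round(p_b * n)  # 99 worlds with 1 fewer death

# EUM picks A, since ev_a > ev_b, even though in 90 of the 100
# imagined worlds option A saved no one.
print(ev_a, ev_b)
```

So picking A means accepting that you'll most likely land in one of the 90 worlds where nothing was saved, which is exactly the feeling the visualization is meant to make palatable.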