I have conflicting intuitions here. In the case as you described it, I want to bite the bullet and say that everyone is acting rationally, and the 8000 are just unlucky. Something seems off about reducing risk for yourself in service of the project of reducing overall suffering, when you wouldn’t do it in service of the project of reducing your own suffering.
That said, if you change the thought experiment so that everyone can experience Y in order to prevent a 1 in a million chance of someone else experiencing X, I’m much more inclined to say that we should integrate as you’ve described. It seems like the dynamics are genuinely different enough that maybe I can make this distinction coherently?
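(Just to pin down the arithmetic I'm picturing behind the case: I'm assuming a population on the order of 8 billion, each facing a 1-in-a-million chance of X, which is where I take the ~8000 unlucky people to come from; the population figure and the discount threshold below are my own stand-ins, not anything from your setup.)

```python
# Rough sketch of how I'm picturing the numbers. The population size
# and the per-decision discount threshold are my own assumptions,
# not taken from the original thought experiment.

population = 8_000_000_000   # assumed: everyone faces the choice
p_X = 1e-6                   # each person's chance of experiencing X
threshold = 1e-5             # assumed per-decision discount threshold

# Seen one decision at a time, the risk falls below the threshold,
# so a discounting rule says no one needs to accept Y:
accept_Y_individually = p_X >= threshold   # False

# Seen in aggregate, roughly 8000 people end up experiencing X:
expected_X_cases = population * p_X        # 8000.0

print(accept_Y_individually, expected_X_cases)
```

The arithmetic is identical whether Y is taken on to reduce your own risk or someone else's; the question is just whether the self-directed and other-directed versions license different attitudes toward it.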
Re: seatbelts, I was a bit confused; you seemed to be saying that when you integrate over “decisions anyone makes that cause benefit/harm,” you collapse back to expected utility theory. I was suggesting that expected utility theory, as I understand it, does not involve integrating over everyone’s decisions; if it did, then e.g. the driver with deflated self-worth in my previous example would have to wear a seatbelt anyway.
It seems very strange to me to treat reducing someone else’s chance of X differently from reducing your own (if you’re confident it would affect each of you similarly)! But thank you for engaging with these questions; it’s helping me understand your position better, I think.
By ‘collapsing back to expected utility theory’ I only meant that if you consider a large enough reference class of similar decisions, it seems like it will in practice be the same as acting as if you had an extremely low discount threshold? But it sounds like I may just not have understood the original approach well enough.
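Concretely, here's the kind of toy calculation I had in mind (the specific probabilities, the threshold, and the "ignore anything rarer than the threshold" rule are placeholders of my own, not a claim about your actual approach):

```python
# Toy illustration of why a per-decision discount threshold seems to
# wash out once the reference class of similar decisions gets large.
# All numbers here are placeholders chosen for illustration.

p_single = 1e-6          # chance of the bad outcome per decision
threshold = 1e-4         # discount threshold: ignore anything rarer
n_decisions = 1_000_000  # size of the reference class

# Decision by decision, the rule says the risk is negligible:
per_decision_matters = p_single >= threshold          # False

# Over the whole class, the chance of at least one bad outcome is
# large, so the same rule applied to the aggregate says it matters:
p_at_least_once = 1 - (1 - p_single) ** n_decisions   # ~0.63
aggregate_matters = p_at_least_once >= threshold      # True

print(per_decision_matters, p_at_least_once, aggregate_matters)
```

If that's right, then for a large enough reference class the threshold stops doing any work, which is all I meant by "collapsing back" to expected utility theory.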
Right, gotcha.