To answer your question, personally, I think we should probably stick with standard expected value reasoning over the approach you are advocating here. So no, I wouldn't tell you to wear one anyway. But I'm confused about exactly what it is you are advocating.
I'll try to state the objection I have in mind more forcefully:
Suppose we are considering some awful painful experience, X, and some trivial inconvenience, Y. Suppose everyone on earth agrees that, when thinking altruistically about others, 8,000 people having experience X would be worse than 8 billion people having experience Y (that's how bad experience X is, and how trivial experience Y is).
Suppose also that everyone on earth adopts a discount threshold of just over 1 in a million.
Now suppose that everyone on earth is faced with the choice of experiencing Y or accepting a 1 in a million chance of X. Since they have a discount threshold, they all choose to go with the 1 in a million chance of X.
Now, with extremely high probability, ~8,000 people on earth will experience X. Take the perspective of any one individual looking at what's happened to everyone else. They will agree that the situation for everyone else is bad. They should, at least when thinking altruistically, wish that everyone else had chosen to experience Y instead (everyone agrees that it is worse for 8,000 people to experience X than 8 billion to experience Y). But they can't actually recommend that any particular person should have decided any differently, because they have done exactly the same thing themselves!
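To make the numbers concrete, here is a minimal sketch of the arithmetic, assuming (my addition) that each person's 1 in a million chance of X is independent:

```python
import math

# Figures from the thought experiment above.
population = 8_000_000_000   # 8 billion people
p_x = 1e-6                   # per-person chance of experiencing X

# With independent outcomes, the number of people who experience X
# is Binomial(population, p_x).
expected_cases = population * p_x                  # 8,000
std_dev = math.sqrt(population * p_x * (1 - p_x))  # ~89

print(f"Expected number experiencing X: {expected_cases:,.0f}")
print(f"Standard deviation: {std_dev:.1f}")
# The realised count is overwhelmingly likely to land within a few
# hundred of 8,000, which is all the "with extremely high probability,
# ~8,000 people" claim needs.
```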
Something seems wrong here!
Right, gotcha.
I have conflicting intuitions here. In the case as you described it, I want to bite the bullet and say that everyone is acting rationally, and the 8,000 are just unlucky. Something seems off about reducing risk for yourself in service of the project of reducing overall suffering, when you wouldn't do it in service of the project of reducing your own suffering.
That said, if you change the thought experiment so that everyone can experience Y in order to prevent a 1 in a million chance of someone else experiencing X, I'm much more inclined to say that we should integrate as you've described. It seems like the dynamics are genuinely different enough that maybe I can make this distinction coherently?
Re: seatbelts, I was a bit confused; you seemed to be saying that when you integrate over "decisions anyone makes that cause benefit/harm," you collapse back to expected utility theory. I was suggesting that expected utility theory as I understand it does not involve integrating over everyone's decisions, since then e.g. the driver with deflated self-worth in my previous example should wear a seatbelt anyway.
It seems very strange to me to treat reducing someone else's chance of X differently to reducing your own (if you're confident it would affect each of you similarly)! But thank you for engaging with these questions, it's helping me understand your position better, I think.
By "collapsing back to expected utility theory" I only meant that if you consider a large enough reference class of similar decisions, it seems like it will in practice be the same as acting as if you had an extremely low discount threshold? But it sounds like I may just not have understood the original approach well enough.
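To illustrate what I have in mind (with a purely hypothetical bundle size, since nothing above specifies one): once enough similar 1 in a million risks are judged together, the aggregate clears the threshold even though each risk on its own does not.

```python
# Hypothetical illustration: a reference class of similar decisions,
# each carrying an independent 1-in-a-million risk.
n_decisions = 5_000          # assumed bundle size, for illustration only
p_per_decision = 1e-6
threshold = 1.000001e-6      # "just over 1 in a million"

# Probability that at least one of the bundled risks comes up.
p_at_least_one = 1 - (1 - p_per_decision) ** n_decisions   # ~0.5%

print(f"Aggregate risk over {n_decisions:,} decisions: {p_at_least_one:.3%}")
print("Exceeds the discount threshold?", p_at_least_one > threshold)
# Judged one at a time, every risk falls below the threshold and is
# discounted to zero; judged as a class, the aggregate is far above it,
# so the threshold no longer screens anything off. That is the sense in
# which it seems to behave like ordinary expected value reasoning.
```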