This is an interesting approach!
I'm still a bit confused about exactly how to apply this method in practice though. If I am understanding it correctly, then if someone knows they will only drive 1 mile in their entire life, you would say that wearing a seatbelt becomes the wrong thing to do for them? On the other hand, if they know that they will drive 500 miles, then wearing a seatbelt for those 500 miles might make sense?
But what if they are in a situation where they do not know how long their journeys are going to be? They are taking 1 car journey in their life, and maybe the car will stop after 1 mile, or maybe after 1000. Maybe they have some subjective probability distribution over these possible journey lengths. How do they make their decision in this situation? I'd be interested to see a worked example here!
I'm also still confused about how you decide on the groupings in practice. If I know that I will travel 250 miles by car in my life, and 250 miles by bike, and each risk is below the discount threshold, does that mean I should wear neither a seatbelt nor a bike helmet? Or should I wear both, if the combined risk from driving and cycling together is enough to cross the threshold? Should I treat seatbelts/bike-helmets as one decision or two separate ones?
If I'm treating it as two separate decisions, then this feels arbitrary (why not split the seatbelt decision into driving on main roads vs driving on side roads, to push each piece under the threshold when the whole would have been over it?). But if I treat them together, then it feels like my actual discount threshold in practical situations is going to become far smaller than the one I decide on a priori (since I make a lot of decisions in my life!)
Thanks!
Indeed, if all we're considering is the decision to wear seatbelts or not, I would say that wearing a seatbelt for a lifetime total of 1 mile is (maybe) fanatical, and 500 is (maybe) not. In practice, your second question about groupings will come into play; see below. If you don't know how many miles you'll drive and have a probability distribution, I suppose you'd treat it the same way as the scenarios I discuss in the post: discretize the distribution according to your discount threshold so you don't end up discounting everything, then take the expected value as normal and see if it's worth all the seatbelt applications. The results will depend heavily on the shape of the distribution and your numbers for the discount threshold, value of life, etc.
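To make that a bit more concrete, here's a rough Python sketch of how I picture the calculation going. Every number in it is an illustrative assumption made up for this comment (the fatality risk per mile, how much a seatbelt helps, the value of a life, the nuisance cost of buckling up, the threshold, and the journey distribution itself), not a figure from the post.

```python
THRESHOLD = 1e-6          # probabilities below this get treated as zero
RISK_PER_MILE = 1e-8      # unbelted fatality risk per mile (made up)
SEATBELT_REDUCTION = 0.5  # fraction of that risk a seatbelt removes (made up)
VALUE_OF_LIFE = 1e7       # in whatever utility units you like
COST_PER_BUCKLE = 0.01    # nuisance of one seatbelt application
MILES_PER_TRIP = 10       # so 500 lifetime miles means ~50 applications

# Subjective distribution over lifetime miles driven, already coarse enough
# that every branch carries more than THRESHOLD worth of probability.
lifetime_miles = {1: 0.25, 100: 0.25, 500: 0.25, 1000: 0.25}

expected_benefit = 0.0
expected_cost = 0.0
for miles, prob in lifetime_miles.items():
    if prob < THRESHOLD:
        continue  # a branch this unlikely would be discounted away entirely
    # Lifetime death risk averted by always buckling up on this branch.
    risk_averted = miles * RISK_PER_MILE * SEATBELT_REDUCTION
    # As I understand the approach, a branch where even the risk averted is
    # sub-threshold contributes no benefit, only the nuisance cost.
    if risk_averted >= THRESHOLD:
        expected_benefit += prob * risk_averted * VALUE_OF_LIFE
    trips = max(1, miles // MILES_PER_TRIP)
    expected_cost += prob * trips * COST_PER_BUCKLE

print(f"benefit ~ {expected_benefit:.2f}, cost ~ {expected_cost:.2f}")
print("wear the seatbelt" if expected_benefit > expected_cost else "skip it")
```

With these particular made-up numbers the seatbelt wins, but you can just as easily pick numbers where every branch's risk falls below the threshold and the whole benefit gets discounted away.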
The grouping issue is tricky. It seems to me like we ought to consider the decisions together, since I'm (more or less) indifferent between dying in a bike crash vs a car crash. Perhaps we ought to group all "decisions that might kill you" together, and think of it somewhat like the repeated trade offer described in the post; each time you contemplate going helmet- or seatbelt-less, you have the option to gain some utility at the cost of a slightly higher risk of dying, and the reasonable thing to do is integrate over your decisions (although it'll be slightly more complicated, since maybe you expect to e.g. drive many more miles in the future and need to account for that somehow).
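As a toy illustration of why the grouping matters (the per-mile risks and the threshold here are made up, and the point is only the structure of the comparison):

```python
THRESHOLD = 5e-6        # illustrative discount threshold
car_risk = 250 * 1e-8   # lifetime risk from 250 miles of driving (made up)
bike_risk = 250 * 1e-8  # lifetime risk from 250 miles of cycling (made up)

# Treated as two separate decisions, each risk falls below the threshold
# and gets discounted, so you would wear neither.
print(car_risk >= THRESHOLD, bike_risk >= THRESHOLD)  # False False

# Pooled into one "decisions that might kill you" bucket, the combined
# risk crosses the threshold and ordinary expected-value reasoning applies.
print(car_risk + bike_risk >= THRESHOLD)              # True
```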
As mentioned in the post, integrating like this in situations of repeated decision-making can mean that you reject arbitrarily small changes in probability, even those below your discount threshold. I wouldn't say that this effect means that your practical discount threshold is arbitrarily small.
Thanks for the reply!
On the grouping issue, suppose we take your suggestion of grouping all "decisions that might kill you" into one, and suppose that everyone on earth follows this policy. Suppose also that there is some precaution against painful death (like seatbelt wearing) that everyone decides not to take, in order to gain some trivial benefit. Suppose that, integrating over their own life, this decision makes sense, because the risk is still below their discount threshold, whereas on expected value terms it does not.
It might then be the case that if everyone follows this policy independently, then globally, over billions of people, we would still expect thousands of people to end up dying avoidable painful deaths. Which seems bad!
This seems like a strong case for integrating not just over "decisions that might kill you", but over "decisions that anyone takes that might kill them"... and I think through similar appeals you could imagine extending that to "decisions anyone takes that might cause any benefit/harm to any sentient being", at which point, in a big universe, have you not just arrived back at expected utility theory again?
Hmm, I don't really feel the force of this objection. My decision to wear my own seatbelt is causally unconnected to both everyone else's decisions and whatever consequences everyone else faces, and everyone else's decisions are unconnected to mine. It seems odd that I should then be integrating over those decisions, regardless of what decision theory/heuristic I'm using.
For example, suppose I use expected value theory, and I value my own life a little less than everyone else's. I judge that the trivial inconvenience of putting on a seatbelt genuinely is not worth the decreased risk to my life, although I would counsel other people to wear seatbelts given the higher value of their lives (and thus upon reflection support a universal policy of seatbelt wearing). Do you think I ought to integrate over everyone's decisions and wear a seatbelt anyway? If so, I think you're arguing for something much stronger than standard expected value reasoning.
To answer your question: personally, I think we should probably stick with standard expected value reasoning over the approach you are advocating here. So no, I wouldn't tell you to wear one anyway. But I'm confused about exactly what it is you are advocating.
I'll try to make the objection I am trying to articulate more forceful:
Suppose we are considering some awful, painful experience, X, and some trivial inconvenience, Y. Suppose everyone on earth agrees that, when thinking altruistically about others, 8000 people having experience X would be worse than 8 billion people having experience Y (that's how bad experience X is, and how trivial experience Y is).
Suppose also that everyone on earth adopts a discount threshold of just over 1 in a million.
Now suppose that everyone on earth is faced with the choice of experiencing Y or facing a 1 in a million chance of X. Since a 1 in a million chance falls below their discount threshold, they all choose to go with the chance of X.
Now, with extremely high probability, ~8,000 people on earth will experience X. Take the perspective of any one individual looking at what's happened to everyone else. They will agree that the situation for everyone else is bad. They should, at least when thinking altruistically, wish that everyone else had chosen to experience Y instead (everyone agrees that it is worse for 8000 people to experience X than 8 billion to experience Y). But they can't actually recommend that any particular person should have decided any differently, because they have done exactly the same thing themselves!
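Just to spell out the arithmetic behind that "extremely high probability" claim, here's a quick sanity check (nothing here is from your post; it's just the binomial mean and spread):

```python
import math

# Number of people who experience X if 8 billion people each independently
# take a 1-in-a-million gamble: Binomial(n, p).
n, p = 8_000_000_000, 1e-6
mean = n * p                      # 8000.0 expected cases of X
std = math.sqrt(n * p * (1 - p))  # ~89, so the realized count is almost
print(mean, std)                  # certainly within a few hundred of 8000
```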
Something seems wrong here!
Right, gotcha.
I have conflicting intuitions here. In the case as you've described it, I want to bite the bullet and say that everyone is acting rationally, and the 8000 are just unlucky. Something seems off about reducing risk for yourself in service of the project of reducing overall suffering when you wouldn't do it in service of the project of reducing your own suffering.
That said, if you change the thought experiment so that everyone can experience Y in order to prevent a 1 in a million chance of someone else experiencing X, I'm much more inclined to say that we should integrate as you've described. It seems like the dynamics are genuinely different enough that maybe I can make this distinction coherently?
Re: seatbelts, I was a bit confused; you seemed to be saying that when you integrate over "decisions anyone makes that cause benefit/harm", you collapse back to expected utility theory. I was suggesting that expected utility theory as I understand it does not involve integrating over everyone's decisions, since then e.g. the driver with deflated self-worth in my previous example should wear a seatbelt anyway.
It seems very strange to me to treat reducing someone else's chance of X differently from reducing your own (if you're confident it would affect each of you similarly)! But thank you for engaging with these questions; it's helping me understand your position better, I think.
By "collapsing back to expected utility theory" I only meant that if you consider a large enough reference class of similar decisions, it seems like it will in practice be the same as acting as if you had an extremely low discount threshold? But it sounds like I may just not have understood the original approach well enough.