Hmm, I don’t really feel the force of this objection. My decision to wear my own seatbelt is causally unconnected to both everyone else’s decisions and whatever consequences everyone else faces, and everyone else’s decisions are unconnected to mine. It seems odd that I should then be integrating over those decisions, regardless of what decision theory/heuristic I’m using.
For example, suppose I use expected value theory, and I value my own life a little less than everyone else’s. I judge that the trivial inconvenience of putting on a seatbelt genuinely is not worth the decreased risk to my life, although I would counsel other people to wear seatbelts given the higher value of their lives (and thus upon reflection support a universal policy of seatbelt wearing). Do you think I ought to integrate over everyone’s decisions and wear a seatbelt anyway? If so, I think you’re arguing for something much stronger than standard expected value reasoning.
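To put toy numbers on the asymmetry I have in mind, here is a minimal sketch; all of the figures below (crash probability, life valuations, inconvenience cost) are illustrative assumptions, not claims about actual risks:

```python
# Toy expected-value comparison for the seatbelt case.
# All numbers are made up purely to illustrate the asymmetry.
P_BELT_SAVES_LIFE = 1e-7      # assumed per-trip chance the belt saves a life
VALUE_OWN_LIFE = 1e5          # the driver's (deflated) valuation of their own life
VALUE_OTHER_LIFE = 1e6        # how the driver values anyone else's life
BELT_INCONVENIENCE = 0.02     # assumed cost of buckling up, in the same units

def should_wear_belt(value_of_life: float) -> bool:
    """Standard expected value: wear the belt iff the expected life-value
    preserved exceeds the inconvenience of putting it on."""
    return P_BELT_SAVES_LIFE * value_of_life > BELT_INCONVENIENCE

print("Driver, deciding for themselves:", should_wear_belt(VALUE_OWN_LIFE))     # False
print("Driver's advice to anyone else :", should_wear_belt(VALUE_OTHER_LIFE))   # True
```

With these (assumed) numbers, the same expected-value calculation tells the driver to skip the belt for themselves while counselling everyone else to wear one.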
To answer your question, personally, I think we should probably stick with standard expected value reasoning over the approach you are advocating here. So no, I wouldn’t tell you to wear one anyway. But I’m confused about exactly what it is you are advocating.
I’ll try to state the objection I’m articulating more forcefully:
Suppose we are considering some awful painful experience, X, and some trivial inconvenience, Y. Suppose everyone on earth agrees that, when thinking altruistically about others, 8,000 people having experience X would be worse than 8 billion people having experience Y (that’s how bad experience X is, and how trivial experience Y is).
Suppose also that everyone on earth adopts a discount threshold of just over 1 in a million.
Now suppose that everyone on earth is faced with the choice of experiencing Y or facing a 1 in a million chance of X. Since the 1 in a million chance falls below their discount threshold, they all discard it and choose the chance of X over the certainty of Y.
Now, with extremely high probability, ~8,000 people on earth will experience X. Take the perspective of any one individual looking at what’s happened to everyone else. They will agree that the situation for everyone else is bad. They should, at least when thinking altruistically, wish that everyone else had chosen to experience Y instead (everyone agrees that it is worse for 8,000 people to experience X than 8 billion to experience Y). But they can’t actually recommend that any particular person should have decided any differently, because they have done exactly the same thing themselves!
Something seems wrong here!
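To spell out the arithmetic, here is a minimal sketch; the disutility numbers BADNESS_X and BADNESS_Y are made up, chosen only to satisfy the stipulation that 8,000 cases of X are worse in aggregate than 8 billion cases of Y:

```python
# Sketch of the thought experiment's arithmetic with assumed disutility numbers.
import math

POPULATION = 8_000_000_000      # everyone on earth
P_X = 1e-6                      # chance of X if you skip the trivial inconvenience Y
BADNESS_X = 10_000_000.0        # assumed disutility of one person suffering X
BADNESS_Y = 1.0                 # assumed disutility of one person suffering Y
# Chosen so that 8,000 * BADNESS_X > 8,000,000,000 * BADNESS_Y, as stipulated.

# If everyone takes the 1-in-a-million gamble, the number of X cases is
# Binomial(POPULATION, P_X), which concentrates tightly around its mean.
expected_x_cases = POPULATION * P_X                   # = 8,000
std_dev = math.sqrt(POPULATION * P_X * (1 - P_X))     # ~89, so ~8,000 cases is near-certain

aggregate_if_all_gamble = expected_x_cases * BADNESS_X    # expected aggregate badness
aggregate_if_all_pick_y = POPULATION * BADNESS_Y          # certain aggregate badness

print(f"Expected X cases if everyone gambles: {expected_x_cases:.0f} (std dev ~{std_dev:.0f})")
print(f"Aggregate badness, everyone gambles : {aggregate_if_all_gamble:.3g}")
print(f"Aggregate badness, everyone picks Y : {aggregate_if_all_pick_y:.3g}")
# Everyone agrees the first outcome is worse in aggregate, yet each person's
# threshold heuristic tells them to discard the 1e-6 risk and take the gamble.
```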
Right, gotcha.
I have conflicting intuitions here. In the case as you described it, I want to bite the bullet and say that everyone is acting rationally, and the 8,000 are just unlucky. Something seems off about reducing risk for yourself in service of the project of reducing overall suffering, when you wouldn’t do it in service of the project of reducing your own suffering.
That said, if you change the thought experiment so that everyone can experience Y in order to prevent a 1 in a million chance of someone else experiencing X, I’m much more inclined to say that we should integrate as you’ve described. It seems like the dynamics are genuinely different enough that maybe I can make this distinction coherently?
Re: seatbelts, I was a bit confused; you seemed to be saying that when you integrate over “decisions anyone makes that cause benefit/harm,” you collapse back to expected utility theory. I was suggesting that expected utility theory as I understand it does not involve integrating over everyone’s decisions, since then e.g. the driver with deflated self-worth in my previous example should wear a seatbelt anyway.
It seems very strange to me to treat reducing someone else’s chance of X differently from reducing your own (if you’re confident it would affect each of you similarly)! But thank you for engaging with these questions; it’s helping me understand your position better, I think.
By ‘collapsing back to expected utility theory’ I only meant that if you consider a large enough reference class of similar decisions, it seems like it will in practice be the same as acting as if you had an extremely low discount threshold? But it sounds like I may just not have understood the original approach well enough.
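Roughly, the toy calculation I have in mind is the one below; the size of the reference class is an assumed number, picked only for illustration:

```python
# Sketch of the 'reference class' point: a single 1-in-a-million risk falls
# under the threshold, but integrated over many similar decisions the
# compounded risk does not. Numbers are assumptions for illustration.
THRESHOLD = 1.1e-6                       # 'just over 1 in a million'
per_decision_risk = 1e-6
decisions_in_reference_class = 100_000   # assumed lifetime of similar choices

single = per_decision_risk
compounded = 1 - (1 - per_decision_risk) ** decisions_in_reference_class

print(f"One decision in isolation : {single:.2e} (below threshold: {single < THRESHOLD})")
print(f"Whole reference class     : {compounded:.2%} chance of at least one bad outcome")
# Acting on the compounded figure for every member of the class is, in effect,
# acting as if your per-decision discount threshold were far lower than 1e-6.
```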