Actually, I think it’s worth being a bit more careful about treating low-likelihood outcomes as irrelevant simply because you won’t get repeated attempts at them: your intuition might be right, but you would likely be wrong to then conclude “expected utility/value theory is bunk.” Rather than throw out EV, you should figure out whether your intuition is recognizing something that your EV model is ignoring, and if so, what that is. I listed a few example points above; to give another illustration:
Suppose you have the chance to push button X or button Y exactly once: if you push button X, there is a 1⁄10,000 chance that you will save 10,000,000 people from certain death (but a 9,999⁄10,000 chance that they will all still die); if you push button Y, there is a 100% chance that 1 person will be saved (but the other 9,999,999 people will die). There are certainly some selfish reasons to choose button Y (e.g., you won’t feel guilty, as you might if you pressed button X and everyone still died), and there may also be some non-linearity in the impact of how many people die (refer back to (1) in my original answer). However, if we assume away those other details (e.g., you won’t feel guilty, and the mapping from deaths to utility loss is roughly linear), so that the situation is simply “press button X for a 1⁄10,000 chance of 10,000,000 utils; press button Y for a 100% chance of 1 util,” then the answer is perhaps counterintuitive but still reasonable: without a crystal ball that perfectly tells the future, the optimal strategy is to press button X.
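To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not anything from the original discussion; the variable names and the Monte Carlo framing are made up) that computes the expected value of each button and then simulates many *distinct* one-shot decisions of this type, showing why an always-press-X policy comes out far ahead in aggregate even though no individual gamble is ever repeated:

```python
import random

# Expected value of each one-shot choice (utils ~ lives saved in this toy setup)
P_X, PAYOFF_X = 1 / 10_000, 10_000_000   # button X: rare, huge upside
P_Y, PAYOFF_Y = 1.0, 1                   # button Y: certain, tiny upside

ev_x = P_X * PAYOFF_X   # 1,000 utils
ev_y = P_Y * PAYOFF_Y   # 1 util
print(f"EV(X) = {ev_x}, EV(Y) = {ev_y}")

# Monte Carlo: many *different* one-shot decisions, each faced exactly once.
# An agent who always picks the higher-EV option does far better in aggregate,
# even though it never gets to retry any individual gamble.
random.seed(0)
trials = 1_000_000
total_x = sum(PAYOFF_X if random.random() < P_X else 0 for _ in range(trials))
total_y = PAYOFF_Y * trials
print(f"Always-X total: {total_x:,} utils vs always-Y total: {total_y:,} utils")
```

The point of the simulation is just that the “you only get one try” intuition conflates a single gamble with a single policy: any individual press of X almost certainly saves no one, but across the many unrelated decisions a person (or a community) faces, consistently taking the 1,000-util bet over the 1-util certainty saves vastly more in expectation.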