I won’t try to answer your three numbered points, since they are more than a bit outside my wheelhouse and other people have already started to address them, but I will mention a few things about your preface to them (e.g., Pascal’s mugging).
I was a bit surprised not to see a mention of the St. Petersburg paradox, since that posed the most longstanding challenge to my understanding of expected value. The major takeaways I’ve had for dealing with both the St. Petersburg paradox and Pascal’s mugging (more specifically, with the question “why does this supposedly accurate decision rule seem to lead me to a clearly bad decision?”) are somewhat interrelated and are as follows:
1. Non-linear valuation/utility: money should not be assumed to translate linearly into utility; as the numerical winnings reach massive sums, marginal utility typically drops off sharply. This by itself should mostly address the issue with the lottery choice you mentioned: the “expected payoff/winnings” (in currency terms) is almost meaningless because it fails to reflect the expected utility, which is probably minuscule or negative. Winning $100 trillion likely does not make you much happier than winning $1 trillion (for numerical illustration, suppose 1,000 utils vs. 995 utils), which itself is likely only slightly better than winning $100 billion (say, 950 utils), and so on, whereas losing costs you 40 years (suppose that is something like −100 utils).
2. Bounded bankrolling: with things like the St. Petersburg paradox, my understanding is that the longer you play, the higher your average payoff tends to be. However, that average might still be −$99 by the time you go bankrupt and literally starve to death, after which point you can no longer play.
3. Bounded payoff: in reality, you would expect payoffs to be limited to some reasonable, finite amount. If we suppose that, for whatever reason, they are not limited, then that essentially “breaks the barrier” for other outcomes as well, which brings us to the next point:
4. Countervailing cases: this is really crucial for bringing things together, yet I feel it is consistently underappreciated. Take classic Pascal’s-mugging-type situations, like: “A strange-looking man in a suit walks up to you and says that he will warp up to his spaceship and detonate a super-mega nuke that will eradicate all life on earth if and only if you do not give him $50 (which you have in your wallet), but he will give you $3^^^3 tomorrow if and only if you give him $50.” We could technically/formally suppose the chance he is being honest is nonzero (e.g., 0.0000000001%) and still abide by expected-value reasoning, so long as we also recognize indistinguishably likely cases that push the expected value the opposite way. For example, there is the possibility that he will do the exact opposite of what he says if you give him the money (compare the philosopher’s-God response to Pascal’s wager), or the possibility that the “true” mega-punisher/rewarder is actually just a block down the street, and if you give your money to this random lunatic you won’t have the $50 to give to the true one (compare the “other religions” response to the narrow, Christianity-specific Pascal’s wager). A rough numerical sketch of this cancellation follows below. Ultimately, this is the concept of fighting (imaginary) fire with (imaginary) fire; it occasionally shows up in realms like competitive policy debate (where people make absurd arguments about how some random policy may lead to extinction), and it is a major reason why I have a probability-trimming heuristic for these kinds of situations/hypotheticals.
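To make the cancellation in (4) concrete, here is a quick back-of-the-envelope sketch in Python. The probabilities, the stand-in “huge” utility, and the 1-util value of keeping $50 are all made-up numbers purely for illustration, not anything principled:

```python
# Illustrative sketch of point (4): tiny, symmetric probabilities of
# astronomically good and astronomically bad outcomes cancel out,
# leaving the decision to the ordinary-scale terms.
# All numbers here are made up purely for illustration.

HUGE = 1e30               # stand-in for an astronomically large (dis)utility
p_honest = 1e-12          # chance the mugger does exactly what he says
p_reversed = 1e-12        # indistinguishable chance he does the exact opposite
cost_of_50_dollars = 1.0  # ordinary-scale utility of keeping your $50

# Expected utility of handing over the $50:
eu_pay = p_honest * HUGE + p_reversed * (-HUGE) - cost_of_50_dollars

# Expected utility of refusing:
eu_refuse = p_honest * (-HUGE) + p_reversed * HUGE

print(eu_pay)     # -1.0: the huge terms cancel, you just lose the $50
print(eu_refuse)  # 0.0
```

The astronomically large terms wipe each other out, so the decision ends up being driven by the ordinary-scale consideration of keeping your $50.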
Hi Harrison. I think I agree strongly with (2) and (3) here. I’d argue infinite expected values that depend on (very) large numbers of trials, bankrolls, etc. can and should be ignored. With the St. Petersburg paradox as stated in the link you included, making any vaguely reasonable assumption about the wealth of the casino or the lifetime of the player, the expected value falls to something much less appealing! This is kind of related to my “saving lives” example in my question: if you only get to play once, the expected value becomes basically irrelevant, because the good outcome just doesn’t actually happen. It only starts to be worthwhile when you get to play many times. And hey, maybe you do. If there are 10,000 EAs all doing totally (probabilistically) independent things that each have a 1-in-a-million chance of some huge payoff, we start to get into realms worth thinking about.
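To put rough numbers on the bounded-bankroll point, here is a small sketch. The payout convention (2^k dollars if the first heads lands on toss k) and the bankroll figures are assumptions made purely for illustration, not taken from the linked article:

```python
# Sketch: expected payout of the St. Petersburg game when the casino
# can pay out at most `bankroll` dollars. Convention assumed here:
# if the first heads appears on toss k, the nominal prize is 2**k dollars.
# The bankroll figures below are arbitrary assumptions for illustration.

def capped_expected_value(bankroll, max_tosses=200):
    ev = 0.0
    for k in range(1, max_tosses + 1):
        prob = 0.5 ** k                  # first heads on toss k
        payout = min(2 ** k, bankroll)   # casino can't pay more than it has
        ev += prob * payout
    return ev

for bankroll in (1e6, 1e9, 1e12):
    print(f"bankroll ${bankroll:.0e}: expected payout ~ ${capped_expected_value(bankroll):.2f}")
# Even with a trillion-dollar bankroll the expected payout is only ~$41.
```

Once the casino’s wealth enters the picture, the expected payout grows only logarithmically with the bankroll, which is why any sizeable entry fee stops looking like a bargain.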
Actually, I think it’s worth being a bit more careful about treating low-likelihood outcomes as irrelevant simply because you aren’t able to attempt to get that outcome more often: your intuition might be right, but you would likely be wrong to then conclude “expected utility/value theory is bunk.” Rather than throw out EV, you should figure out whether your intuition is recognizing something that your EV model is ignoring, and if so, figure out what that is. I listed a few example points above; to give another illustration:
Suppose you have the chance to push button X or button Y exactly once: if you push button X, there is a 1⁄10,000 chance that you will save 10,000,000 people from certain death (but a 9,999⁄10,000 chance that they will all still die); if you push button Y, there is a 100% chance that 1 person will be saved (but 9,999,999 people will die). There are definitely some selfish reasons to choose button Y (e.g., you won’t feel guilty the way you would if you pressed button X and everyone still died), and there may also be some non-linearity in the impact of how many people die (refer back to (1) in my original answer). However, if we assume away those other details (e.g., you won’t feel guilty, and the relationship between deaths and utility loss is roughly linear), so that the situation is simply “press button X for a 1⁄10,000 chance of 10,000,000 utils; press button Y for a 100% chance of 1 util,” then the answer is perhaps counterintuitive but still reasonable: without a crystal ball that perfectly tells the future, the optimal strategy is to press button X.
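For completeness, here is the expected-utility arithmetic behind that claim, using the stipulated numbers from the example above (nothing here beyond those assumed utilities):

```python
# Expected-utility arithmetic for the button example, using the
# utilities stipulated above (a sketch, not a general model).

p_success_x = 1 / 10_000
eu_button_x = p_success_x * 10_000_000   # 1/10,000 chance of 10,000,000 utils
eu_button_y = 1.0 * 1                    # certainty of 1 util

print(eu_button_x, eu_button_y)  # 1000.0 1.0
# Button X is ~1,000x better in expectation, even though pressing it
# saves no one 9,999 times out of 10,000.
```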