How about this: fanaticism is fine in principle, but in practice we never face any actual fanatical choices. For any actions with extremely large value V, we estimate p < 1/V, so that the expected value is <1, and we ignore these actions based on standard EV reasoning.
It could just happen always to have been the case, or you could have a strong prior like this, although I don’t think you can simply declare it to be necessarily true; you should accept that evidence can in principle overcome the prior. It would be motivated reasoning to decide what probabilities to assign to empirical questions just to make sure you don’t end up accepting a normative implication you don’t like. (Then again, even choosing your prior this way may be motivated reasoning, unless you can justify it some other way.)
Also, I think there have been serious proposals for Pascalian cases, e.g. see this paper.
Furthermore, you’d have to assign p=0 when V=∞, which means perfect certainty in an empirical claim, which seems wrong.
Also related is this GiveWell post, where they model the value/good accomplished as normally distributed with mean X and standard deviation also equal to X, where X is your value estimate (so the estimate’s error scales with its size). In this way, once you combine the estimate with a prior, larger estimates of X become so suspicious that they can actually reduce the posterior expected value.
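To see the mechanics, here’s a minimal sketch of that kind of adjustment (my own toy prior and numbers, not GiveWell’s code): a conjugate normal-normal update in which the estimate’s standard deviation equals the estimate itself.

```python
# Toy sketch of the Bayesian adjustment described in the GiveWell post:
# your estimate x of the good accomplished has standard deviation x,
# so larger estimates carry proportionally more uncertainty.
# Prior mean/sd below are invented for illustration.

def posterior_mean(x, prior_mean=1.0, prior_sd=10.0):
    """Conjugate normal-normal update with likelihood sd equal to x."""
    estimate_var = x ** 2        # sd of the estimate is x itself
    prior_var = prior_sd ** 2
    # Standard precision-weighted average of the prior mean and the estimate.
    return (prior_var * x + estimate_var * prior_mean) / (prior_var + estimate_var)

for x in [1, 10, 100, 1_000, 1_000_000]:
    print(f"estimate {x:>9}: posterior mean {posterior_mean(x):.2f}")
# The posterior mean rises at first, then falls back toward the prior
# mean as the estimate grows: huge claimed values barely move you.
```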
Yes, I’m saying that it happens to be the case that, in practice, fanatical tradeoffs never come up.
Hm, doesn’t claiming V=∞ also require perfect certainty? I.e., knowing that V is literally infinite rather than just some very large number.
In real cases, I think V should have a distribution with support ranging over the whole real line, including both positive and negative infinity. This holds for both the fanatical and the non-fanatical option (compared to each other or to some fixed third option). The difference is that most of the (difference in) expected value of the fanatical option comes from a region of very low probability. This way, I’m not assigning perfect certainty to infinity, just a higher probability of infinite value under the fanatical option than under the non-fanatical one.
(I’m kind of skipping over some subtleties about dealing with infinities. I think there are reasonable approaches, although they aren’t perfectly satisfying.)
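As a toy illustration of that last point (all values and probabilities invented, and ignoring the infinite tails for simplicity): two options can be nearly identical except for a sliver of probability on an astronomical payoff, and that sliver accounts for essentially the entire gap in expected value.

```python
# Toy comparison: each option is a discrete distribution over values
# (probabilities invented, infinite tails omitted for simplicity).
safe = {0.0: 0.5, 10.0: 0.5}                           # modest, likely payoffs
fanatical = {0.0: 0.5, 10.0: 0.5 - 1e-9, 1e12: 1e-9}   # sliver on a huge V

def ev(dist):
    return sum(p * v for v, p in dist.items())

gap = ev(fanatical) - ev(safe)   # ~1000
tail = 1e-9 * 1e12               # contribution of the low-probability region
print(gap, tail)                 # essentially the whole gap comes from the tail
```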
I guess the problem is that V=∞ is nonsensical. We can talk about V→∞, but not equality.
I think V=∞ is logically possible when you aggregate over space and time, and I think we shouldn’t generally assign probability 0 to anything that’s logically possible (except where a measure is continuous; I think this requirement had a name, but I forget). Pascal’s wager and Dyson’s wager illustrate this.
We have reason to believe the universe is infinite in extent, and there’s a chance that it’s infinite temporally. You might claim that our lightcone is finite/bounded and we can’t affect anything outside of it (setting aside multiverses), but this is an empirical claim, so we should give it some chance of being false. That we could affect an infinite region of spacetime is also not a logical impossibility, so we shouldn’t absolutely rule it out.
Yep, we’ve got pretty good evidence that our spacetime will have infinite 4D volume and, if you arranged happy lives uniformly across that volume, we’d have to say that the outcome is better than any outcome with merely finite total value. Nothing logically impossible there (even if it were practically impossible).
That said, assigning value “∞” to such an outcome is pretty crude and unhelpful, and what it means will depend entirely on how we’ve defined ∞ in our number system. So what I think we should do in such a case is not say that V equals such and such, but rather ditch the value function once you’ve left the domain where it works: deal directly with your set of possible outcomes, your lotteries (probability measures over that set), and a betterness relation, which might sometimes follow a value function but might also extend to outcomes beyond the function’s domain. That’s what people tend to do in the infinite aggregation literature (including the social choice papers that consider infinite time horizons), and for good reason.
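For concreteness, here’s one toy way to encode that move (my own sketch, not drawn from any particular paper): represent outcomes directly, and define a strict betterness relation that defers to the value function on its finite domain but handles infinite outcomes by a separate rule, leaving same-signed infinite outcomes incomparable.

```python
# Toy sketch: a betterness relation over outcomes that follows a value
# function where one exists, but extends to outcomes outside its domain.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Outcome:
    value: Optional[float] = None  # finite total value; None if outside the domain
    sign: int = 0                  # +1 / -1 for infinitely good / bad outcomes

def better(a: Outcome, b: Outcome) -> bool:
    """Strict betterness: follows the value function where it applies."""
    if a.sign != b.sign:        # infinitely good beats anything finite,
        return a.sign > b.sign  # which beats anything infinitely bad
    if a.sign != 0:
        return False            # same-signed infinite outcomes: left incomparable
    return a.value > b.value    # finite domain: defer to the value function

print(better(Outcome(sign=1), Outcome(value=10.0 ** 100)))  # True
```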
“we shouldn’t generally assign probability 0 to anything that’s logically possible (except where a measure is continuous; I think this requirement had a name, but I forget)”

You’re probably (pun not intended) thinking of Cromwell’s rule.
Yes, thanks!
That’d be fine for the paper, but I do think we face at least some decisions in which EV theory gets fanatical. The example in the paper—Dyson’s Wager—is intended as a mostly realistic such example. Another one would be a Pascal’s Mugging case in which the threat was a moral one. I know I put P>0 on that sort of thing being possible, so I’d face cases like that if anyone really wanted to exploit me. (That said, I think we can probably overcome Pascal’s Muggings using other principles.)