Thanks, Ben!
It depends on what X is. In most real-world cases I don’t think our imprecision ought to be that extreme. (It will also be vague: not “[0,1]” or “(0.01, 0.99)” but more like “eh, lots of different precise beliefs seem defensible as long as they’re not super close to 1 or 0”, and in that state it will feel reasonable to say we should strictly prefer such an extreme bet.)
But FWIW I do think there are hypothetical cases where incomparability looks correct. Suppose a demon appears to me and says “The F of every X is between 0 and 1. What’s the probability that the F of the next X is less than ½?” I have no clue what X and F mean. In particular, I have no idea if F is in “natural” units that would compel me to put a uniform prior over F-values. Why not a uniform prior over F^2 or F^-100? So it does seem sensible to have maximally imprecise beliefs here, and to say it’s indeterminate whether we should take bets like yours.
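(To make the parameterization point concrete, here’s a minimal sketch of my own, not anything from the paper. I use positive powers F^k as stand-ins, since they keep the reparameterized variable on (0, 1), whereas F^-100 maps (0, 1) onto (1, ∞), where there’s no proper uniform prior at all. A uniform prior over G = F^k implies P(F < ½) = P(G < (½)^k) = (½)^k, so the answer swings from ½ to essentially 0 depending on which variable you treat as “natural”.)

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=1_000_000)  # "uniform prior" draws on (0, 1)

# If the prior is uniform over G = F**k, then F = G**(1/k), and
# P(F < 1/2) = P(G < (1/2)**k) = (1/2)**k.
for k in (1, 2, 10, 100):
    f = u ** (1.0 / k)
    print(f"uniform over F^{k}: P(F < 1/2) ≈ {np.mean(f < 0.5):.4g} "
          f"(exact: {0.5 ** k:.3g})")
```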
Yes, it feels bad not to strictly prefer a bet which pays 10^10 if F < ½. But adopting a precise prior would commit me to turning down other bets that look extremely good on other arbitrarily-chosen priors, which also feels bad.
FWIW, unless you have reason to think otherwise (you may very well think some Fs are more likely than others), there’s a symmetry here between any function F and the function 1-F. If you apply it, you get P(F > ½) = P(1-F < ½) = P(F < ½); and since P(F < ½) + P(F = ½) + P(F > ½) = 1, that gives P(F < ½) = (1 − P(F = ½))/2 ≤ ½, with strict inequality iff P(F = ½) > 0.
If you can rule out P(F = ½) > 0 (say by an additional assumption), the probability is exactly ½. And if the bet were on F ≤ ½ instead of F < ½, the same symmetry gives P(F ≤ ½) = (1 + P(F = ½))/2 ≥ ½, so the bet would be at least fair.
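(Here’s a quick numerical check of that argument, purely illustrative: the Beta(2, 5) starting prior and the 20% atom at ½ are arbitrary choices of mine; the symmetry is imposed by flipping each draw F ↦ 1-F with probability ½.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Arbitrary asymmetric prior on F, then impose the F <-> 1-F symmetry
# by flipping each draw with probability 1/2.
f = rng.beta(2.0, 5.0, size=n)
f = np.where(rng.random(n) < 0.5, 1.0 - f, f)
print(f"continuous case: P(F < 1/2) ≈ {np.mean(f < 0.5):.4f}")  # ≈ 0.5

# Mix in a 20% atom at F = 1/2: now P(F < 1/2) = (1 - 0.2)/2 = 0.4 < 1/2.
f_atom = np.where(rng.random(n) < 0.2, 0.5, f)
print(f"with an atom:    P(F < 1/2) ≈ {np.mean(f_atom < 0.5):.4f}")  # ≈ 0.4
```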
Thanks, Jesse. Is there a way we could actually do this? Like choose some F(X) which is unknown to both of us but guaranteed to be between 0 and 1, and if it’s less than ½ I pay you a dollar, and if it’s greater than ½ you pay me some large amount of money.
I feel pretty confident I would take that bet if the selection of F weren’t obviously antagonistic towards me, but maybe I’m not understanding the types of scenarios you’re imagining.
Good question! Yeah, I can’t think of a real-world process about which I’d want to have maximally imprecise beliefs. (That’s why the example uses a “demon”: with a demon, we have good reason to worry the selection process is adversarial…)
(Is this supposed to be part of an argument against imprecision in general / sufficient imprecision to imply consequentialist cluelessness? Because I don’t think you need anywhere near maximally imprecise beliefs for that. The examples in the paper just use the range [0,1] for simplicity.)