"Returning to the epistemic perspective though: let's suppose you do trust your future credences, and you want to avoid the Bayesian 'gut problems' I discussed above. In that case, at least in theory, there are hard constraints on how you should expect your beliefs to change over time, even as you move from far away to up close.
In particular, you should never think that there's more than a 1/x chance that your credence will increase by x times: i.e., never more than a 50% chance that it'll double, never more than a 10% chance that it'll 10x. And if your credence is very small, then even very small additive increases can easily amount to sufficiently substantive multiplicative increases that these constraints bite. If you move from .01% to .1%, you've only gone up .09% in additive terms (only nine parts in ten thousand). But you've also gone up by a factor of 10, something you should've been at least 90% sure would never happen."
I couldn't follow the reasoning here, can you explain further?
Suppose you think there is a 50% chance that your credence will, say, go from 10% to 30%+. Then you believe that with 50% probability you live in a "30%+ world." But then your expected credence is at least 50% * 30% = 15%, rather than the 10% you originally assigned.
"Good forecasts should be a martingale" is another (more general) way to say the same thing, in case the alternative phrasing is helpful for other people.
I imagine a proof (by contradiction) would work something like this:
Suppose you place > 1/x probability on your credences moving up by a factor of x. Then the expectation of your future beliefs is > prior * x * 1/x = prior, so your expected credence increases. With our remaining probability mass, can we anticipate some evidence in the other direction, such that our beliefs still satisfy conservation of expected evidence? The lowest our credence can go is 0, but even if we place all of our remaining < 1 - 1/x probability on 0, we would still find expected future beliefs > prior * x * 1/x + 0 * [remaining probability] = prior. So we would necessarily violate conservation of expected evidence, and we conclude that Joe's rule holds. (Equivalently: this is just Markov's inequality applied to your future credence, whose expectation must equal your current credence.)
Note that all of these comments apply, symmetrically, to people nearly certain of doom. 99.99%? OK, so less than a 1% chance that you ever drop to 99% or lower?
But I don't think this proof works for beliefs decreasing (because we don't have the lower bound of 0). Consider this counterexample:
prior = 10%
probability of decreasing to 5% (a factor of 2) = 60% > 1/2 -> violates the rule
probability of increasing to 17.5% = 40%
Then, expectation of future beliefs = 5% * 60% + 17.5% * 40% = 10%, so conservation of expected evidence is satisfied.
So conservation of expected evidence doesn't seem to imply Joe's rule in this direction? (Maybe it holds once you introduce some restrictions on your prior, like in his 99.99% example, where you can't place the remaining probability mass any higher than 1, so the rule still bites.)
This asymmetry seems weird? I'd love for someone to clear this up.