A classical example of belief inertia goes like this: suppose we have a coin of unknown bias. It seems rationally permissible for one’s representor on the probability of the coin landing heads to be (0, 1).[6] Now suppose one starts flipping the coin. No matter how many flips one observes (and how many land heads), the posterior representor seems stuck at (0, 1): for any sequence of observations and any element of the posterior representor, we can find an element of the prior representor that would update to it.
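To make the mechanism concrete, here is a minimal sketch in Python. It models the representor as an unbounded family of Beta(a, b) priors over the coin’s bias; the Beta family, the function names, and the particular data are my own illustrative choices, not part of the original example. The point is that for any target posterior mean in (0, 1), even after very lopsided evidence, some prior in the family lands exactly there.

```python
# A minimal sketch of the inertia mechanism, assuming the representor is an
# unbounded family of Beta(a, b) priors over the coin's bias. The parameter
# choices and data below are illustrative, not from the original example.

def posterior_mean(a: float, b: float, heads: int, tails: int) -> float:
    """Posterior mean of the bias under a Beta(a, b) prior, after the data."""
    return (a + heads) / (a + b + heads + tails)

def prior_reaching(target: float, heads: int, tails: int) -> tuple[float, float]:
    """Find Beta(a, b) hyperparameters whose posterior mean equals `target`.

    Fix one hyperparameter at 1 and solve the linear equation for the other;
    for any target in (0, 1), one of the two choices is valid (positive).
    """
    # Fix b = 1 and solve (a + heads) / (a + 1 + heads + tails) = target for a:
    a = (target * (1 + heads + tails) - heads) / (1 - target)
    if a > 0:
        return a, 1.0
    # Otherwise fix a = 1 and solve for b:
    b = (1 + heads) / target - 1 - heads - tails
    return 1.0, b

heads, tails = 90, 10  # heavily heads-biased evidence
for target in (0.01, 0.5, 0.99):
    a, b = prior_reaching(target, heads, tails)
    print(f"target {target}: Beta({a:.1f}, {b:.1f}) prior -> "
          f"posterior mean {posterior_mean(a, b, heads, tails):.3f}")
```

Since a and b may be arbitrarily large, the set of posterior means spans (0, 1) no matter what sequence is observed, which is exactly the inertia described above.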
This is an interesting example, but I wouldn’t use such a wide range in practice, either at the start or, especially, after several flips. I suppose I’d be drawing arbitrary lines, but better that than choosing a single distribution, which to me is far more arbitrary. And if we generalize this example, wouldn’t we be stuck at (0, 1) for basically any event?
The issue is more the being stuck than the width of the range: if it were (0.4, 0.6) rather than (0, 1), you’d still be inert. Vallinder (2018) discusses this extensively, including issues of infectiousness and generality.
You can entertain both a limited range of prior probabilities and a limited range of likelihood functions, and use closed (compact) sets, since you’re away from 0 and 1 anyway. Surely you could update down from 0.6 if you had only one prior and one likelihood; and if you can do so even with the hardest-to-update distribution assigning 0.6, then the right boundary of the interval will come down, as the sketch below illustrates.
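Here is a minimal sketch along the same lines as before, assuming the representor is restricted to a compact family of Beta(a, b) priors with a, b in [1, 10] (the bounds are my own illustrative assumption). Because the posterior mean is monotone in each hyperparameter, the posterior interval sits at corners of the parameter box, and it visibly narrows as evidence accumulates:

```python
# A sketch with a compact (closed, bounded) prior family: Beta(a, b) with
# a, b in [LO, HI]. The bounds and the 30% heads rate are illustrative.

LO, HI = 1.0, 10.0

def posterior_interval(heads: int, tails: int) -> tuple[float, float]:
    """Lower and upper posterior means over the compact prior family.

    The posterior mean (a + heads) / (a + b + heads + tails) is increasing
    in a and decreasing in b, so the extremes sit at corners of the box.
    """
    n = heads + tails
    lower = (LO + heads) / (LO + HI + n)  # a = LO, b = HI
    upper = (HI + heads) / (HI + LO + n)  # a = HI, b = LO
    return lower, upper

for n in (0, 10, 100, 1000):
    heads = round(0.3 * n)  # evidence running at 30% heads
    print(n, posterior_interval(heads, n - heads))
```

Here the upper boundary drops from about 0.91 with no data to about 0.31 after a thousand flips: bounding the family away from the dogmatic extremes is what lets the interval move.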