Credal resilience is a measure of the stability of a credence: the degree to which it is expected to remain unchanged in response to new evidence. A highly resilient credence is one that new evidence is unlikely to move.
Suppose a person has two coins in front of them: coin A and coin B. They have flipped coin A thousands of times and confirmed that it is unbiased, so it seems reasonable for them to have a credence of 0.5 that coin A will land heads on the next flip. Coin B, on the other hand, might be biased or unbiased. However, even if it is biased, the person has no evidence about which way it is biased, and so no reason to think the bias favors heads over tails. In these circumstances, it also seems reasonable for them to have a credence of 0.5 that coin B will land heads on the next flip.
A person can thus have the same credence in two different propositions and yet expect one of those credences to be more easily moved by new evidence. For example, suppose coin B lands heads on each of the first four tosses. Based on these observations, the person's credence that coin B will land heads on the next toss will rise above 0.5, because they now have some evidence that the coin is biased toward heads. If they instead saw coin A land heads four times in a row, their credence that the next toss will land heads would remain close to 0.5, because they already have considerable evidence that the coin is fair.
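This difference can be made precise with a simple Bayesian model. The sketch below is illustrative only: the Beta pseudo-counts are assumptions, with coin A's thousands of observed flips summarized as Beta(1000, 1000) and coin B's unknown bias as a uniform Beta(1, 1) prior. Four heads barely move the credence for coin A but substantially move the credence for coin B.

```python
# A minimal sketch of the coin example, assuming Beta-distributed credences.
# The pseudo-counts below are illustrative assumptions, not figures from the
# entry itself.

def credence_in_heads(heads_seen, tails_seen, prior_heads, prior_tails):
    """Posterior mean of a Beta prior after observing the given flips."""
    a = prior_heads + heads_seen
    b = prior_tails + tails_seen
    return a / (a + b)

COIN_A = (1000, 1000)  # assumed pseudo-counts from extensive testing
COIN_B = (1, 1)        # uniform prior: no evidence about the bias

# Both credences start at 0.5:
print(credence_in_heads(0, 0, *COIN_A))  # 0.5
print(credence_in_heads(0, 0, *COIN_B))  # 0.5

# After observing four heads in a row, they diverge:
print(credence_in_heads(4, 0, *COIN_A))  # ~0.501 (resilient: barely moves)
print(credence_in_heads(4, 0, *COIN_B))  # ~0.833 (not resilient: moves a lot)
```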
In choice situations where we have low-resilience credences, the value of information will usually be higher, because new evidence is more likely to change our credences.

The coin example involves a stochastic process, a case in which the resilience of a subjective probability can be straightforwardly described by a probability distribution and Bayesian updating on observations. But the concept also applies to one-off events involving mainly epistemic uncertainty, such as a forecast about a past event whose outcome is not yet known.
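Returning to the coin example, one rough way to quantify resilience is the expected movement of a credence after one further observation. The sketch below uses this illustrative, non-standard measure, under the same assumed Beta model as above, to show why a single flip carries far more information value for coin B than for coin A.

```python
# A hedged sketch: measuring (lack of) resilience as the expected absolute
# change in the credence after one more flip. This is an illustrative
# measure chosen for this example, not a standard definition.

def expected_credence_shift(a, b):
    """Expected |change| in P(heads) from observing one more flip,
    where (a, b) are Beta pseudo-counts for heads and tails."""
    p = a / (a + b)                     # current credence
    p_if_heads = (a + 1) / (a + b + 1)  # updated credence if heads
    p_if_tails = a / (a + b + 1)        # updated credence if tails
    return p * abs(p_if_heads - p) + (1 - p) * abs(p_if_tails - p)

print(expected_credence_shift(1000, 1000))  # ~0.00025: high resilience
print(expected_credence_shift(1, 1))        # ~0.167: low resilience
```

On these assumptions, one flip of coin B is expected to move the credence several hundred times as much as one flip of coin A, so observing coin B is correspondingly more valuable in decisions that hinge on the credence.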
Further reading
Egan, Andy & Adam Elga (2005) I can’t believe I’m stupid, Philosophical Perspectives, vol. 19, pp. 77–93.
Popper, Karl (1959) The logic of scientific discovery, New York: Basic Books.
Skyrms, Brian (1977) Resiliency, propensities, and causal necessity, The Journal of Philosophy, vol. 74, pp. 704–713.
Related entries
cluelessness | credence | decision theory | expected value | forecasting | model uncertainty | value of information