Hmm I agree with those examples, the first of which wasn’t on my radar for “sans a few broad categories of cases.”
I especially agree with the “sometimes you can update to 0 or 1 because of the nature of the proposition” point for situations where you already have moderately high probability, and I find that case uninteresting. This is possibly an issue with the language of expressing things in odds ratios. So for the example of
“(conditional on seeing Zach again) when I next see Zach, will he appear (to me) to be wearing a mostly-blue shirt?”
maybe my prior probability was 20% (1:4 odds), and my posterior odds are like 99999:1. This ~400,000x update seems unproblematic to me. So I want to exclude that.
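(To sanity-check that figure, here’s a minimal Python sketch using the hypothetical numbers above -- the update factor is just posterior odds over prior odds.)

```python
# Sanity check of the ~400,000x figure (hypothetical numbers from above).
def odds(p):
    """Convert a probability to odds in favor."""
    return p / (1 - p)

prior_p = 0.20              # 20% prior, i.e. 1:4 odds
posterior_odds = 99999 / 1  # 99999:1 posterior odds

update_factor = posterior_odds / odds(prior_p)
print(update_factor)        # 399996.0, i.e. roughly a 400,000x odds update
```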
Maybe you meant to refer only to (binary) propositions (and exclude unprivileged propositions like “the stranger’s name is Mark Xu”).
I do want my operationalization to be more general than binary propositions. How about this revised one:
operationalized e.g. as a >1000x or 10000x odds update on a {question, answer} pair that you’ve considered for at least an hour beforehand and for which you’d settled on a probability <~1/1000 or >~999/1000.
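(Here’s a rough Python sketch of that operationalization as a predicate -- the function name and the exact cutoffs are illustrative, and the “considered for an hour” clause is a precondition on the inputs rather than something code can check.)

```python
def odds(p):
    """Convert a probability to odds in favor."""
    return p / (1 - p)

def is_surprising_update(prior_p, posterior_p,
                         factor_threshold=1000, settle_threshold=1/1000):
    """True if the prior was settled near 0 or 1 and the odds then moved
    by more than factor_threshold (in either direction)."""
    settled = prior_p < settle_threshold or prior_p > 1 - settle_threshold
    factor = (max(odds(prior_p), odds(posterior_p))
              / min(odds(prior_p), odds(posterior_p)))
    return settled and factor > factor_threshold

print(is_surprising_update(prior_p=1/10_000, posterior_p=1/2))
# True: a ~10,000x odds swing from a settled prior
```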
Suppose I thought there was a 1/10,000 chance that the answer to a math question is pi. And then I look at the back of the math textbook and the answer was pi. I’d be like “huh, that’s odd.” And if this happened several times in succession, I could be reasonably confident that it’s much more likely that my math uncertainty is miscalibrated than that I just happened to get unlucky.
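(A toy likelihood-ratio calculation shows why a few repeats are so damning; the 1/10 figure for the “miscalibrated” hypothesis is an assumption, purely for illustration.)

```python
# If I call something 1/10,000 three times and it happens every time, how
# strongly does that favor "I'm miscalibrated" over "I got unlucky"?
p_claimed = 1 / 10_000        # the probability I assigned
p_if_miscalibrated = 1 / 10   # assumed true chance under the miscalibration hypothesis
k = 3                         # number of surprises in a row

likelihood_ratio = (p_if_miscalibrated / p_claimed) ** k
print(likelihood_ratio)       # 1e9: a billion-to-one ratio toward miscalibration
```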
Similarly, if I spent an hour settling on a specific probability that I get struck by lightning tomorrow, or that a specific sequence of numbers wins the Powerball tomorrow, and then was wrong, well, that sure would be weird and surprising.
Minor note: I think it’s kinda inelegant that your operationalization depends on the kinds of question-answer pairs humans consider rather than asserting something about the counterfactual where you consider an arbitrary question-answer pair for an hour.
Hmm I’m not sure I understand the inelegance remark, but I do want to distinguish between something like --
welp, I considered a scientific hypothesis for a while and concluded it had a 10^-9 probability, then light evidence got me to update towards it being 10^-2, then somebody offered an argument and I went back down to 10^-8
which, while not technically excluded by the laws of probability, sure seems wild if my beliefs are anything even approximately a martingale -- from a situation like
hmm surely the probability of meeting a new person in any given microsecond is vanishingly low, what are the odds?
I want to be careful not to borrow the credulity from the second case (a situation that is natural, normal, commonplace under most formulations) and apply it to the first.
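(To quantify the wildness of the first trajectory: a Bayesian’s beliefs form a martingale -- the expected posterior equals the prior, i.e. conservation of expected evidence -- and Doob’s maximal inequality says a nonnegative martingale starting at p ever reaches level t with probability at most p/t. A minimal Python sketch of both facts, with made-up likelihoods:)

```python
# Doob/Ville bound: a nonnegative martingale starting at p ever reaches
# level t with probability at most p/t. Starting at 1e-9, climbing to 1e-2
# happens at most ~1e-7 of the time -- so the 1e-9 -> 1e-2 -> 1e-8 path is wild.
print(1e-9 / 1e-2)  # 1e-07

# Conservation of expected evidence: averaging the posterior over the
# possible observations recovers the prior exactly (made-up likelihoods).
prior = 0.3
p_e_given_h, p_e_given_not_h = 0.8, 0.1  # assumed likelihoods of evidence E

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
post_if_e = prior * p_e_given_h / p_e
post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)

expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(expected_posterior)  # 0.3 -- exactly the prior (up to float rounding)
```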