A comment I left on Knightian Uncertainty here:
The way this finally clicked for me was: Sure, Bayesian probability theory is the one true way to do probability. But you can’t actually implement it.
In particular, problems I’ve experienced are:
- I’m sometimes not sure about my calibration in new domains
- Sometimes something happens that I couldn’t have predicted beforehand (particularly if it’s very specific), and it’s not clear what the Bayesian update should be. Note that I’m talking about “something took me completely by surprise” rather than “something happened to which I assigned a low probability.”
- I can’t actually compute how many bits of evidence new data carries. For instance, when I get some new information, I don’t instantaneously know that I was at 12.345% and am now at 54.321%. I have to think about it, and before I’ve thought about it I’m sometimes like a deer in the headlights: my probability might be “Aaaah, I don’t know.”
- Sometimes I’ll be in an uncertain situation, and yes, I’m uncertain, but I’d still offer a $10k bet on it. Or I’d offer a smaller bet with a spread (e.g., I’d be willing to bet $100 at 1:99 odds in favor but 5:95 against). But sometimes I really am just very reluctant to bet.
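The “bits of evidence” bookkeeping in the third bullet can be made concrete: an update is worth the log2 of the ratio of posterior odds to prior odds, and the odds quoted in a bet imply a break-even probability. A minimal sketch, using the numbers from the bullets above (the helper names `bits_of_evidence` and `implied_probability` are mine, not standard terms):

```python
import math

def bits_of_evidence(prior: float, posterior: float) -> float:
    """Bits of evidence required to move from `prior` to `posterior`:
    log2 of the ratio of posterior odds to prior odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return math.log2(posterior_odds / prior_odds)

def implied_probability(stake_for: float, stake_against: float) -> float:
    """Break-even probability implied by betting odds stake_for:stake_against."""
    return stake_for / (stake_for + stake_against)

# Going from 12.345% to 54.321% takes roughly 3 bits of evidence.
print(round(bits_of_evidence(0.12345, 0.54321), 2))  # -> 3.08

# A 1:99-in-favor / 5:95-against spread brackets the probability:
print(implied_probability(1, 99), implied_probability(5, 95))
```

Note that the spread in the last line is exactly the “un-Bayesian” behavior being described: a single credence would pin down one fair price, whereas quoting 1:99 one way and 5:95 the other leaves a gap between 1% and 5% where you decline to bet.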
Some links to dig into this might be:
- Reflective Bayesianism: https://www.lesswrong.com/posts/vpvLqinp4FoigqvKy/reflective-bayesianism
- Radical Probabilism: https://www.lesswrong.com/posts/xJyY5QkQvNJpZLJRo/radical-probabilism-1
That said, I do think that people are too eager to say that something is under “Knightian uncertainty” when they could just put up a question on Metaculus (or on a prediction market) about it.