I think it’s super exciting—a really useful application of probability!
I don’t know as much as I’d like to about Tetlock’s work. My understanding is that it has focused mostly on geopolitical events, where forecasters have been impressively successful. Geopolitical events are the kind of thing people are in a reasonably good position to predict: we’ve seen lots of geopolitical events in the past that are similar to the events we expect to see in the future, and we have decent theories that can explain why certain events came to pass while others didn’t.
I doubt that Tetlock-style forecasting would be as fruitful in unfamiliar domains that involve Knightian-ish uncertainty. Forecasting may not be particularly reliable for questions like:
- Will we have a detailed, broadly accepted theory of consciousness this century?
- Will quantum computers take off in the next 50 years?
- Will any humans leave the solar system by 2100?
(That said, following Tetlock’s guidelines may still be worthwhile if you’re trying to predict hard-to-predict things.)
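(As an aside: the “success” of forecasters in Tetlock’s tournaments is typically measured with the Brier score, the mean squared error between probabilistic forecasts and binary outcomes. A minimal sketch, with a function name of my own choosing:)

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts (0..1) and
    binary outcomes (0 = didn't happen, 1 = happened).
    Lower is better; a constant 50% forecast always scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who assigned 0.9 to two events that happened and
# 0.2 to one that didn't scores about 0.02, far better than chance:
print(brier_score([0.9, 0.9, 0.2], [1, 1, 0]))
```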
I think I agree with everything you’ve said there, except that I’d prefer to stay away from the term “Knightian”, as it seems to be so often taken to refer to an absolute, binary distinction. It seems you wouldn’t endorse that binary distinction yourself, given that you say “Knightian-ish”, and that in your post you write:
> we don’t need to assume a strict dichotomy separates quantifiable risks from unquantifiable risks. Instead, real-world uncertainty falls on something like a spectrum.
But I think, whatever one’s own intentions, the term “Knightian” sneaks in a lot of baggage and connotations. On top of that, the term is interpreted in many different ways by different people. For example, I recently saw events very similar to the ones you contrasted against cases of Knightian-ish uncertainty used as examples to explain the concept of Knightian uncertainty (in this paper):
> Finally, there are situations with so many unique features that they can hardly be grouped with similar cases, such as the danger resulting from a new type of virus, or the consequences of military intervention in conflict areas. These represent cases of (Knightian) uncertainty where no data are available to estimate objective probabilities. While we may rely on our subjective estimates under such conditions, no objective basis exists by which to judge them (e.g., LeRoy & Singell, 1987).

(emphasis added)
So I see the term “Knightian” as introducing more confusion than it’s worth, and I’d prefer to use it only if I also give caveats to that effect, or to highlight the confusions it causes. Typically, I’d prefer to rely instead on terms like more or less resilient, precise, or (your term) hazy probabilities/credences. (I collected various terms that can be used for this sort of idea here.)
[I know this comment is very late to the party, but I’m working on some posts about the idea of a risk-uncertainty distinction, and was re-reading your post to help inform that.]
Do you have any thoughts on Tetlock’s work which recommends the use of probabilistic reasoning and breaking questions down to make accurate forecasts?