While browsing types of uncertainty, I stumbled upon the idea of state space uncertainty and conscious unawareness, which sounds similar to your explanation of cluelessness and might be another helpful angle for people with a more Bayesian perspective.
There are, in the real world, unforeseen contingencies: eventualities that even the educated decision maker will fail to foresee. For instance, the recent tsunami and subsequent nuclear meltdown in Japan are events that most agents would have omitted from their decision models. If a decision maker is aware of the possibility that they may not be aware of all relevant contingencies—a state that Walker and Dietz (2011) call ‘conscious unawareness’—then they face state space uncertainty.
https://link.springer.com/article/10.1007/s10670-013-9518-4
A good point.
There are things you can do to correct for this sort of thing: for instance, go one level more meta and estimate the probability of unforeseen contingencies in general, or within the class of problems that your specific problem fits into.
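As a toy illustration of what "going one level more meta" could look like in an expected-value calculation, here is a sketch that reserves some probability mass for a generic catch-all state. Every number in it is invented for illustration, including the base rate and payoff of the catch-all:

```python
# Toy sketch: reserve probability mass for a catch-all "unforeseen
# contingency" state before computing expected value. All numbers
# are made up for illustration.

# Outcomes we actually thought of: {name: (subjective probability, payoff)}.
known_outcomes = {
    "success": (0.70, 100.0),
    "partial_success": (0.20, 30.0),
    "known_failure_mode": (0.05, -50.0),
}

# One level more meta: how often are problems in this reference class
# hit by contingencies nobody modelled, and how bad is that on average?
p_unforeseen = 0.05         # assumed base rate of unknown unknowns
payoff_unforeseen = -200.0  # pessimistic guess at their average payoff

# Squeeze the known outcomes into the remaining probability mass.
known_mass = sum(p for p, _ in known_outcomes.values())
scale = (1.0 - p_unforeseen) / known_mass

ev = sum(scale * p * payoff for p, payoff in known_outcomes.values())
ev += p_unforeseen * payoff_unforeseen
print(f"EV with catch-all state: {ev:.1f}")  # 63.5 here, vs. ~77.4 without it
```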
We couldn’t have predicted the Fukushima disaster, but perhaps we can predict related things with some degree of certainty—the average cost and death toll of earthquakes worldwide, for instance. In fact, this is a fairly well-explored space, since insurers have to understand earthquake risk.
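A minimal sketch of that kind of insurer-style base-rate estimate: multiply how often each severity of quake occurs by what it costs. The frequency/cost buckets below are placeholders I made up, not real actuarial data:

```python
# Hedged sketch of an expected annual loss estimate from historical-style
# frequencies. The buckets below are placeholder numbers, not real data.

# (approximate events per year worldwide, approximate cost per event in $B)
severity_buckets = [
    (100.0, 0.01),  # small damaging quakes: frequent, cheap
    (10.0, 1.0),    # moderate quakes
    (0.5, 20.0),    # major quakes
    (0.05, 200.0),  # rare catastrophic quakes (Tohoku-scale)
]

expected_annual_loss = sum(freq * cost for freq, cost in severity_buckets)
print(f"Expected annual loss: ~${expected_annual_loss:.0f}B")  # ~$31B here
```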
The ongoing pandemic is a harder example—the rarer the black swan, the harder it is to predict. But even then, prior to the 2020 pandemic, the WHO had estimated the amortized cost of pandemics at on the order of 1% of global GDP annually (averaged over years with and without pandemics), which seems like a reasonable approximation.
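Back-of-the-envelope, that amortization works out like this. The GDP figure is roughly right for 2019; the pandemic frequency and severity are invented assumptions I picked to land near the ~1% figure, not WHO numbers:

```python
# Back-of-the-envelope check on the ~1%-of-GDP amortized figure.
# Global GDP is roughly right for 2019; the frequency and severity
# below are invented assumptions, not WHO numbers.

global_gdp = 85e12              # ~$85 trillion, pre-2020
p_pandemic_per_year = 0.03      # assumed chance of a major pandemic in a year
gdp_hit_when_it_happens = 0.30  # assumed fraction of GDP lost in that event

amortized = p_pandemic_per_year * gdp_hit_when_it_happens * global_gdp
print(f"Amortized cost: ~${amortized / 1e9:.0f}B/year "
      f"({amortized / global_gdp:.1%} of global GDP)")  # ~$765B/year, 0.9%
```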
I don’t know how realistic a solution that would be in practice, though.