There is an argument from intuition by Schoenfield (2012), one that carries some force, that we can't use a probability function:
1. It is permissible to be insensitive to mild evidential sweetening.
2. If we are insensitive to mild evidential sweetening, our attitudes cannot be represented by a probability function.
3. It is permissible to have attitudes that are not representable by a probability function. (1, 2)
...
You are a confused detective trying to figure out whether Smith or Jones committed the crime. You have an enormous body of evidence to evaluate. Here is some of it: You know that 68 out of the 103 eyewitnesses claim that Smith did it but Jones’ footprints were found at the crime scene. Smith has an alibi, and Jones doesn’t. But Jones has a clear record while Smith has committed crimes in the past. The gun that killed the victim belonged to Smith. But the lie detector, which is accurate 71% of the time, suggests that Jones did it. After you have gotten all of this evidence, you have no idea who committed the crime. You are no more confident that Jones committed the crime than that Smith committed the crime, nor are you more confident that Smith committed the crime than that Jones committed the crime.
...
Now imagine that, after considering all of this evidence, you learn a new fact: it turns out that there were actually 69 eyewitnesses (rather than 68) testifying that Smith did it. Does this make it the case that you should now be more confident in S than J? That, if you had to choose right now who to send to jail, it should be Smith? I think not.
...
In our case, you are insensitive to evidential sweetening with respect to S since you are no more confident in S than ~S (i.e. J), and no more confident in ~S (i.e. J) than S. The extra eyewitness supports S more than it supports ~S, and yet despite learning about the extra eyewitness, you are no more confident in S than you are in ~S (i.e. J).
Intuitively, this sounds right. And if you approached this problem trying to solve the crime on intuition alone, you might really have no idea. Reading the passage, it sounds mind-boggling.
On the other hand, if you applied some reasoning and study, you might be able to come up with some probability estimates. You could estimate the conditional probability P(Smith did it | an eyewitness says Smith did it), and put a probability distribution on that probability itself, if you like. You could work out how to combine evidence from multiple witnesses, i.e., P(Smith did it | eyewitness 1 says Smith did it, eyewitness 2 says Smith did it, ...), and so on up to 68 and 69. You could estimate how independent the eyewitnesses are, and from that work out how to properly combine their testimony.
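As a minimal sketch of what that could look like (not anything from Schoenfield's paper): assume, purely for illustration, a 50/50 prior, that each eyewitness independently names the true culprit 70% of the time, and that the remaining 35 of the 103 witnesses name Jones.

```python
# Toy Bayesian model of the detective case. Every parameter here is a
# made-up placeholder, not an estimate from the original example.

def posterior_smith(n_for_smith, n_for_jones, prior=0.5, p_correct=0.7):
    """P(Smith did it | witness counts), assuming each witness
    independently names the true culprit with probability p_correct."""
    # Per-witness likelihood ratio:
    # P(names Smith | Smith did it) / P(names Smith | Jones did it)
    lr = p_correct / (1 - p_correct)
    # Each witness for Smith multiplies the odds by lr; each for Jones divides.
    odds = (prior / (1 - prior)) * lr ** (n_for_smith - n_for_jones)
    return odds / (1 + odds)

print(posterior_smith(68, 35))  # ~1.0: under full independence, 33 net
print(posterior_smith(69, 35))  # witnesses would be overwhelming evidence
```

That this toy model makes the eyewitness evidence overwhelming is itself informative: the interesting cases are exactly the ones where the independence assumption fails.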
And it might turn out that you don’t update as a result of the extra eyewitness, under some circumstances. Perhaps you know the eyewitnesses aren’t independent; they’re all card-carrying members of the “We hate Smith” club. In that case it simply turns out that the extra eyewitness is irrelevant to the problem; it doesn’t qualify as evidence, so it doesn’t mean you’re insensitive to “mild evidential sweetening”.
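In the same toy model, perfectly correlated witnesses have a per-witness likelihood ratio of 1, so the 69th witness moves nothing:

```python
# If a witness's testimony is driven by club membership rather than by
# what they saw, then P(names Smith | Smith did it) equals
# P(names Smith | Jones did it), and the likelihood ratio is 1.

def posterior_smith_club(n_witnesses, prior=0.5):
    lr = 1.0  # a "We hate Smith" club member names Smith either way
    odds = (prior / (1 - prior)) * lr ** n_witnesses
    return odds / (1 + odds)

print(posterior_smith_club(68))  # 0.5
print(posterior_smith_club(69))  # still 0.5: the extra witness is no evidence
```

The point of the sketch is only that “no update” can fall out of an explicit probabilistic model; it doesn’t require attitudes that no probability function can represent.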
I think a lot of the problem here is that these authors are discussing what one could do when one sits down for the first time and tries to grapple with a problem. In those cases there are so many undefined features of the problem that it really does seem impossible and you really are clueless.
But that’s not the same as saying that, with sufficient time, you can’t put probability distributions on everything that’s relevant and try to work out the joint probability.
----
Schoenfield, M. (2012). Chilling out on epistemic rationality. Philosophical Studies, 158, 197–219.
While browsing types of uncertainties, I stumbled upon the idea of state space uncertainty and conscious unawareness, which sounds similar to your explanation of cluelessness and which might be another helpful angle for people with a more Bayesian perspective.
There are, in the real world, unforeseen contingencies: eventualities that even the educated decision maker will fail to foresee. For instance, the recent tsunami and subsequent nuclear meltdown in Japan are events that most agents would have omitted from their decision models. If a decision maker is aware of the possibility that they may not be aware of all relevant contingencies—a state that Walker and Dietz (2011) call ‘conscious unawareness’—then they face state space uncertainty.
https://link.springer.com/article/10.1007/s10670-013-9518-4
A good point. There are things you can do to correct for this sort of thing: for instance, go one level more meta and estimate the probability of unforeseen consequences in general, or within the class of problems that your specific problem fits into.
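One crude way to operationalize that, as a sketch: hold back probability mass for a catch-all “something I haven’t modeled” outcome, sized from the historical rate of surprises in the relevant reference class. The 5% figure below is a made-up placeholder, not an estimate.

```python
# Toy sketch: reserve probability mass for unmodeled contingencies.
# p_unforeseen would come from base rates in the problem's reference
# class; 0.05 here is purely illustrative.

def with_catchall(named, p_unforeseen=0.05):
    """Rescale a distribution over named outcomes so that p_unforeseen
    mass is held back for contingencies not in the model."""
    assert abs(sum(named.values()) - 1.0) < 1e-9, "named outcomes must sum to 1"
    out = {k: v * (1 - p_unforeseen) for k, v in named.items()}
    out["unforeseen"] = p_unforeseen
    return out

print(with_catchall({"quake, no meltdown": 0.9, "quake, meltdown": 0.1}))
```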
We couldn’t have predicted the Fukushima disaster, but perhaps we can predict related things with some degree of certainty—the average cost and death toll of earthquakes worldwide, for instance. In fact, this is a fairly well-explored space, since insurers have to understand the risk of earthquakes.
The ongoing pandemic is a harder example—the rarer the black swan, the more difficult it is to predict. But even then, prior to the 2020 pandemic, the WHO had estimated the amortized cost of pandemics at on the order of 1% of global GDP annually (averaged over years with and without pandemics), which seems like a reasonable approximation.
I don’t know how much of a realistic solution that would be in practice.
This is a great example, thanks for sharing!