If you have X% credence in a theory that produces 30% and Y% credence in a theory that produces 50%, then your actual probability is just the credence-weighted sum. Having a range of subjective probabilities does not make sense!
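The weighted sum works out as a one-liner; the numbers below are hypothetical stand-ins for X% and Y%, not taken from the comment:

```python
# Credence-weighted mixture over theories.
# Hypothetical credences: 60% in the theory that says 30%,
# 40% in the theory that says 50%.
theories = [(0.60, 0.30), (0.40, 0.50)]  # (credence, probability)
p = sum(c * q for c, q in theories)      # 0.60*0.30 + 0.40*0.50 = 0.38
print(p)
```

So even though the two theories give a "range" of 30-50%, the Bayesian mixture collapses it to a single number.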
Couldn’t it just be that those people aren’t (yet) able to sum/integrate over those ranges? I think about it like this: for very routine cognitive tasks, like categorization, there might be some rather precise representation of p(dog|data) in our brains. This information is useful, but we are not trained in consciously putting it into precise buckets, so it’s like we look at our internal p(dog|data)=70% through a really blurry lens and can’t say more than “something in the range of 60-80%”. With more training in probabilistic reasoning, we get better lenses and end up being Superforecasters who can reliably see 1% differences.
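The blurry-lens picture can be sketched as a toy model: a precise internal probability is read through Gaussian noise and then snapped to the coarsest bucket the reader trusts. The noise levels and bucket widths below are made-up assumptions, not anything from the comment:

```python
import random

def report(p_internal, noise_sd, bucket):
    """Read a precise internal probability through a noisy 'lens',
    then round the noisy reading to the nearest reportable bucket."""
    noisy = p_internal + random.gauss(0, noise_sd)
    noisy = min(max(noisy, 0.0), 1.0)  # clamp to [0, 1]
    return round(noisy / bucket) * bucket

random.seed(0)
# Untrained reader: wide noise, coarse 10%-buckets -> "60-80%" territory
print([report(0.70, 0.05, 0.10) for _ in range(5)])
# Trained Superforecaster: narrow noise, 1%-buckets -> clusters near 70%
print([report(0.70, 0.005, 0.01) for _ in range(5)])
```

On this model the underlying p(dog|data) is a point value all along; only the reporting resolution improves with training.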