My immediate response is that I’m making very few theoretical commitments (at least above the commitments I’m already making by using credences in the first place), though I haven’t thought about this a lot.
Note in particular that e.g. saying '30-50%' on my interpretation is perfectly consistent with having a sharp credence (say, 37.123123976%) at the same time.
It is also consistent with representing only garden-variety empirical uncertainty: essentially making a prediction of how much additional empirical evidence I would acquire within a certain amount of time, and how much that evidence would update my credence. So no commitment to logical uncertainty required.
Admittedly, in practice I do think I'd often find the sharp credence hard to access, and the credible interval would represent some mix of empirical and logical uncertainty (or similar). But at least in principle one could try to explain this in a similar way to how one explains other human deviations from idealized models of rationality, i.e. without making additional commitments about the theory of idealized rationality.
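To make the "prediction of future evidence" reading concrete, here is a small Monte Carlo sketch (all numbers are illustrative assumptions, not from the discussion above): an agent holds a sharp credence in a hypothesis H, expects to observe a fixed number of noisy binary signals, and asks how its credence would look after updating on them. The spread of simulated future credences yields an interval like '30-50%', even though the expected future credence stays pinned at the current sharp value (by the martingale property of Bayesian updating).

```python
import random

random.seed(0)

p = 0.37                 # assumed sharp current credence in H
q_h, q_not = 0.8, 0.4    # assumed chance of a "positive" signal under H / not-H
n_obs, n_sims = 10, 20000

def posterior(prior, k, n):
    # Bayes update after observing k positive signals out of n
    lik_h = q_h**k * (1 - q_h)**(n - k)
    lik_n = q_not**k * (1 - q_not)**(n - k)
    return prior * lik_h / (prior * lik_h + (1 - prior) * lik_n)

future = []
for _ in range(n_sims):
    # nature draws H with probability p, then emits signals accordingly
    h = random.random() < p
    q = q_h if h else q_not
    k = sum(random.random() < q for _ in range(n_obs))
    future.append(posterior(p, k, n_obs))

future.sort()
lo, hi = future[int(0.1 * n_sims)], future[int(0.9 * n_sims)]
mean = sum(future) / n_sims

print(f"sharp credence now:            {p:.2f}")
print(f"expected future credence:      {mean:.3f}")  # approximately p
print(f"80% interval of future credences: [{lo:.2f}, {hi:.2f}]")
```

The interval here is wide only because the anticipated evidence is informative; nothing about it conflicts with the agent simultaneously having the sharp number 0.37 right now.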