Thanks, Richard! In some sense, I think I agree; as I say in the conclusion, I’m most inclined to think this is one of those cases where we’ve got a philosophical argument we don’t immediately know how to refute for a conclusion that we should nonetheless reject, and so we ought to infer that one of the premises must be false.
On the other hand, I think I’m most inclined to say that the problem lies in the fact that standard models using imprecise credences and their associated decision rules have too little structure in how they model our epistemic predicament, while still granting that our evidence fails to rule out probability functions that put enough probability mass on potential bad downstream effects to make AMF come out worse, in expected-value terms, relative to such a function. I’m more inclined to identify the problem as being that the maximality rule gives probability functions of that kind too much of a say when it comes to determining permissibility. Other standard decision rules for imprecise credences arguably suffer from similar issues. David Thorstad and I look a bit more in depth at decision rules that draw inspiration from voting theory and rely on some kind of measure on the set of admissible probability functions in our paper ‘Tough enough? Robust satisficing as a decision norm for long-term policy analysis’, but we weren’t especially sold on them.
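To make the worry concrete, here’s a minimal sketch in Python of how the maximality rule behaves. All the numbers, options, and functions are made up for illustration, not taken from the paper: the point is just that one pessimistic member of the representor is enough to make both options come out permissible.

```python
# Expected value of each option under each admissible probability
# function (the "representor"). Numbers are purely illustrative.
# Most functions favour donating to AMF, but the fourth, pessimistic
# one (weighting bad downstream effects heavily) reverses the ranking.
expected_values = {
    "donate_AMF": [10.0, 9.0, 8.0, -5.0],
    "do_nothing": [0.0, 0.0, 0.0, 0.0],
}

def maximality_permissible(option, evs):
    """An option is permissible iff no alternative has strictly higher
    expected value under *every* admissible probability function."""
    return not any(
        all(alt_ev > ev for alt_ev, ev in zip(evs[alt], evs[option]))
        for alt in evs if alt != option
    )

for opt in expected_values:
    print(opt, maximality_permissible(opt, expected_values))
# Both options come out permissible: the single pessimistic function
# stops "do_nothing" from being dominated, which is the sense in which
# that function gets "too much of a say".
```

On this toy representor, “donate_AMF” beats “do_nothing” on three of the four functions, but since it loses on one, neither option dominates the other, so maximality permits both.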
Thanks, yeah, I remember liking that paper. Though I’m inclined to think you should assign (precise) higher-order probabilities to the various “admissible probability functions”, from which you can derive a kind of higher-order expected value verdict, which helpfully seems to avoid the problems afaict?
General lesson: if we don’t have any good way of dealing with imprecise credences, we probably shouldn’t regard them as rationally mandatory. Especially since the case for thinking that we must have imprecise credences (i.e., that any kind of precision is necessarily irrational) seems kind of weak.
General lesson: if we don’t have any good way of dealing with imprecise credences, we probably shouldn’t regard them as rationally mandatory.
I worry that this is motivated reasoning. Should what we can justifiably believe will happen as a consequence of our actions depend on whether it results in satisfactory moral consequences (e.g. avoiding paralysis)?
I’m more inclined to identify the problem as being that the maximality rule gives probability functions of that kind too much of a say when it comes to determining permissibility.
Another response could be to look for more structure in our credences that we’ve failed to capture. Say we have a bunch of probability functions according to which AMF is bad and a bunch according to which it is good, but we nonetheless think AMF is good. Why would we think that? If we’re epistemically rational, it would presumably be because we doubt the functions according to which AMF is bad more than the ones according to which it is good. So we’ve actually failed to adequately capture our credences and their structure with these probability functions as they stand.
One way to represent this is to have a single higher-order probability function that mixes all of those probability functions (the “(precise) higher-order probabilities to the various ‘admissible probability functions’”), reducing to precise credences, in such a way that AMF turns out to look good, as @Richard Y Chappell suggests in reply here. Another, which still permits imprecise credences, is to have multiple such mixing functions over the probability functions, but such that AMF still looks good on each mixing function. If you’re sympathetic to imprecise credences in the first place (like I am), the latter seems like a pretty good solution.
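Here’s a minimal sketch of both responses in Python, again with made-up numbers (the expected values and higher-order weights are purely illustrative):

```python
# evs[i] is the expected value of donating to AMF under the i-th
# admissible probability function; the fourth is the pessimistic one.
evs = [10.0, 9.0, 8.0, -5.0]

def mixed_ev(weights, evs):
    """Higher-order expected value: weight each first-order expected
    value by a (precise) higher-order probability assigned to the
    corresponding admissible probability function."""
    return sum(w * ev for w, ev in zip(weights, evs))

# Response 1: a single higher-order distribution that down-weights the
# pessimistic function, reducing everything to one precise verdict.
single_mix = [0.3, 0.3, 0.3, 0.1]
print(mixed_ev(single_mix, evs) > 0)  # True: AMF looks good

# Response 2: keep imprecision one level up. Several candidate mixing
# functions are admissible, but AMF looks good on all of them.
mixes = [
    [0.3, 0.3, 0.3, 0.1],
    [0.25, 0.25, 0.25, 0.25],
    [0.2, 0.2, 0.2, 0.4],
]
print(all(mixed_ev(m, evs) > 0 for m in mixes))  # True
```

Note that in the second case the verdict is only robust because AMF comes out positive on every admissible mixing function; if one mix put enough weight on the pessimistic function, we’d be back to the original problem one level up.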
Of course, an alternative explanation could be that we aren’t actually justified in thinking AMF is good. We should be careful in how we pick these higher-order probabilities to avoid motivated reasoning, and remain open to the possibility that AMF is not actually robustly good.