Thank you, Michael!
To your first point, that we have replaced arbitrariness about the threshold of probabilities with arbitrariness about how uncertain we must be before rounding down: I suppose I’m more inclined to accept that decisions about which metaprinciples to apply will be context-sensitive, vague, and unlikely to be captured by any simple, idealized decision theory. A non-ideal agent deciding when to round down has to juggle many different factors: their epistemic limitations, asymmetries in evidence, the costs of being right or wrong, past track records, and so on. I doubt that any decision theory is both stateable and clear on this point. Even if there is a non-arbitrary threshold, I have trouble saying what it is. That is probably not a very satisfying response! I did enjoy Weatherson’s latest, which touches on this point.
You suggest that the epistemic defenses of rounding down would also bolster decision-theoretic defenses of it. It’s worth thinking about what a defense of ambiguity aversion would look like; indeed, it might turn out to be the same as the epistemic defense given here. I don’t have a favorite formal model of ambiguity aversion, so I’m all ears if you do!
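Since the thread raises the question of formal models of ambiguity aversion, one standard candidate is maxmin expected utility in the style of Gilboa and Schmeidler: evaluate each act by its worst-case expected utility over a set of candidate priors. Here is a minimal sketch; the function name, the urn numbers, and the finite set of priors are illustrative assumptions, not anything from the discussion:

```python
# Sketch of maxmin expected utility (Gilboa-Schmeidler style): an act is
# scored by its worst-case expected utility over a set of candidate priors.

def maxmin_eu(payoffs, priors):
    """Worst-case expected utility of an act over a set of priors.

    payoffs: utilities, one per state
    priors:  probability distributions over the states
    """
    return min(
        sum(p * u for p, u in zip(prior, payoffs))
        for prior in priors
    )

# Ellsberg-style urn: 1/3 of the balls are red; the remaining 2/3 are an
# unknown mix of black and yellow. States are (red, black, yellow), and we
# represent the ambiguity with a few candidate priors.
priors = [(1/3, b, 2/3 - b) for b in (0.0, 1/3, 2/3)]

bet_on_red = (1, 0, 0)    # pays 1 if red; EU is 1/3 under every prior
bet_on_black = (0, 1, 0)  # pays 1 if black; worst-case EU is 0

# Maxmin prefers betting on red (1/3 > 0) - the classic Ellsberg pattern
# that ambiguity aversion is meant to capture.
```

The ambiguity-averse preference falls out of the worst-case evaluation: the known-probability bet keeps its value under every prior, while the ambiguous bet is dragged down by its least favorable prior.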
Couldn’t the decision theory just do exactly the same and follow the same procedures? It could also be context-sensitive, vague, and complex.
How do we draw the line between which parts are epistemic vs decision-theoretic here? Maybe it’s kind of arbitrary? Maybe they can’t be cleanly separated?
I’m inclined to say that when we’re considering the stakes to decide which credences to use, that’s decision-theoretic, not epistemic, because it would look like motivated reasoning if it were epistemic. It seems very wrong to me to say that an outcome is more likely just because it would be worse (or more important) if it happened. If instead the epistemic approach isn’t saying the outcome is actually more likely, only that we shouldn’t round it down in practical decision-making when it’s morally significant enough, then why is this epistemic rather than decision-theoretic? It seems like a matter of deciding what to do with our credences, i.e. a decision procedure, which is typically the domain of decision theory.
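The bookkeeping question here can be made concrete with a toy stakes-sensitive procedure. The thresholds and function names below are made up for illustration; the point is just that the identical computation can be filed as an epistemic step (choosing which credence to feed in) or as a decision-theoretic step (choosing how to act on an unchanged credence):

```python
# Toy stakes-sensitive rounding procedure. Whether we call the rounding
# "epistemic" (adjusting the credence we use) or "decision-theoretic"
# (adjusting how we act on it), the resulting choices are the same.
# Both thresholds are arbitrary illustrative values.

CREDENCE_FLOOR = 0.01   # credences below this are candidates for rounding
STAKES_CUTOFF = 1000.0  # "morally significant" stakes block the rounding

def effective_credence(credence, stakes):
    """Epistemic framing: pick which credence enters the decision."""
    if credence < CREDENCE_FLOOR and stakes < STAKES_CUTOFF:
        return 0.0
    return credence

def expected_value(credence, stakes):
    """Decision-theoretic framing: act on the raw credence, but drop
    tiny-probability terms unless the stakes are significant."""
    return effective_credence(credence, stakes) * stakes

expected_value(0.005, 100.0)      # low stakes: the 0.005 term is dropped
expected_value(0.005, 1_000_000)  # high stakes: the raw credence is kept
```

Note that the high-stakes branch never claims the outcome is more likely; it only refuses to discard the term, which is what makes the epistemic/decision-theoretic labeling feel arbitrary.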
Maybe it’s harder to defend something on decision-theoretic grounds if it leads to Dutch books or money pumps? The procedure would lead to the same results regardless of which parts we call epistemic or decision-theoretic, but we could avoid blaming the decision theory for the apparent failures of instrumental rationality. That said, I’m also not sold on treating such money pump and Dutch book arguments as proof of a failure of instrumental rationality at all.
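For concreteness, here is the kind of Dutch book that naive rounding down invites. The numbers are my own illustration, not from the thread: an agent rounds a credence of 0.03 in event A down to 0 while leaving the 0.97 credence in not-A untouched, so their betting prices sum to less than 1 and a bookie can lock in the gap:

```python
# Worked Dutch book against naive rounding down (illustrative numbers).
# The agent's credences become sub-additive: 0.0 + 0.97 = 0.97 < 1.

rounded_p_A = 0.0   # credence 0.03 in A, rounded down to zero
p_not_A = 0.97      # credence in not-A, left untouched

# The agent sells each $1 bet at its perceived fair price.
price_bet_A = rounded_p_A * 1.0   # sells "pays $1 if A" for $0.00
price_bet_not_A = p_not_A * 1.0   # sells "pays $1 if not-A" for $0.97

# Exactly one of A / not-A occurs, so whichever way the world goes, the
# bookie collects $1 from one bet and $0 from the other.
agent_receipts = price_bet_A + price_bet_not_A  # $0.97 up front
agent_payout = 1.0                              # $1 guaranteed out
agent_net = agent_receipts - agent_payout       # about -$0.03, a sure loss
```

Whether this sure loss convicts the agent of instrumental irrationality, or merely shows that the rounding belongs outside the credences proper, is exactly the question at issue above.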