The problem of arbitrariness has been pushed back from having no external standard for our rounding down value to having some arbitrariness about when that external standard applies. Some progress has been made.
It seems like we’ve just moved the same problem somewhere else? Let S be “that external standard” to which you refer. What external standard do we use to decide when S applies? It’s hard to know whether this is progress until we can actually define and justify that additional external standard. Maybe we’re heading into a dead end, or it’s just external standards all the way down.
Ultimately, if there’s a precise number that looks arbitrary, like the threshold here, we’re going to have to rely on some precise (and, I’d guess, arbitrary-seeming) direct intuition about some number.
Second, the epistemic defense does not hold that the normative laws change at some arbitrary threshold, at least when it comes to first-order principles of rational decision.
Doesn’t it still mean the normative laws — as epistemology is also normative — change at some arbitrary threshold? Seems like basically the same problem to me, and equally objectionable.
Likewise, at first glance (and I’m an expert in neither decision theory nor epistemology), your other responses to the objections in your epistemic defense seem usable for decision-theoretic rounding down. One of your defenses of epistemic rounding down is stakes-sensitive, but then it doesn’t seem so different from risk aversion, ambiguity aversion, and their difference-making versions, which are decision-theoretic stances.
In particular
Suppose we adopt Moss’s account on which we are permitted to identify with any of the credences in our interval and that our reasons for picking a particular credence will be extra-evidential (pragmatic, ethical, etc.). In this case, we have strong reasons for accepting a higher credence for the purposes of action.
sounds like an explicit endorsement of motivated reasoning to me. What we believe about what will happen, i.e. which credences we pick, shouldn’t depend on ethical considerations, i.e. our (ethical) preferences. If we’re talking about picking credences from a set of imprecise credences to use in practice, then this seems to fall squarely under decision-theoretic procedures, like ambiguity aversion. So such a procedure seems better justified to me as decision-theoretic.
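For concreteness, a standard formal version of such a decision-theoretic procedure is maxmin expected utility (the Gilboa–Schmeidler model of ambiguity aversion), which acts on the worst-case credence in the set rather than first picking one credence to believe. A minimal sketch, with made-up numbers:

```python
# A minimal sketch, assuming the Gilboa-Schmeidler maxmin model of ambiguity
# aversion: evaluate each act by its worst-case expected utility over the
# whole set of credences, instead of first picking one credence to believe.

def expected_utility(p, payoff_if_event, payoff_otherwise):
    """Expected utility of an act, given a precise credence p in the event."""
    return p * payoff_if_event + (1 - p) * payoff_otherwise

def maxmin_value(credence_set, payoff_if_event, payoff_otherwise):
    """Worst-case expected utility over an imprecise credence set."""
    return min(
        expected_utility(p, payoff_if_event, payoff_otherwise)
        for p in credence_set
    )

# Hypothetical imprecise credence in a bad outcome: between 0.001 and 0.01.
credences = [0.001, 0.005, 0.01]

# Act 1: do nothing; lose 1000 utils if the bad outcome happens.
do_nothing = maxmin_value(credences, -1000, 0)  # worst case uses p = 0.01
# Act 2: pay a sure cost of 5 utils to prevent the bad outcome entirely.
prevent = -5.0

print(do_nothing, prevent)  # -10.0 -5.0: the ambiguity-averse agent prevents
```

Note that acting on the highest credence in the interval “for the purposes of action” and applying maxmin over the interval coincide here, which is exactly why the procedure looks decision-theoretic rather than epistemic.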
Similarly, I don’t see why this wouldn’t be at least as plausible for decision theory:
Suppose you assign a probability of 0 to state s1 for a particular decision. Later, you are faced with a decision with a state s2 that your evidence says has a lower probability than s1 (even though we don’t know what their precise values are). In this context, you might want to un-zero s1 so as to compare the two states.
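The loss of that comparison can be made concrete: rounding to 0 is not invertible, so it discards the ordinal information a later decision may need. A small sketch, with hypothetical interval-valued credences:

```python
# A sketch of the un-zeroing point: rounding a state's probability down to 0
# is not invertible, so it destroys the ordinal information needed later to
# compare that state with an even less likely one. Intervals are hypothetical.

s1 = (1e-7, 1e-5)   # imprecise credence interval for state s1
s2 = (1e-9, 1e-8)   # the evidence says s2 is less likely than s1

def round_down(interval, threshold=1e-4):
    """Round the whole interval to 0 when even its upper bound is negligible."""
    lo, hi = interval
    return (0.0, 0.0) if hi < threshold else interval

# After rounding down, both states collapse to 0 and become incomparable:
print(round_down(s1) == round_down(s2))  # True: the distinction is gone

# With the original intervals retained ("un-zeroed"), the comparison the later
# decision needs is still available: s2's whole interval lies below s1's.
print(s2[1] < s1[0])  # True
```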
Thank you, Michael!

To your first point, that we have replaced arbitrariness over the threshold of probabilities with arbitrariness about how uncertain we must be before rounding down: I suppose I’m more inclined to accept that decisions about which metaprinciples to apply will be context-sensitive, vague, and unlikely to be capturable by any simple, idealized decision theory. A non-ideal agent deciding when to round down has to juggle lots of different factors: their epistemic limitations, asymmetries in evidence, costs of being right or wrong, past track records, etc. I doubt that there’s any decision theory that is both stateable and clear on this point. Even if there is a non-arbitrary threshold, I have trouble saying what that is. That is probably not a very satisfying response! I did enjoy Weatherson’s latest that touches on this point.
You suggest that the defenses of epistemic rounding down would also bolster decision-theoretic rounding down. It’s worth thinking about what a defense of ambiguity aversion would look like. Indeed, it might turn out to be the same as the epistemic defense given here. I don’t have a favorite formal model of ambiguity aversion, so I’m all ears if you do!
I suppose I’m more inclined to accept that decisions about which metaprinciples to apply will be context-sensitive, vague, and unlikely to be capturable by any simple, idealized decision theory. A non-ideal agent deciding when to round down has to juggle lots of different factors: their epistemic limitations, asymmetries in evidence, costs of being right or wrong, past track records, etc. I doubt that there’s any decision theory that is both stateable and clear on this point.
Couldn’t the decision theory just do exactly the same, and follow the same procedures? It could also just be context-sensitive, vague, and complex.
How do we draw the line between which parts are epistemic vs decision-theoretic here? Maybe it’s kind of arbitrary? Maybe they can’t be cleanly separated?
I’m inclined to say that when we’re considering the stakes to decide what credences to use, then that’s decision-theoretic, not epistemic, because it seems like motivated reasoning if epistemic. It just seems very wrong to me to say that an outcome is more likely just because it would be worse (or more important) if it happened. If instead under the epistemic approach, we’re not saying it’s actually more likely, it’s just something we shouldn’t round down in practical decision-making if morally significant enough, then why is this epistemic rather than decision-theoretic? This seems like a matter of deciding what to do with our credences, a decision procedure, and typically the domain of decision theory.
Maybe it’s harder to defend something on decision-theoretic grounds if it leads to Dutch books or money pumps? The procedure would lead to the same results regardless of which parts we call epistemic or decision-theoretic, but we could avoid blaming the decision theory for the apparent failures of instrumental rationality. But I’m also not sold on accepting such money-pump and Dutch-book arguments as proof of a failure of instrumental rationality at all.
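For what it’s worth, the standard exploitation worry about rounding down is easy to sketch: a bookie can split one non-negligible event into many sub-threshold pieces. A toy illustration, with a made-up threshold and numbers:

```python
# A toy version of the exploitation worry about rounding down: an agent who
# treats any probability below a threshold as exactly 0 will insure against
# none of many small risks, even when jointly they amount to a non-negligible
# one. Threshold and numbers are made up for illustration.

THRESHOLD = 1e-3

def willingness_to_pay(p, loss):
    """What the rounding-down agent pays to avoid a p-chance of losing `loss`."""
    p_used = 0.0 if p < THRESHOLD else p
    return p_used * loss

# One event with probability 0.2, split into 1000 disjoint pieces of 2e-4 each.
pieces = [2e-4] * 1000

# Each piece falls below the threshold, so the agent pays nothing for any of
# them, piece by piece...
print(sum(willingness_to_pay(p, loss=100) for p in pieces))  # 0.0

# ...yet the union has probability 0.2, so the expected loss left uncovered
# is substantial:
print(round(sum(pieces) * 100, 6))  # 20.0
```

Whether accepting that pattern of bets counts as genuine instrumental irrationality is, as above, a further question.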