(ETA: The parent comment contains several important misunderstandings of my views, so I figured I should clarify here. Hence my long comments — sorry about that.)
Thanks for this, Ryan! I’ll reply to your main points here, and clear up some less central yet important points in another comment.
Here’s what I think you’re saying (sorry the numbering clashes with the numbering in your comment, couldn’t figure out how to change this):
1. The best representations of our actual degrees of belief given our evidence, intuitions, etc. — what you call the “terminally correct” credences — should be precise.[1]
2. In practice, the strategy that maximizes EV w.r.t. our terminally correct credences won’t be “make decisions by actually writing down a precise distribution and trying to maximize EV w.r.t. that distribution”. This is because there are empirical features of our situation that hinder us from executing that strategy ideally.
3. I (Anthony) am mistakenly inferring from (2) that (1) is false.
   a. (In particular, any argument against (1) that relies on premises about the “empirical aspects of the current situation” must be making that mistake.)
Is that right? If so:
- I do disagree with (1), but for reasons that have nothing to do with (2). My case for imprecise credences is: “In our empirical situation, any particular precise credence [or expected value] we might pick would be highly arbitrary” (argued for in detail here; a toy sketch of what I mean follows this list). (So I’m also not just saying “you can have imprecise credences without getting money pumped”.)
- I’m not saying that “heuristics” based on imprecise credences “outperform” explicit EV max. I don’t think that principles for belief formation can bottom out in “performance”; they should instead bottom out in non-pragmatic principles — one of which is (roughly) “if our available information is so ambiguous that picking one precise credence over another seems arbitrary, our credences should be imprecise”.
- However, when we use non-pragmatic principles to derive our beliefs, the appropriate beliefs (not the principles themselves) can and should depend on empirical features of our situation that directly bear on our epistemic state: E.g., we face lots of considerations about the plausibility of a given hypothesis, and we seem to have too little evidence (+ too weak constraints from, e.g., indifference principles or Occam’s razor) to justify any particular precise weighing of these considerations.[2] Contra (3.a), I don’t see how/why the structure of our credences could/should be independent of very relevant empirical information like this.
- Intuition pump: Even an “ideal” precise Bayesian doesn’t actually terminally care about EV; they terminally care about the ex post value. But their empirical situation makes them uncertain what the ex post value of their action will be, so they represent their epistemic state with precise credences and derive their preferences over actions from EV. This doesn’t imply they’re conflating terminal goals with empirical facts about how best to achieve them.
- Separately, I haven’t yet seen convincing positive cases for (1). What are the “reasonably compelling arguments” for precise credences + EV maximization? And (if applicable to you) what are your replies to my counterarguments to the usual arguments here[3] (also here and here, though in fairness to you, those were buried in a comment thread)?
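To make the arbitrariness point a bit more concrete, here is a minimal toy sketch (all numbers are made up by me for illustration, nothing from the thread): a single precise credence yields a single EV, whereas a set of credences that all look equally defensible given the same ambiguous evidence yields a whole range of EVs, and committing to one precise value out of that range is exactly the arbitrary step I’m objecting to.

```python
# Toy sketch, not from the thread: all numbers here are made up for illustration.
# Compare a single precise credence in a hypothesis H with a set of credences
# ("imprecise") when evaluating an intervention whose payoff depends on H.

def expected_value(p_h, value_if_h, value_if_not_h):
    """EV of the intervention given a precise credence p_h in H."""
    return p_h * value_if_h + (1 - p_h) * value_if_not_h

VALUE_IF_H = 1_000.0   # large upside if H is true (toy units of impartial value)
VALUE_IF_NOT_H = -1.0  # small cost if H is false

# Precise Bayesian: one credence in, one EV out.
print("EV at a single precise credence:",
      expected_value(0.003, VALUE_IF_H, VALUE_IF_NOT_H))

# Imprecise credences: the same ambiguous evidence seems compatible with a
# whole range of credences, none of which we can single out non-arbitrarily.
credence_set = [0.0001, 0.001, 0.01, 0.05]
evs = [expected_value(p, VALUE_IF_H, VALUE_IF_NOT_H) for p in credence_set]
print("EVs across the credence set:", [round(ev, 3) for ev in evs])
print("EV ranges from", round(min(evs), 3), "to", round(max(evs), 3))

# The sign and magnitude of the EV swing wildly across the credence set, so
# picking one precise credence (and hence one precise EV) is the arbitrary
# step the argument objects to.
```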
[1] So in particular, I think you’re not saying the terminally correct credences for us are the credences that our computationally unbounded counterparts would have. If you are saying that, please let me know and I can reply to that — FWIW, as argued here, it’s not clear a computationally unbounded agent would be justified in precise credences either.
[2] This is true of pretty much any hypothesis we consider, not just hypotheses about especially distant stuff. This ~adds up to normality / doesn’t collapse into radical skepticism, because we have reasons to have varying degrees of imprecision in our credences, and our credences about mundane stuff will only have a small degree of imprecision (more here and here).
[3] Quote: “[L]et’s revisit why we care about EV in the first place. A common answer: “Coherence theorems! If you can’t be modeled as maximizing EU, you’re shooting yourself in the foot.” For our purposes, the biggest problem with this answer is: Suppose we act as if we maximize the expectation of some utility function. This doesn’t imply we make our decisions by following the procedure “use our impartial altruistic value function to (somehow) assign a number to each hypothesis, and maximize the expectation”.” (In that context, I was talking about assigning precise values to coarse-grained hypotheses, but the same applies to assigning precise credences to any hypothesis.)
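As a toy illustration of the representation-vs-procedure distinction in that quote (my own example, not from the post, with uncertainty stripped out for brevity): an agent can decide by a crude rule that never computes an expectation, and yet its choices can be rationalized after the fact as maximizing some utility assignment.

```python
# Toy sketch, not from the post: a decision rule that never assigns numbers to
# hypotheses can still be *represented* as maximizing some utility function,
# because we can read the utilities off its choices after the fact.
# (Uncertainty is stripped out here for brevity.)

def heuristic_choice(options):
    """Refuse anything flagged catastrophic, then take the cheapest remainder.
    No credences or expectations are computed anywhere."""
    safe = [o for o in options if not o["catastrophic"]]
    return min(safe, key=lambda o: o["cost"])

options = [
    {"name": "A", "catastrophic": False, "cost": 10},
    {"name": "B", "catastrophic": False, "cost": 3},
    {"name": "C", "catastrophic": True,  "cost": 1},
]

chosen = heuristic_choice(options)

# Post hoc representation: assign numbers that rank the options exactly as the
# heuristic does, worst first.
ranked_worst_to_best = sorted(
    options, key=lambda o: (o["catastrophic"], o["cost"]), reverse=True
)
utility = {o["name"]: rank for rank, o in enumerate(ranked_worst_to_best)}

as_if_maximizer = max(options, key=lambda o: utility[o["name"]])
assert as_if_maximizer == chosen  # same behaviour, very different procedure
print("heuristic picks:", chosen["name"], "| constructed utilities:", utility)
```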