I'm strongly in favor of allowing intuitive adjustments on top of quantitative modeling when estimating parameters.
We had a brief thread on this over on LW, but I'm still keen to hear why you endorse using precise probability distributions to represent these intuitive adjustments/estimates. I take many of titotal's critiques in this post to be symptoms of precise Bayesianism gone wrong (not to say titotal would agree with me on that).
ETA: Which, to be clear, is a question I have for EAs in general, not just you. :)
^ I'm also curious to hear from those who disagree-voted my comment why they disagree. This would be very helpful for my understanding of what people's cruxes for (im)precision are.
I think philosophically, the right ultimate objective (if you were sufficiently enlightened etc.) is something like actual EV maximization with precise Bayesianism (with the right decision theory, and possibly with "true terminal preference" deontological constraints rather than just instrumental deontological constraints). There isn't any philosophical reason which absolutely forces you to do EV maximization, in the same way that nothing forces you not to have a terminal preference for flailing on the floor, but I think there are reasonably compelling arguments that something like EV maximization is basically right. The fact that something doesn't necessarily get money pumped doesn't mean it is a good decision procedure; it's easy for something to avoid necessarily getting money pumped.
There is another question about whether it is a better strategy in practice to actually do precise Bayesianism, given that you agree with the prior bullet (as in, you agree that terminally you should do EV maximization with precise Bayesianism). I think this is a messy empirical question, but in the typical case, I do think it's useful to act on your best estimates (subject to instrumental deontological/integrity constraints, things like the unilateralist's curse, and handling decision theory reasonably). My understanding is that your proposed policy would be something like "represent an interval of credences and only take 'actions' if the action seems net good across your interval of credences". I think that following this policy in general would lead to lower expected value, so I don't do it. I do think that you should put weight on the unilateralist's curse and robustness, but I think the weight varies by domain and can be derived by properly incorporating model uncertainty into your estimates and being aware of downside risk. E.g., for actions which have high downside risk if they go wrong relative to the upside benefit, you'll end up being much less likely to take these actions due to various heuristics, incorporating model uncertainty, and deontology. (And I think these outperform intervals.)
A more basic point is that basically any interval which is supposed to include the plausible ranges of belief goes ~all the way from 0 to 1, which would naively be totally paralyzing, such that you'd take no actions and do the default. (Starving to death? It's unclear what the default should be, which makes this heuristic more confusing to apply.) E.g., are chicken welfare interventions good? My understanding is that you work around this by saying "we ignore considerations which are further down the crazy train (e.g. simulations, long-run future, etc.) or otherwise seem more 'speculative' until we're able to take literally any actions at all, and then proceed at that stop on the train". This seems extremely ad hoc, and I'm skeptical this is a good approach to decision making given that you accept the first bullet.
I'm worried that in practice you're conflating these bullets. Your post on precise Bayesianism seems to focus substantially on empirical aspects of the current situation (potential arguments for (2)), but in practice, my understanding is that you actually think the imprecision is terminally correct but partially motivated by observations of our empirical reality. But I don't think I care about motivating my terminal philosophy based on what we observe in this way!
(Edit: TBC, I get that you understand the distinction between these things, and your post discusses this distinction; I just think that you don't really make arguments against (1) except implying that other things are possible.)
My understanding is that your proposed policy would be something like "represent an interval of credences and only take 'actions' if the action seems net good across your interval of credences". … you'd take no actions and do the default. (Starving to death? It's unclear what the default should be, which makes this heuristic more confusing to apply.)
Definitely not saying this! I don't think that (w.r.t. consequentialism at least) there's any privileged distinction between "actions" and "inaction", nor do I think I've ever implied this. My claim is: For any A and B, if it's not the case that EV_p(A) > EV_p(B) for all p in the representor P,[1] and vice versa, then both A and B are permissible. This means that you have no reason to choose A over B or vice versa (again, w.r.t. consequentialism). Inaction isn't privileged, but neither is any particular action.
Now of course one needs to pick some act ("action" or otherwise) all things considered, but I explain my position on that here.
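To make that permissibility rule concrete, here is a minimal sketch in Python (the payoffs and representor are hypothetical, purely for illustration; the function names are my own): an action is ruled out only if some alternative has strictly higher EV under every credence function p in the representor.

```python
from typing import Dict, List, Tuple

def ev(p: float, if_true: float, if_false: float) -> float:
    """Expected value of an action under credence p in hypothesis H."""
    return p * if_true + (1 - p) * if_false

def beats(a: Tuple[float, float], b: Tuple[float, float],
          representor: List[float]) -> bool:
    """A beats B iff EV_p(A) > EV_p(B) for every p in the representor."""
    return all(ev(p, *a) > ev(p, *b) for p in representor)

def permissible(actions: Dict[str, Tuple[float, float]],
                representor: List[float]) -> List[str]:
    """An action is permissible iff no alternative beats it."""
    return [name for name, a in actions.items()
            if not any(beats(b, a, representor)
                       for other, b in actions.items() if other != name)]

# Hypothetical imprecise credence in H, spanning 0.2 to 0.7.
representor = [0.2, 0.45, 0.7]
actions = {
    "A": (10.0, -5.0),  # (payoff if H, payoff if not-H)
    "B": (2.0, 1.0),
    "C": (1.0, 0.0),    # worse than B under every p in the representor
}
print(permissible(actions, representor))  # ['A', 'B']
```

Note that neither A nor B beats the other (A has higher EV at p = 0.7, B at p = 0.2), so both remain permissible; only C is ruled out, since B has strictly higher EV at every p. Inaction would be just another row in the table, with no privileged status.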
properly incorporating model uncertainty into your estimates
What do you mean by "properly incorporating"? I think any answer here that doesn't admit indeterminacy/imprecision will be arbitrary, as argued in my unawareness sequence.
basically any interval which is supposed to include the plausible ranges of belief goes ~all the way from 0 to 1
Why do you think this? I argue here and here (see Q4 and links therein) why that need not be the case, especially when we're forming beliefs relevant to local-scale goals.
My understanding is that you work around this by saying "we ignore considerations which are further down the crazy train (e.g. simulations, long-run future, etc.) or otherwise seem more 'speculative' until we're able to take literally any actions at all, and then proceed at that stop on the train".
Also definitely not saying this. (I explicitly push back on such ad hoc ignoring of crazy-train considerations here.) My position is: (1) W.r.t. impartial consequentialism, we can't ignore any considerations. (2) But insofar as we're making decisions based on ~immediate self-interest, parochial concern for others near to us, and non-consequentialist reasons, crazy-train considerations aren't normatively relevant, so it's not ad hoc to ignore them in that case. See also this great comment by Max Daniel. (Regardless, none of this is a positive argument for "make up precise credences about crazy-train considerations and act on them".)
(ETA: The parent comment contains several important misunderstandings of my views, so I figured I should clarify here. Hence my long comments; sorry about that.)
Thanks for this, Ryan! I'll reply to your main points here, and clear up some less central yet important points in another comment.
Here's what I think you're saying (sorry, the numbering clashes with the numbering in your comment; I couldn't figure out how to change this):
The best representations of our actual degrees of belief given our evidence, intuitions, etc. (what you call the "terminally correct" credences) should be precise.[1]
In practice, the strategy that maximizes EV w.r.t. our terminally correct credences won't be "make decisions by actually writing down a precise distribution and trying to maximize EV w.r.t. that distribution". This is because there are empirical features of our situation that hinder us from executing that strategy ideally.
I (Anthony) am mistakenly inferring from (2) that (1) is false.
(In particular, any argument against (1) that relies on premises about the "empirical aspects of the current situation" must be making that mistake.)
Is that right? If so:
I do disagree with (1), but for reasons that have nothing to do with (2). My case for imprecise credences is: "In our empirical situation, any particular precise credence [or expected value] we might pick would be highly arbitrary" (argued for in detail here). (So I'm also not just saying "you can have imprecise credences without getting money pumped".)
I'm not saying that "heuristics" based on imprecise credences "outperform" explicit EV max. I don't think that principles for belief formation can bottom out in "performance"; they should instead bottom out in non-pragmatic principles, one of which is (roughly) "if our available information is so ambiguous that picking one precise credence over another seems arbitrary, our credences should be imprecise".
However, when we use non-pragmatic principles to derive our beliefs, the appropriate beliefs (not the principles themselves) can and should depend on empirical features of our situation that directly bear on our epistemic state: E.g., we face lots of considerations about the plausibility of a given hypothesis, and we seem to have too little evidence (+ too weak constraints from, e.g., indifference principles or Occam's razor) to justify any particular precise weighing of these considerations.[2] Contra (3.a), I don't see how/why the structure of our credences could/should be independent of very relevant empirical information like this.
Intuition pump: Even an "ideal" precise Bayesian doesn't actually terminally care about EV; they terminally care about the ex post value. But their empirical situation makes them uncertain what the ex post value of their action will be, so they represent their epistemic state with precise credences, and derive their preferences over actions from EV. This doesn't imply they're conflating terminal goals with empirical facts about how best to achieve them.
Separately, I haven't yet seen convincing positive cases for (1). What are the "reasonably compelling arguments" for precise credences + EV maximization? And (if applicable to you) what are your replies to my counterarguments to the usual arguments here[3] (also here and here, though in fairness to you, those were buried in a comment thread)?
So in particular, I think you're not saying the terminally correct credences for us are the credences that our computationally unbounded counterparts would have. If you are saying that, please let me know and I can reply to that. FWIW, as argued here, it's not clear a computationally unbounded agent would be justified in precise credences either.
This is true of pretty much any hypothesis we consider, not just hypotheses about especially distant stuff. This ~adds up to normality / doesn't collapse into radical skepticism, because we have reasons to have varying degrees of imprecision in our credences, and our credences about mundane stuff will only have a small degree of imprecision (more here and here).
Quote: "[L]et's revisit why we care about EV in the first place. A common answer: 'Coherence theorems! If you can't be modeled as maximizing EU, you're shooting yourself in the foot.' For our purposes, the biggest problem with this answer is: Suppose we act as if we maximize the expectation of some utility function. This doesn't imply we make our decisions by following the procedure 'use our impartial altruistic value function to (somehow) assign a number to each hypothesis, and maximize the expectation'." (In that context, I was talking about assigning precise values to coarse-grained hypotheses, but the same applies to assigning precise credences to any hypothesis.)
Technically this should be weakened to "weak inequality for all p + strict inequality for at least one p".