If you aren't (ever) using subjective probabilities to guide decisions, then what would you use instead and why? If you're sometimes using subjective probabilities, how do you decide when to and when not to, and why?
If that's what subjective probabilities fundamentally mean, then it doesn't seem necessarily absurd to assign zero probability to something that is conceivable. It at least doesn't violate any of Savage's axioms.
Unbounded utilities do violate Savage's axioms, though, I think because of St. Petersburg-like lotteries. Savage's axioms, because of completeness (you have to consider all functions from states to outcomes, so all lotteries), force your utility function and probability function to behave in certain ways even over lotteries you would assign 0 probability to ever encountering. But you can drop the completeness axiom and assume away St. Petersburg-like lotteries, too. See also Toulet's An Axiomatic Model of Unbounded Utility Functions (which I only just found and haven't read).
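To spell out the kind of lottery I mean (this is just the standard St. Petersburg construction, not anything from Toulet's paper): if utility is unbounded above, you can pick outcomes worth at least $2^n$ utility for each $n$, and the lottery that pays the $n$-th of them with probability $2^{-n}$ has infinite expected utility:

$$\sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty.$$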
I am comfortable using subjective probabilities to guide decisions, in the sense that I am happy with trying to assign to every possible event a real number between 0 and 1, which will describe how I will act when faced with gambles (I will maximize expected utility, if those numbers are interpreted as probabilities).
But the meaning of these numbers is that they describe my decision-making behaviour, not that they quantify a degree of belief. I am rejecting the use of subjective probabilities in that sense, removed from the context of decisions. I am rejecting the whole concept of a "degree of belief", or of event A being "more likely" than event B. Or at least, I am saying there is no meaning in those statements that goes beyond: "I will choose to receive a prize if event A happens, rather than if event B happens, if forced to choose".
And if that's all that probabilities mean, then it doesn't seem necessarily wrong to assign probability zero to something that is conceivable. I am simply describing how I will make decisions. In the Pascal mugger context: I would choose the chance of a prize in the event that I flip 1000 heads in a row on a fair coin, over the chance of a prize in the event that the mugger is correct.
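To put a number on the coin-flip side of that comparison (just arithmetic, for concreteness):

$$\left(\tfrac{1}{2}\right)^{1000} \approx 9.3 \times 10^{-302},$$

so I am saying that, for the purposes of my decisions, the mugger being correct ranks below even that.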
That's still a potentially counter-intuitive conclusion to end up at, but it's a bullet I'm comfortable biting. And I feel much happier doing this than I do if you define subjective probabilities in terms of degrees of belief. I think that language obscures what subjective probabilities fundamentally are, and it previously made me needlessly worried that, by assigning extreme probabilities, I was making some kind of grave epistemological error. In fact I'm just describing my decisions.
Savage's axioms don't rule out unbounded expected utility, I don't think. This is from Savage's book, "The Foundations of Statistics", Chapter 5 ("Utility"), the section "The extension of utility to more general acts":
"If the utility of consequences is unbounded, say from above, then, even in the presence of P1-7, acts (though not gambles) of infinite utility can easily be constructed. My personal feeling is that, theological questions aside, there are no acts of infinite or minus infinite utility, and that one might reasonably so postulate, which would amount to assuming utility to be bounded."
The distinction between "acts" and "gambles" is, I think, just that gambles are acts with a finite number of possible consequences (which obviously stops you constructing infinite expected value), but the postulates themselves don't rule out infinite-utility acts.
I'm obviously disagreeing with Savage's final remark in this post. I'm saying that you could also shift the "no acts of infinite or minus infinite utility" constraint away from the utility function, and onto the probabilities themselves.
I think this either doesn't really answer my question or is circular. I don't think that you decide how to act based on probabilities that come from how you decide to act, but that seems to be what you're saying if I interpret your response as an answer to my question. It might also justify any course of action, possibly even if you fix the utility function (I think the subjective probabilities would need to depend on things in very weird ways, though). I think you still want to be able to justify specific acts, and I want to know how you'll do this.
Maybe we can make this more explicit with an example. How do you decide which causes to prioritize? Or pick an intervention: how would you decide whether it is net positive or net negative? And do so without assigning probabilities as degrees of belief. How else are you going to come up with those probabilities? Or are you giving up probabilities as part of your procedure?
On Savage's axioms, if your state space is infinite and your utility function is unbounded, then completeness requires the axioms to hold over acts that would have infinite expected utility, even if none is ever accessible to you in practice, and I think that would violate other axioms (the sure-thing principle; if not Savage's version, one that would be similarly irrational to violate; see https://onlinelibrary.wiley.com/doi/full/10.1111/phpr.12704 ). If your state space is finite and no outcome has infinite actual utility, then that seems to work, but I'm not sure you'd want to commit to a finite state space.
I still don't think the position I'm trying to defend is circular. I'll have a go at explaining why.
I'll start by answering your question: in practice, the way I would come up with probabilities to assess a charitable intervention is the same as the way you probably would. I'd look at the available evidence and update my priors in a way that at least tries to approximate the principle expressed in Bayes' theorem. Savage's axioms imply that my decision-describing numbers between 0 and 1 have to obey the usual laws of probability theory, and that includes Bayes' theorem. If there is any difference between our positions, it will only be in how we should pick our priors. You pick those before you look at any evidence at all. How should you do that?
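To make the mechanics concrete, here is a minimal sketch of the kind of update I mean, with entirely made-up numbers for a hypothetical intervention:

```python
# A hypothetical Bayesian update for a charitable intervention.
# Every number here is made up purely to illustrate the mechanics.

prior_works = 0.3            # my prior that the intervention is effective
p_evidence_if_works = 0.8    # chance of seeing this evidence if it works
p_evidence_if_not = 0.2      # chance of seeing this evidence if it doesn't

# Bayes' theorem: P(works | evidence)
#   = P(evidence | works) * P(works) / P(evidence)
p_evidence = (p_evidence_if_works * prior_works
              + p_evidence_if_not * (1 - prior_works))
posterior_works = p_evidence_if_works * prior_works / p_evidence

print(round(posterior_works, 3))  # about 0.632 with these made-up numbers
```

The updating step is uncontroversial between us; the question is only where the prior (0.3 in this sketch) comes from.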
Savage's axioms don't tell you how to pick your priors. But actually I don't know of any other principle that does either. If you're trying to quantify "degrees of belief" in an abstract sense, I think you're sort of doomed (this is the problem of induction). My question for you is: how do you do that?
But we do have to make decisions. I want my decisions to be constrained by certain rational-sounding axioms (like the sure-thing principle), but I don't think I want to place many more constraints on myself than that. Even those fairly weak constraints turn out to imply that there are some numbers, which you can call subjective probabilities, that I need to start out with as priors over states of the world, and which I will then update in the usual Bayesian way. But there is very little constraint on how I pick those numbers. They have to obey the laws of probability theory, but that's quite a weak constraint. It doesn't by itself imply that I have to assign non-zero probability to things which are conceivable (e.g. if you pick a real number at random from the uniform distribution between 0 and 1, every possible outcome has probability 0).
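Spelling out that parenthetical example: if $X \sim \mathrm{Uniform}(0,1)$, then for any particular value $x$,

$$P(X = x) \;\le\; P(x - \varepsilon < X \le x) \;\le\; \varepsilon \quad \text{for every } \varepsilon > 0,$$

so $P(X = x) = 0$, even though every $x$ in $[0,1]$ is a perfectly conceivable outcome.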
So this is the way I'm thinking about the whole problem of forming beliefs and making decisions. I'm asking the question:
"I want to make decisions in a way that is consistent with certain rational-seeming properties; what does that mean I must do, and what, if anything, is left unconstrained?"
I think I must make decisions in a Bayesian-expected-utility-maximising sort of way, but I don't think that I have to assign a non-zero probability to every conceivable event. In fact, if I make one of my desired properties be that I'm not susceptible to infinity-threatening Pascal muggers, then I shouldn't assign non-zero probability to situations that would allow me to influence infinite utility.
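Spelling that last step out, on this way of setting things up: under expected-utility maximisation, any probability $p > 0$ on an outcome of infinite utility contributes

$$p \cdot \infty = \infty$$

to the expected utility of giving in to the mugger, which then swamps every finite consideration; only $p = 0$ (with the convention $0 \cdot \infty = 0$) leaves such threats with no grip on my decisions.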
I don't think there is anything circular here.
Ok, this makes more sense to me.
FWIW, I think most of us go with our guts to assign probabilities most of the time, rather than formally picking priors and likelihoods and updating on the evidence. I tend to use ranges of probabilities and do sensitivity analysis instead of committing to precise probabilities, because precise probabilities also seem epistemically unjustified to me. I use reference classes sometimes.
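A minimal sketch of the kind of sensitivity analysis I mean, with all numbers made up: instead of committing to one precise probability, sweep a range and check whether the sign of the expected value flips.

```python
# A hypothetical sensitivity analysis: sweep a range of probabilities
# instead of committing to one precise number. All values are made up.
import numpy as np

value_if_success = 100.0   # made-up benefit if the intervention works
value_if_failure = -10.0   # made-up cost if it fails

for p in np.linspace(0.05, 0.50, 10):
    expected_value = p * value_if_success + (1 - p) * value_if_failure
    print(f"p = {p:.2f}  ->  EV = {expected_value:+.1f}")

# If the sign of EV is stable across the whole range I find plausible,
# the decision doesn't hinge on pinning down a precise probability.
```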