If you aren’t (ever) using subjective probabilities to guide decisions, then what would you use instead and why? If you’re sometimes using subjective probabilities, how do you decide when to and when not to, and why?
If that’s what subjective probabilities fundamentally mean, then it doesn’t seem necessarily absurd to assign zero probability to something that is conceivable. It at least doesn’t violate any of Savage’s axioms.
Unbounded utilities do violate Savage’s axioms, though, I think because of St. Petersburg-like lotteries. Savage’s axioms, because of completeness (you have to consider all functions from states to outcomes, so all lotteries), force your utility function and probability function to behave in certain ways even over lotteries you would assign probability 0 to ever encountering. But you can drop the completeness axiom and assume away St. Petersburg-like lotteries, too. See also Toulet’s ‘An Axiomatic Model of Unbounded Utility Functions’ (which I only just found and haven’t read).
I am comfortable using subjective probabilities to guide decisions, in the sense that I am happy with trying to assign to every possible event a real number between 0 and 1, which will describe how I will act when faced with gambles (I will maximize expected utility, if those numbers are interpreted as probabilities).
But the meaning of these numbers is that they describe my decision-making behaviour, not that they quantify a degree of belief. I am rejecting the use of subjective probabilities in that context, if it is removed from the context of decisions. I am rejecting the whole concept of a ‘degree of belief’, or of event A being ‘more likely’ than event B. Or at least, I am saying there is no meaning in those statements that goes beyond the meaning: ‘I will choose to receive a prize if event A happens, rather than if event B happens, if forced to choose’.
And if that’s all that probabilities mean, then it doesn’t seem necessarily wrong to assign probability zero to something that is conceivable. I am simply describing how I will make decisions. In the Pascal mugger context: I would choose the chance of a prize in the event that I flip 1000 heads in a row on a fair coin, over the chance of a prize in the event that the mugger is correct.
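To make that concrete, here’s a toy sketch (Python; all the numbers are illustrative, not a claim about anyone’s actual utilities): the 1000-heads event gets a tiny but strictly positive probability, the mugger’s scenario gets exactly 0, and expected utility then prefers the coin gamble no matter how astronomical the mugger’s promised payoff.

```python
from fractions import Fraction

# Exact probability of 1000 heads in a row on a fair coin: tiny but positive.
p_heads = Fraction(1, 2) ** 1000

# Under the position described above, the mugger's scenario is assigned
# probability exactly 0 (a decision-describing number, not a degree of belief).
p_mugger = Fraction(0)

prize = 1  # utility of the prize; the scale doesn't matter here

# However large the mugger's promised payoff (10^100 is an arbitrary
# 'astronomical' stand-in), expected utility prefers the coin gamble.
mugger_payoff = 10 ** 100
assert p_heads * prize > p_mugger * mugger_payoff

print(float(p_heads))  # ~9.33e-302: minuscule, but positive, unlike the mugger's 0
```

The point of using `Fraction` is just that the arithmetic is exact: `p_heads` really is positive, whereas the mugger’s scenario is assigned exactly zero, so no payoff can flip the comparison.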
That’s still a potentially counter-intuitive conclusion to end up at, but it’s a bullet I’m comfortable biting. And I feel much happier doing this than I do if you define subjective probabilities in terms of degrees of belief. I believe this language obscures what subjective probabilities fundamentally are, and this, previously, made me needlessly worried that by assigning extreme probabilities, I was making some kind of grave epistemological error. In fact I’m just describing my decisions.
Savage’s axioms don’t rule out unbounded expected utility, I don’t think. This is from Savage’s book, ‘The Foundations of Statistics’, Chapter 5, ‘Utility’, ‘The extension of utility to more general acts’:
“If the utility of consequences is unbounded, say from above, then, even in the presence of P1-7, acts (though not gambles) of infinite utility can easily be constructed. My personal feeling is that, theological questions aside, there are no acts of infinite or minus infinite utility, and that one might reasonably so postulate, which would amount to assuming utility to be bounded.”
The distinction between ‘acts’ and ‘gambles’ is, I think, just that gambles are acts with a finite number of possible consequences (which obviously stops you constructing infinite expected value), but the postulates themselves don’t rule out infinite-utility acts.
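The act/gamble distinction can be made concrete with the St. Petersburg lottery (a quick Python sketch, using exact rational arithmetic): every finite truncation is a ‘gamble’ with finite expected value, but the partial sums grow without bound, so the full act, with its countably many consequences, has infinite expected utility.

```python
from fractions import Fraction

def st_petersburg_ev(n_rounds):
    """Expected value of the St. Petersburg lottery truncated after n_rounds.

    Round k (first heads on flip k) has probability 2^-k and pays 2^k,
    so each round contributes exactly 1 to the expected value.
    """
    return sum(Fraction(1, 2 ** k) * 2 ** k for k in range(1, n_rounds + 1))

# The truncated expected value is exactly n, so it diverges as n grows:
# a finite 'gamble' is always finite, but the untruncated act is not.
print(st_petersburg_ev(10))   # 10
print(st_petersburg_ev(100))  # 100
```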
I’m obviously disagreeing with Savage’s final remark in this post. I’m saying that you could also shift the ‘no acts of infinite or minus infinite utility’ constraint away from the utility function, and onto the probabilities themselves.
I think this doesn’t really answer my question or is circular. I don’t think that you decide how to act based on probabilities that come from how you decide to act, but that seems to be what you’re saying if I interpret your response as an answer to my question. It might also justify any course of action, possibly even if you fix the utility function (I think the subjective probabilities would need to depend on things in very weird ways, though). I think you still want to be able to justify specific acts, and I want to know how you’ll do this.
Maybe we can make this more explicit with an example. How do you decide which causes to prioritize? Or, pick an intervention, and how would you decide whether it is net positive or net negative? And do so without assigning probabilities as degrees of belief. How else are you going to come up with those probabilities? Or are you giving up probabilities as part of your procedure?
On Savage’s axioms, if your state space is infinite and your utility function is unbounded, then completeness requires the axioms to hold over acts that would have infinite expected utility, even if none is ever accessible to you in practice, and I think that would violate other axioms (the sure thing principle; if not Savage’s version, one that would be similarly irrational to violate; see https://onlinelibrary.wiley.com/doi/full/10.1111/phpr.12704 ). If your state space is finite and no outcome has infinite actual utility, then that seems to work, but I’m not sure you’d want to commit to a finite state space.
I still don’t think the position I’m trying to defend is circular. I’ll have a go at explaining why.
I’ll start by answering your question: in practice the way I would come up with probabilities to assess a charitable intervention is the same as the way you probably would. I’d look at the available evidence and update my priors in a way that at least tries to approximate the principle expressed in Bayes’ theorem. Savage’s axioms imply that my decision-describing-numbers between 0 and 1 have to obey the usual laws of probability theory, and that includes Bayes’ theorem. If there is any difference between our positions, it will only be in how we should pick our priors. You pick those before you look at any evidence at all. How should you do that?
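As a minimal sketch of that shared updating procedure (Python, with made-up numbers for a hypothetical intervention; nothing here depends on whether the prior is read as a degree of belief or a decision-describing number):

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' theorem, for a binary hypothesis H."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / marginal

# Illustrative numbers only: prior that an intervention is net positive,
# updated on a favourable study that is more likely if the hypothesis is true.
posterior = bayes_update(prior=0.5,
                         p_evidence_given_h=0.8,
                         p_evidence_given_not_h=0.2)
print(posterior)  # 0.8
```

Whatever the prior is, the update rule is the same; the disagreement above is only about where the prior comes from.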
Savage’s axioms don’t tell you how to pick your priors. But actually I don’t know of any other principle that does either. If you’re trying to quantify ‘degrees of belief’ in an abstract sense, I think you’re sort of doomed (this is the problem of induction). My question for you is, how do you do that?
But we do have to make decisions. I want my decisions to be constrained by certain rational-sounding axioms (like the sure thing principle), but I don’t think I want to place many more constraints on myself than that. Even those fairly weak constraints turn out to imply that there are some numbers, which you can call subjective probabilities, that I need to start out with as priors over states of the world, and which I will then update in the usual Bayesian way. But there is very little constraint in how I pick those numbers. They have to obey the laws of probability theory, but that’s quite a weak constraint. It doesn’t by itself imply that I have to assign non-zero probability to things which are conceivable (e.g. if you pick a real number at random from the uniform distribution between 0 and 1, every possible outcome has probability 0).
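The uniform-distribution point can be spelled out in a few lines (Python; the helper `uniform_prob` is just for illustration): intervals get probability equal to their length, so every single point, though a perfectly possible outcome, gets probability exactly 0, and the laws of probability are still satisfied.

```python
from fractions import Fraction

def uniform_prob(a, b):
    """P(a <= X <= b) for X uniform on [0, 1]: just the interval's length."""
    lo, hi = max(Fraction(0), a), min(Fraction(1), b)
    return max(Fraction(0), hi - lo)

# A single point is a conceivable outcome, but has probability exactly 0...
assert uniform_prob(Fraction(1, 2), Fraction(1, 2)) == 0
# ...while the probabilities still obey the usual laws, e.g. additivity:
assert (uniform_prob(Fraction(0), Fraction(1, 2))
        + uniform_prob(Fraction(1, 2), Fraction(1))) == 1
```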
So this is the way I’m thinking about the whole problem of forming beliefs and making decisions. I’m asking the question:
“I want to make decisions in a way that is consistent with certain rational-seeming properties: what does that mean I must do, and what, if anything, is left unconstrained?”
I think I must make decisions in a Bayesian-expected-utility-maximising sort of way, but I don’t think that I have to assign a non-zero probability to every conceivable event. In fact, if I make one of my desired properties be that I’m not susceptible to infinity-threatening Pascal muggers, then I shouldn’t assign non-zero probability to situations that would allow me to influence infinite utility.
I don’t think there is anything circular here.
Ok, this makes more sense to me.
FWIW, I think most of us go with our guts to assign probabilities most of the time, rather than formally picking priors, likelihoods and updating based on evidence. I tend to use ranges of probabilities and do sensitivity analysis instead of committing to precise probabilities, because precise probabilities also seem epistemically unjustified to me. I use reference classes sometimes.