I think this doesn't really answer my question or is circular. I don't think that you decide how to act based on probabilities that come from how you decide to act, but that seems to be what you're saying if I interpret your response as an answer to my question. It might also justify any course of action, possibly even if you fix the utility function (I think the subjective probabilities would need to depend on things in very weird ways, though). I think you still want to be able to justify specific acts, and I want to know how you'll do this.
Maybe we can make this more explicit with an example. How do you decide which causes to prioritize? Or, pick an intervention: how would you decide whether it is net positive or net negative, without assigning probabilities as degrees of belief? How else are you going to come up with those probabilities? Or are you giving up probabilities as part of your procedure?
On Savage's axioms, if your state space is infinite and your utility function is unbounded, then completeness requires the axioms to hold over acts that would have infinite expected utility, even if none is ever accessible to you in practice, and I think that would violate other axioms (the sure-thing principle; if not Savage's version, one that would be similarly irrational to violate; see https://onlinelibrary.wiley.com/doi/full/10.1111/phpr.12704). If your state space is finite and no outcome has infinite actual utility, then that seems to work, but I'm not sure you'd want to commit to a finite state space.
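To make the infinite-expected-utility point concrete, here is a sketch of the standard St. Petersburg-style construction I have in mind (it assumes the subjective probability is nonatomic, which I believe Savage's axioms deliver, so events of probability 2^{-n} can always be found):

```latex
% Sketch, not a full proof: with unbounded utility u, pick outcomes x_n with
% u(x_n) >= 2^n, and a partition of the state space into events E_n with
% P(E_n) = 2^{-n}. Define the act f by f(s) = x_n for s in E_n. Then
\[
  \mathbb{E}[u(f)] \;=\; \sum_{n=1}^{\infty} P(E_n)\, u(x_n)
  \;\ge\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n}
  \;=\; \infty,
\]
% so completeness forces preferences over acts like f even if no such act is
% ever actually available to you, which is where the tension with the
% sure-thing principle (as in the linked paper) is supposed to arise.
```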
I still don't think the position I'm trying to defend is circular. I'll have a go at explaining why.
I'll start with answering your question: in practice, the way I would come up with probabilities to assess a charitable intervention is the same as the way you probably would. I'd look at the available evidence and update my priors in a way that at least tries to approximate the principle expressed in Bayes' theorem. Savage's axioms imply that my decision-describing numbers between 0 and 1 have to obey the usual laws of probability theory, and that includes Bayes' theorem. If there is any difference between our positions, it will only be in how we should pick our priors. You pick those before you look at any evidence at all. How should you do that?
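As an illustration only (the numbers are hypothetical placeholders, not estimates for any real intervention), here is a minimal sketch of the kind of update I mean:

```python
# Minimal sketch of a Bayesian update on "this intervention is net positive".
# All numbers are hypothetical, chosen only to illustrate the calculation.

prior = 0.5                      # prior probability the intervention is net positive
p_evidence_if_positive = 0.8     # chance of seeing this study result if it is net positive
p_evidence_if_negative = 0.3     # chance of seeing the same result if it is not

# Bayes' theorem: P(positive | evidence)
#   = P(evidence | positive) * P(positive) / P(evidence)
p_evidence = (p_evidence_if_positive * prior
              + p_evidence_if_negative * (1 - prior))
posterior = p_evidence_if_positive * prior / p_evidence

print(f"posterior probability of net positive: {posterior:.3f}")  # ~0.727
```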
Savage's axioms don't tell you how to pick your priors. But actually I don't know of any other principle that does either. If you're trying to quantify "degrees of belief" in an abstract sense, I think you're sort of doomed (this is the problem of induction). My question for you is, how do you do that?
But we do have to make decisions. I want my decisions to be constrained by certain rational-sounding axioms (like the sure-thing principle), but I don't think I want to place many more constraints on myself than that. Even those fairly weak constraints turn out to imply that there are some numbers, which you can call subjective probabilities, that I need to start out with as priors over states of the world, and which I will then update in the usual Bayesian way. But there is very little constraint on how I pick those numbers. They have to obey the laws of probability theory, but that's quite a weak constraint. It doesn't by itself imply that I have to assign non-zero probability to things which are conceivable (e.g. if you pick a real number at random from the uniform distribution between 0 and 1, every possible outcome has probability 0).
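A quick numerical gloss on that parenthetical example (a sketch; the particular interval endpoints are arbitrary):

```python
# For X uniform on [0, 1], probabilities come from lengths of intervals,
# so any single point has probability 0 even though it is a possible outcome.

def uniform_prob(a: float, b: float) -> float:
    """P(a <= X <= b) for X ~ Uniform(0, 1): the length of [a, b] clipped to [0, 1]."""
    lo, hi = max(a, 0.0), min(b, 1.0)
    return max(hi - lo, 0.0)

print(uniform_prob(0.4, 0.6))  # 0.2 -- an interval gets positive probability
print(uniform_prob(0.5, 0.5))  # 0.0 -- a single point is conceivable but has probability 0
```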
So this is the way I'm thinking about the whole problem of forming beliefs and making decisions. I'm asking the question:
"I want to make decisions in a way that is consistent with certain rational-seeming properties; what does that mean I must do, and what, if anything, is left unconstrained?"
I think I must make decisions in a Bayesian-expected-utility-maximising sort of way, but I don't think that I have to assign a non-zero probability to every conceivable event. In fact, if I make one of my desired properties be that I'm not susceptible to infinity-threatening Pascal muggers, then I shouldn't assign non-zero probability to situations that would allow me to influence infinite utility.
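To illustrate what that buys me, here is a toy sketch (all payoffs and probabilities are made up; the helper simply skips probability-zero states, since they cannot contribute to an expectation):

```python
import math

# Toy sketch: expected utility when the Pascal's-mugger state gets prior probability 0.
# States are (probability, utility) pairs; all numbers are hypothetical.

def expected_utility(states):
    """Sum p * u over states, skipping probability-0 states
    (they contribute nothing to the expectation, even if u is huge or infinite)."""
    return sum(p * u for p, u in states if p > 0)

hand_over_wallet = [
    (0.0, math.inf),   # mugger's promised infinite reward: prior 0, so it never dominates
    (1.0, -100.0),     # losing the wallet
]
refuse = [
    (1.0, 0.0),        # nothing happens
]

print(expected_utility(hand_over_wallet))  # -100.0
print(expected_utility(refuse))            # 0.0  -> refusing is preferred
```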
I don't think there is anything circular here.
Ok, this makes more sense to me.
FWIW, I think most of us go with our guts to assign probabilities most of the time, rather than formally picking priors and likelihoods and updating based on evidence. I tend to use ranges of probabilities and do sensitivity analysis instead of committing to precise probabilities, because precise probabilities also seem epistemically unjustified to me. I use reference classes sometimes.
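For concreteness, the kind of sensitivity analysis I mean looks roughly like this (a sketch with made-up numbers, not an estimate of any real intervention):

```python
# Rough sketch of a sensitivity analysis over a range of probabilities
# instead of a single precise probability. All numbers are hypothetical.

value_if_success = 1000.0   # benefit if the intervention works
cost = 120.0                # cost either way

# A range of plausible success probabilities rather than one precise number.
for p in [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]:
    net_ev = p * value_if_success - cost
    verdict = "net positive" if net_ev > 0 else "net negative"
    print(f"p = {p:.2f}: expected value = {net_ev:7.1f} ({verdict})")

# The verdict flips within the range, so the decision is sensitive to
# where in [0.05, 0.30] the true probability actually lies.
```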