I still don't think the position I'm trying to defend is circular. I'll have a go at explaining why.
I'll start by answering your question: in practice, the way I would come up with probabilities to assess a charitable intervention is the same as the way you probably would. I'd look at the available evidence and update my priors in a way that at least tries to approximate the principle expressed in Bayes' theorem. Savage's axioms imply that my decision-describing numbers between 0 and 1 have to obey the usual laws of probability theory, and that includes Bayes' theorem. If there is any difference between our positions, it will only be in how we should pick our priors. You pick those before you look at any evidence at all. How should you do that?
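To make that concrete, here is roughly what a single update of that kind looks like, with entirely made-up numbers (a hypothetical prior that the intervention works, and a piece of evidence that is more likely if it does):

```python
# Illustrative sketch only: one Bayesian update for a binary hypothesis H
# ("the intervention works"), using placeholder numbers.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Prior of 0.5 that it works; the evidence is 0.8 likely if it works
# and 0.3 likely if it doesn't.
print(posterior(0.5, 0.8, 0.3))  # ~0.73
```

None of the particular numbers matter; the point is just that the update step itself is uncontroversial between us.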
Savage's axioms don't tell you how to pick your priors. But actually I don't know of any other principle that does either. If you're trying to quantify "degrees of belief" in an abstract sense, I think you're sort of doomed (this is the problem of induction). My question for you is: how do you do that?
But we do have to make decisions. I want my decisions to be constrained by certain rational-sounding axioms (like the sure-thing principle), but I don't think I want to place many more constraints on myself than that. Even those fairly weak constraints turn out to imply that there are some numbers, which you can call subjective probabilities, that I need to start out with as priors over states of the world, and which I will then update in the usual Bayesian way. But there is very little constraint on how I pick those numbers. They have to obey the laws of probability theory, but that's quite a weak constraint. It doesn't by itself imply that I have to assign non-zero probability to everything that is conceivable (e.g. if you pick a real number at random from the uniform distribution between 0 and 1, every possible outcome has probability 0).
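Spelling that example out: if X is drawn uniformly from [0, 1], then for any particular value x,

$$\Pr(X = x) = \int_x^x 1 \, dt = 0,$$

even though x is a perfectly conceivable outcome and some value in [0, 1] is certain to occur.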
So this is the way I'm thinking about the whole problem of forming beliefs and making decisions. I'm asking the question:
"I want to make decisions in a way that is consistent with certain rational-seeming properties. What does that mean I must do, and what, if anything, is left unconstrained?"
I think I must make decisions in a Bayesian-expected-utility-maximising sort of way, but I don't think that I have to assign a non-zero probability to every conceivable event. In fact, if I make one of my desired properties be that I'm not susceptible to infinity-threatening Pascal's muggers, then I shouldn't assign non-zero probability to situations that would allow me to influence infinite utility.
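The reasoning behind that last step, roughly: if I assign any probability p > 0 to the mugger's offer of infinite utility being genuine, then for any finite cost c of paying up,

$$\mathbb{E}[U(\text{pay})] = p \cdot \infty + (1 - p)(-c) = \infty,$$

so expected-utility maximisation tells me to pay no matter how tiny p is. Within this framework, the only way to block that conclusion is p = 0.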
I don't think there is anything circular here.
Ok, this makes more sense to me.
FWIW, I think most of us go with our guts to assign probabilities most of the time, rather than formally picking priors and likelihoods and updating on evidence. I tend to use ranges of probabilities and do sensitivity analysis instead of committing to precise probabilities, because precise probabilities also seem epistemically unjustified to me. I sometimes use reference classes.
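As a minimal sketch of what I mean by that (all numbers invented for illustration): rather than settling on one probability that an intervention succeeds, sweep a range and see whether the conclusion changes.

```python
# Hypothetical intervention: sweep the probability of success and see how
# the expected net value (and therefore the decision) changes.
cost = 10_000              # made-up cost of funding the intervention
value_if_success = 50_000  # made-up value if it succeeds

for p in (0.05, 0.10, 0.20, 0.40):
    expected_net = p * value_if_success - cost
    print(f"P(success) = {p:.2f} -> expected net value = {expected_net:+,.0f}")
```

Here the sign of the expected value flips around p = 0.2, so the interesting question becomes whether you can defend being above or below that threshold, not what the exact probability is.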