I mentioned this deeper in this thread, but I think precise probabilities are epistemically unjustifiable. Why not 1% higher or 1% lower? If you can’t answer that question, then you’re kind of pulling numbers out of your ass. In general, at some point, you have to make a 100% commitment to a given model (even a complex one with submodels) to have sharp probabilities, and then there’s a burden of proof to justify exactly that model.
E.g. if you have X% credence in a theory that says the probability is 30% and Y% credence in a theory that says it's 50%, then your overall probability is just the weighted sum.
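To make that concrete, here's a minimal sketch of the weighted sum; the 0.6/0.4 credences stand in for X% and Y%, and all the numbers are made up purely for illustration.

```python
# Toy model averaging: the overall probability is the credence-weighted sum of
# what each theory says. All numbers here are invented for illustration.
credences = {"theory_A": 0.6, "theory_B": 0.4}            # X% and Y%; must sum to 1
prob_given_theory = {"theory_A": 0.30, "theory_B": 0.50}  # what each theory says

overall = sum(credences[t] * prob_given_theory[t] for t in credences)
print(overall)  # 0.6 * 0.30 + 0.4 * 0.50 = 0.38
```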
Then you have to justify X% and Y% exactly, which seems impossible; you need to go further up the chain until you hit an unjustified commitment, or until you hit a universal prior, and there are actually multiple possible universal priors and no way to justify the choice of one specific one. If you try all universal priors from a justified set of them, you’ll get ranges of probabilities.
(This isn’t based on my own reading of the literature; I’m not that familiar with it, so maybe this is wrong.)
Wait, what do you think probabilities are, if you're not talking, ultimately, about numbers out of your ass?
I do think everything eventually starts from your ass. But often you make some assumptions, collect evidence (and iterate between those first two), and then apply a model, so the numbers don't come directly from your ass.
If I said that the probability of human extinction in the next 10 seconds was 50% based on a uniform prior, you would have a sense that this is worse than a number you could come up with based on assumptions and observations, and it feels like it came more directly from the ass. (And it would be extremely suspicious, since you could ask the same for 5 seconds, 20 seconds, and a million years. Why did 10 seconds get the uniform prior?)
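One way to cash out why it's suspicious: if the same indifference move handed 50% to every window length, the implied numbers contradict each other. A minimal sketch, where the only input is that 50/50 rule:

```python
# If a "uniform prior" assigned P(extinction within the next t seconds) = 0.5 for
# every t, the implied probabilities clash: extinction strictly between 5 and 10
# seconds from now would get probability 0.5 - 0.5 = 0, while the same 50/50
# indifference move applied to that question directly would say 0.5.
p_within = lambda t_seconds: 0.5  # the indifference rule, applied to any window

p_between_5_and_10 = p_within(10) - p_within(5)
print(p_between_5_and_10)  # 0.0, which can't also be 0.5
```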
I’d rather my choices of action be, in some sense, robust to the assumptions (and priors, e.g. the reference class problem) that I feel are most unjustified, e.g. via a sensitivity analysis, since I’m often not willing to commit to putting a prior over those assumptions, precisely because that’s way too arbitrary and unjustified. I might be willing to put ranges on the probabilities. I’m not sure there’s been a satisfactory formal characterization of robustness, though. (This is basically cluster thinking.)
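Here's a minimal sketch of the kind of sensitivity analysis I have in mind, assuming a toy Bayesian update; the competing priors, the likelihood ratio, and the threshold are all invented for illustration.

```python
# Instead of committing to one prior, recompute the posterior under each competing
# prior/reference class and report the range. All numbers are invented.
competing_priors = {
    "optimistic reference class": 0.02,
    "pessimistic reference class": 0.20,
    "agnostic-ish prior": 0.10,
}

def posterior(prior, likelihood_ratio=3.0):
    """Toy Bayesian update in odds form, with a made-up likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

posteriors = {name: posterior(p) for name, p in competing_priors.items()}
low, high = min(posteriors.values()), max(posteriors.values())
print(f"range of posteriors: [{low:.2f}, {high:.2f}]")  # roughly [0.06, 0.43]

# Cluster-thinking-flavoured decision rule: only lean on a conclusion if it holds
# across the whole range, not just under one favoured prior.
threshold = 0.05  # invented for illustration
print("robust to the choice of prior:", all(p > threshold for p in posteriors.values()))
```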
Each time you make an assumption, you’re pulling something out of your ass, but if you check competing assumptions, that’s less arbitrary to me.