Which distribution would you use? Why the particular weights you’ve chosen and not slightly different ones?
I think you just have to make your distribution uninformative enough that reasonable differences in the weights don't change your overall conclusion. If they do, then I would concede that, on your specific question, you really are clueless. Otherwise, you can probably find a response.
Come up with a probability distribution for the fraction of heads over 1,000,000 flips.
Rather than thinking directly of an appropriate distribution for the 1,000,000 flips, I'd think of a distribution to model p itself. Then you can run simulations based on the distribution of p to calculate the distribution of the fraction of heads over 1,000,000 flips. We have p∈(0.5,1.0], and we need to select a distribution for p over that range.
There is no one correct probability distribution for p because any probability is just an expression of our belief, so you may use whatever probability distribution genuinely reflects your prior belief. A uniform distribution is a reasonable start. Perhaps you really are clueless about p, in which case, yes, there’s a certain amount of subjectivity about your choice. But prior beliefs are always inherently subjective, because they simply describe your belief about the state of the world as you know it now. The fact you might have to select a distribution, or set of distributions with some weighted average, is merely an expression of your uncertainty. This in itself, I think, doesn’t stop you from trying to estimate the result.
I think this expresses within Bayesian terms the philosophical idea that we can only make moral choices based on information available at the time; one can’t be held morally responsible for mistakes made on the basis of the information we didn’t have.
Perhaps you disagree with me that a uniform distribution is the best choice. You reason thus: "We have some idea about the properties of coins in general. It's difficult to make a coin that is 100% biased towards heads, so that seems unlikely." So we could pick a distribution that better reflects your prior belief. Perhaps a suitable choice might be Beta(2,2) truncated at 0.5, which places the greatest density on values of p just above 0.5, declining towards 1.0.
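A simple way to sample from this truncated prior is rejection sampling: draw from an ordinary Beta(2,2) and keep only the draws above 0.5. This is a sketch of one way to do it, not the only one; for Beta(2,2) exactly half the mass lies above 0.5, so the rejection rate is modest.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_truncated_beta22(n, rng):
    """Sample n values from Beta(2, 2) truncated to (0.5, 1.0],
    by drawing from Beta(2, 2) and rejecting draws at or below 0.5."""
    out = []
    while len(out) < n:
        draws = rng.beta(2.0, 2.0, size=n)
        out.extend(draws[draws > 0.5])
    return np.array(out[:n])

p = sample_truncated_beta22(10_000, rng)

# Density is highest just above 0.5 and declines towards 1.0,
# so most of the mass sits in the lower half of (0.5, 1.0].
print((p < 0.75).mean())  # analytically about 0.69 for this prior
```

These samples of p can then feed the same simulation as before; the resulting distribution of the fraction of heads will concentrate nearer 0.5 than under the uniform prior, reflecting the belief that strongly biased coins are hard to make.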
Maybe, after all this, you and I just can't agree on any consistent and reasonable prior, nor on any compromise. And let's say we both run simulations using our own priors, find entirely different results, and can't agree on any suitable weighting between them. In that case, yes, I can see you have cluelessness. But I don't think it follows that, if we went through the same process for estimating the longtermist moral worth of malaria bednet distribution, we must have intractable complex cluelessness about specific problems like malaria bednet distribution. I can admit that perhaps, right now, in our current belief state, we are genuinely clueless, but it seems that there is some work that can be done that might eliminate the cluelessness.