Model error higher than 1%?
Three questions for you that would help us improve our model:
1. What important error do you think is made by our model?
2. What modification would you propose to address the error?
3. What impact do you think your modification would have on the resultant forecast?
I think he’s asking if your margin of error is >.01
What is a margin of error, here, exactly?
The event will either happen (1) or not (0). The 0.4% already reflects our uncertainty. In general, I don’t think it makes mathematical sense to discuss probabilities of probabilities.*
*although of course it can make sense to describe sensitivities of probabilities to new information coming in
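To make this concrete, here is a minimal sketch of the collapse being described. The Beta(2, 498) second-order belief is purely hypothetical; it is chosen only because its mean equals the 0.4% forecast:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical second-order belief: the "true" probability p is itself
# uncertain, modeled here as Beta(2, 498), whose mean is 2/(2+498) = 0.4%.
p = rng.beta(2, 498, size=n)

# Draw one binary outcome per sampled p, then compare frequencies.
outcomes = rng.random(n) < p

print(f"E[p]             = {p.mean():.4f}")        # ~0.0040
print(f"P(event happens) = {outcomes.mean():.4f}") # ~0.0040, identical
```

For a single yes/no event the two numbers must agree: the marginal probability of the event is just the mean of the second-order distribution, which is the sense in which the extra tier "collapses" into the point forecast.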
It would be far, far higher, of course! With that many variables? Think about the uncertainty we ascribe to cost-effectiveness analyses with far fewer variables and far better evidence. Even calculating the error here would be close to impossible.
95% confidence interval 0.1% to 50%? (Kind of joking here, but it might be in that range)
Confidence intervals over probabilities don’t make much sense to me. The probability itself is already the confidence interval over the binary domain [event happens, event doesn’t happen].
I guess to me the idea of confidence intervals over probabilities implies two different kinds of probabilities. E.g., a reducible flavor and an irreducible flavor. I don’t see what a two-tiered system of probability adds, exactly.
This was an extensive debate in the 1980s and 90s between Judea Pearl, proponents of Dempster-Shafer theory, and a few others. I think it’s trivially true, however, that even in the probability-centric view you espouse, it can be helpful to track second-order uncertainty, and reducible versus irreducible uncertainty is critical for VoI analysis.
What is VoI analysis?
Value of Information
Here’s my brief intro post about it:
https://forum.effectivealtruism.org/posts/8w2hNT5WtDMzoaGuy/when-to-find-more-information-a-short-explanation
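Here is a minimal sketch of why the reducible/irreducible split matters for VoI. The bet and all of its numbers are invented for illustration: two second-order beliefs share the same 0.4% mean forecast, but the expected value of learning p before acting differs sharply between them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical decision: a bet paying +1 if the event does not happen and
# -300 if it does. Given p, its expected payoff is 1 - 301*p, so it is
# only worth taking if p < 1/301 (about 0.33%).
def best_payoff(p):
    return np.maximum(0.0, 1.0 - 301.0 * p)

# Two second-order beliefs with the SAME mean forecast of 0.4%:
beliefs = {
    "tight (mostly irreducible)": rng.beta(40, 9960, size=n),
    "wide (mostly reducible)":    rng.beta(0.4, 99.6, size=n),
}

for name, p in beliefs.items():
    act_now    = max(0.0, 1.0 - 301.0 * p.mean())  # decide on the mean alone
    after_info = best_payoff(p).mean()             # learn p first, then decide
    print(f"{name}: EVPI = {after_info - act_now:.3f}")
```

Both beliefs produce the identical first-order number, yet gathering information is worth far more under the wide belief; that difference is invisible if you only keep the point forecast.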
And for more on the debates about second-order probabilities and confidence intervals, and why Pearl says you don’t need them and should just use a Bayesian network instead, see his paper here: https://core.ac.uk/download/pdf/82281071.pdf
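Pearl’s move, roughly, is to promote the uncertain quantity to an ordinary node in the network, so that only first-order probabilities remain. A toy sketch of that idea, with all regimes and numbers invented:

```python
# Node H: which regime we are in (this carries the "reducible" uncertainty).
prior_h = {"calm": 0.9, "volatile": 0.1}

# Node E | H: probability of the event under each regime.
p_event_given_h = {"calm": 0.001, "volatile": 0.03}

# Marginal P(E) is an ordinary first-order probability; no second tier needed.
p_event = sum(prior_h[h] * p_event_given_h[h] for h in prior_h)
print(f"P(event)     = {p_event:.4f}")  # 0.9*0.001 + 0.1*0.03 = 0.0039

# Evidence D with likelihoods P(D | H) updates H by Bayes' rule, and the
# event forecast shifts: the footnoted "sensitivity to new information".
likelihood_d = {"calm": 0.2, "volatile": 0.8}
z = sum(prior_h[h] * likelihood_d[h] for h in prior_h)
posterior_h = {h: prior_h[h] * likelihood_d[h] / z for h in prior_h}
p_event_d = sum(posterior_h[h] * p_event_given_h[h] for h in posterior_h)
print(f"P(event | D) = {p_event_d:.4f}")  # ~0.0099
```

The "probability of a probability" becomes the prior over H, and updating it is just ordinary conditioning in the network.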