e.g. from P(X) = 0.8, I may think that in a week I will, most of the time, have notched this forecast slightly upwards, and, less often, have notched it downwards by larger amounts; these movements average out to E[P(X) next week] = 0.8.
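This martingale property can be checked with a quick Monte Carlo sketch. The model below is my own illustrative assumption, not from the post: the true outcome is drawn from the prior, and the forecaster sees one binary signal that matches the outcome 70% of the time, then updates by Bayes' rule.

```python
import random

def posterior_after_signal(prior, x, accuracy=0.7):
    # Observe a binary signal equal to the true outcome x with
    # probability `accuracy`, then return the posterior P(X = 1).
    signal = x if random.random() < accuracy else 1 - x
    like1 = accuracy if signal == 1 else 1 - accuracy      # P(signal | X = 1)
    like0 = (1 - accuracy) if signal == 1 else accuracy    # P(signal | X = 0)
    return prior * like1 / (prior * like1 + (1 - prior) * like0)

def mean_next_forecast(prior=0.8, trials=200_000, seed=0):
    # Average next-week forecast over many simulated worlds.
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        x = 1 if random.random() < prior else 0  # true outcome drawn from the prior
        total += posterior_after_signal(prior, x)
    return total / trials
```

With these (assumed) numbers the individual updates are asymmetric, up to about 0.90 most of the time, down to about 0.63 less often, yet `mean_next_forecast()` comes out at 0.8: the expected forecast equals the current forecast.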
I wish you had said this in the BLUF—it is the key insight, and the one that made me go from “Greg sounds totally wrong” to “Ohhh, he is totally right”
ETA: you did actually say this, but you said it in less simple language, which is why I missed it
I’m not sure, but my guess at the OP’s argument is this:
Let’s say you are an unbiased forecaster, and you gain information as time passes. When you start with a 60% prediction that event X will happen, then conditional on X in fact happening, the evidence you receive will, on average, correctly revise your prediction towards 100%.
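This conditional drift can also be sketched in simulation. Again the signal model (repeated binary signals with 70% accuracy) is my own assumption for illustration: restricting to the worlds where X actually happens, the average forecast trajectory climbs from 60% towards 100%.

```python
import random

def update(prior, signal, accuracy=0.7):
    # Bayesian update on a binary signal that matches the true
    # outcome with probability `accuracy`.
    like1 = accuracy if signal == 1 else 1 - accuracy
    like0 = (1 - accuracy) if signal == 1 else accuracy
    return prior * like1 / (prior * like1 + (1 - prior) * like0)

def mean_forecast_path(prior=0.6, steps=10, trials=100_000, seed=1):
    # Average forecast trajectory, restricted to trials where X = 1.
    random.seed(seed)
    sums = [0.0] * (steps + 1)
    n = 0
    for _ in range(trials):
        if random.random() >= prior:   # condition on X actually happening
            continue
        n += 1
        p = prior
        sums[0] += p
        for t in range(1, steps + 1):
            signal = 1 if random.random() < 0.7 else 0  # signal given X = 1
            p = update(p, signal)
            sums[t] += p
    return [s / n for s in sums]
```

Note the unconditional average stays at 60% (the martingale property above); it is only after conditioning on the outcome that the path trends upwards.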
Scott Alexander has noted curiosity about this behaviour; Eliezer Yudkowsky has confidently asserted that it indicates sub-par Bayesian updating.