Imagine if we could say, “When Metaculus predicts something with 80% certainty, it happens between X% and Y% of the time,” or “On average, Metaculus forecasts are off by X%.”
FYI, the Metaculus track record (specifically the “Community Prediction calibration” section) already lets us do this. When Metaculus predicts something with 80% certainty, for example, it happens around 82% of the time, as the calibration plot there shows.
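For a concrete sense of what such a calibration plot computes, here is a minimal Python sketch. The forecast/outcome pairs are made up for illustration (the real track record uses resolved Metaculus questions); a perfectly calibrated forecaster's events forecast at probability p happen a fraction p of the time:

```python
from collections import defaultdict

# Hypothetical (probability forecast, resolved outcome) pairs;
# outcome 1 means the event happened, 0 means it did not.
forecasts = [(0.82, 1), (0.78, 1), (0.80, 0), (0.15, 0), (0.20, 0),
             (0.85, 1), (0.90, 1), (0.10, 1), (0.55, 1), (0.50, 0)]

# Group forecasts into 10%-wide probability buckets.
bins = defaultdict(list)
for p, outcome in forecasts:
    bins[round(p, 1)].append(outcome)

# For each bucket, compare the stated probability to the observed frequency.
for p in sorted(bins):
    outcomes = bins[p]
    freq = sum(outcomes) / len(outcomes)
    print(f"forecast ~{p:.0%}: happened {freq:.0%} of the time (n={len(outcomes)})")
```

A calibration graph is just this table drawn as a plot: stated probability on one axis, observed frequency on the other, with perfect calibration falling on the diagonal.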
Thank you for the response! I should have been clearer: this is what inspired me to write this, but I would still need 3-5 sentences to explain to a policymaker what they are looking at when shown a calibration graph like this. I am looking for something even shorter than that.