E[Brier] is the Brier score you would expect if the predictors were perfectly calibrated. It is quite similar across subgroups, as is average question duration. I include it to check if subgroups were typically making more or less confident predictions on average—more confident predictions would have a lower E[Brier].
How can you come up with a number for this? Surely a perfect predictor would have a Brier of 0? (I’m definitely wrong but I’d like someone to explain)
“Perfectly calibrated”, not “perfect”. So if all of their predictions were calibrated, i.e. 20% of their 20% predictions came true, etc.
So in this case, someone making all 90% predictions will have an expected score of 0.9×0.1^2 + 0.1×0.9^2 = 0.09, while someone making all 80% predictions will have an expected score of 0.8×0.2^2 + 0.2×0.8^2 = 0.16.
In general a lower expected score means your typical prediction was more confident.
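To make the arithmetic concrete, here is a minimal Python sketch of that calculation (the function name and example inputs are mine, not from the post): a prediction made at confidence p contributes p×(1-p)^2 + (1-p)×p^2 to the expected Brier score, which simplifies to p×(1-p).

def expected_brier(probs):
    # Expected Brier score under perfect calibration: a prediction at
    # probability p contributes p*(1-p)**2 + (1-p)*p**2 = p*(1-p).
    return sum(p * (1 - p) for p in probs) / len(probs)

print(expected_brier([0.9] * 10))  # ~0.09: always predicting 90%
print(expected_brier([0.8] * 10))  # ~0.16: always predicting 80%

A lower value just reflects that the typical prediction was more confident, matching the 0.09 vs 0.16 comparison above.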