but also the average predictor improving their ability fixed that underconfidence
What do you mean by this?
I mean that in the past people were underconfident (so extremizing made their predictions better). Since then they’ve stopped being underconfident. My assumption is that this is because the average predictor is now more skilled, or because having more predictors improves the quality of the average.
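For concreteness: extremizing here means pushing the aggregate probability away from 0.5, commonly by scaling in log-odds space. A minimal sketch in Python, with an illustrative exponent k (a made-up value, not anything Metaculus actually fits):

```python
import math

def extremize(p: float, k: float = 2.0) -> float:
    """Push an aggregate probability away from 0.5.

    Works in log-odds space: multiply the logit by k, then map back.
    k > 1 extremizes (corrects underconfidence); k = 1 is the identity.
    The value of k here is purely illustrative.
    """
    logit = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-k * logit))

# An underconfident average of 0.70 gets pushed toward certainty:
print(round(extremize(0.70, k=2.0), 3))  # 0.845
```

If predictors are no longer underconfident, k = 1 (no transform) is already the right setting, which is the point above.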
Doesn’t that mean that it should be less accurate, given the bias towards questions resolving positively?
The bias isn’t that more questions resolve positively than users expect. The bias is that users expect more questions to resolve positive than actually resolve positive. Shifting probabilities lower fixes this.
Basically lots of questions on Metaculus are “Will X happen?” where X is some interesting event people are talking about, but the base rate is perhaps low. People tend to overestimate the probability of X relative to what actually occurs.
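As a toy illustration of that downward shift (all numbers made up; a constant shift in log-odds space is just one possible correction, not Metaculus’s actual method):

```python
import math

def shift_down(p: float, delta: float = 0.3) -> float:
    """Lower a probability by subtracting delta in log-odds space.

    delta is a hypothetical bias-correction constant chosen for
    illustration only.
    """
    logit = math.log(p / (1 - p)) - delta
    return 1 / (1 + math.exp(-logit))

# Users give 0.50 on a "Will X happen?" question, but such questions
# resolve positive less often than forecast, so nudge it down:
print(round(shift_down(0.50), 3))  # ~0.426
```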
Gotcha!
The bias isn’t that more questions resolve positively than users expect. The bias is that users expect more questions to resolve positive than actually resolve positive.
I don’t get what the difference between these is.
“more questions resolve positively than users expect”
Users expect 50 to resolve positively, but actually 60 resolve positive.
“users expect more questions to resolve positive than actually resolve positive”
Users expect 50 to resolve positive, but actually 40 resolve positive.
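In code, the difference is just which side of the comparison is bigger; using the numbers above:

```python
# 100 questions, each forecast at 0.5: users "expect" 50 to resolve positive.
forecasts = [0.5] * 100
expected_positives = sum(forecasts)  # 50.0

actual_positives = 40  # the second case above: fewer resolve than expected

# expected > actual: forecasts are biased upward, so shifting
# probabilities lower improves calibration. In the first case
# (actual_positives = 60), shifting lower would make things worse.
print(expected_positives, actual_positives)
```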
I have now edited the original comment to be clearer.
Cheers
Oh I see!