Can you explain more why the bootstrapping approach doesn’t give a causal effect (or something pretty close to one) here? The aggregate approach is clearly confounded, since questions with more answers are likely easier. But once you condition on the question and directly control the number of forecasters by bootstrapping different sample sizes, it doesn’t seem like there are any potential unobserved confounders remaining (other than the time issue Nikos mentioned). I don’t see what a natural experiment or RCT would provide beyond the bootstrapping approach.
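As a minimal sketch of the within-question bootstrapping approach being discussed: for a single resolved question, resample k forecasts at a time, aggregate them, and score the aggregate against the outcome. The function name, mean aggregation, and Brier scoring below are illustrative assumptions, not the actual analysis.

```python
import random
from statistics import mean

def brier(p: float, outcome: float) -> float:
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (p - outcome) ** 2

def bootstrap_crowd_curve(forecasts, outcome, sizes, n_boot=2000, seed=0):
    """For one resolved question, estimate accuracy as a function of crowd
    size by resampling k forecasts (with replacement), aggregating with the
    mean, and Brier-scoring the aggregate. Conditioning on the question this
    way removes question-difficulty confounding, but not order effects such
    as later forecasters having seen the prior prediction history."""
    rng = random.Random(seed)
    curve = {}
    for k in sizes:
        scores = []
        for _ in range(n_boot):
            sample = [rng.choice(forecasts) for _ in range(k)]
            scores.append(brier(mean(sample), outcome))
        curve[k] = mean(scores)
    return curve
```

On a hypothetical question that resolved Yes, larger resampled crowds should score better, since averaging shrinks the variance term of the Brier score while leaving any shared bias untouched, which is exactly why within-question resampling controls for difficulty but not for forecasters influencing one another.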
Forecasters on Metaculus seeing the prior prediction history seems like a plausible confounder to me, but I’m not sure it would change the result.
Yes, this is the main difference compared to forecasters being randomly assigned to a question.