1) The variances of the two distributions need to be standardized.
Less of a big deal: we’d generally hope, and aim, for our estimates of expected value to be drawn from a distribution similar to that of the actual expected values; if they’re not, our estimates are systematically wrong somehow.
Our estimates could be systematically wrong in that they represent, for instance, before-regression estimates. We don’t even know the true distribution, and the generating mechanism for the estimates is sufficiently different that I wouldn’t feel confident that the variances should look similar.
If you assume the estimates are unbiased, as Gregory does, then before-regression estimates are not systematically wrong; they merely have variance.
Gregory isn’t even claiming to solve the biased-estimate case, which (IMO) is wise since the addition of bias (or arbitrary distributions, or arbitrary copulae between the estimate and true distribution) would drastically increase the number of model parameters, perhaps beyond the optimum point on the model uncertainty—parameter uncertainty trade-off.
I agree that the language in this post makes the divergence of the toy model from the true model seem smaller than it is, but I don’t think I’d call that a “serious technical problem!”
Even without bias, you need to know the ratio of the standard deviations of the distribution of true values and the distribution of estimates. The post assumes they are equal, which I wasn’t happy about (though I realise now that the fix for assuming they’re not equal is not that hard).
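To make the "ratio of standard deviations" point concrete, here is a minimal sketch (my own toy setup, not Gregory's exact model, and all parameter values are made up): true values are normal, estimates are the true values plus independent normal noise, so estimates are unbiased but have larger variance than the truth. In that case the shrinkage factor that maps an estimate to the posterior mean of the true value is exactly the squared ratio of the two standard deviations, which is why assuming the ratio is 1 (no shrinkage needed) matters.

```python
import random

# Toy normal-normal model (a sketch, not the post's exact setup):
# true values T ~ N(0, sd_true^2); estimates E = T + noise, with
# noise ~ N(0, sd_noise^2) -- unbiased but noisy estimates.
random.seed(0)
sd_true, sd_noise = 1.0, 2.0
pairs = [(t := random.gauss(0, sd_true), t + random.gauss(0, sd_noise))
         for _ in range(100_000)]

# Var(E) = sd_true^2 + sd_noise^2, so the optimal shrinkage factor is
#   sd_true^2 / (sd_true^2 + sd_noise^2) = (sd_true / sd_estimate)^2,
# i.e. precisely the (squared) ratio of standard deviations you need to know.
shrink = sd_true**2 / (sd_true**2 + sd_noise**2)

# Check: the empirical regression slope of T on E recovers the same factor.
n = len(pairs)
mean_t = sum(t for t, _ in pairs) / n
mean_e = sum(e for _, e in pairs) / n
cov = sum((t - mean_t) * (e - mean_e) for t, e in pairs) / n
var_e = sum((e - mean_e) ** 2 for _, e in pairs) / n
slope = cov / var_e

print(shrink, slope)  # both should be close to 0.2
```

With sd_true = 1 and sd_noise = 2, estimates should be shrunk by a factor of 5 toward the prior mean; assuming equal variances would instead leave them unshrunk.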
You’re right about the strength of the criticism; I should have edited that sentence, and will do so now. I had weakened my claim about the strength of the criticism in emails with Greg, but should have done so here too.