I like Askell's talk and think this is an important point. Though when making the point without the full context of the talk, it also seems worth noting that:
As an empirical matter, one's naive/early/quick analyses of how good (or cost-effective, or whatever) something is seem to often be overly optimistic.
Though I don't know precisely why this is the case, and I imagine it varies by domain.
See also Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased).
Additionally, there's the optimizer's curse. This is essentially a reason why one is likelier to be overestimating the value of something if one thinks that thing is unusually good. The curse is larger the more uncertainty one has. (A small simulation after this comment illustrates this.)
For both reasons, if you see X and Y as unusually good, but you have less evidence re X, then that should update you towards thinking you're being (more) overly optimistic about X, and thus that Y is actually better.
I think your comment is completely valid if we imagine that the two options "look to be equally good" even after adjusting for these tendencies. But I think people often don't adjust for these tendencies, so it seems worth making it explicit.
(Also, even if X does currently seem somewhat less good than Y, that can be outweighed by value-of-information considerations, such that it's worth further investigating X rather than Y anyway.)
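To make the optimizer's curse concrete, here's a minimal simulation (my own illustration, not from Askell's talk): many options are equally good in reality, but we act on whichever one our noisy estimates say is best, and that chosen option's estimate is systematically too high, more so the noisier the estimates are.

```python
import random

random.seed(0)

def average_overestimate(n_options=20, true_value=1.0, noise_sd=0.5, trials=10_000):
    """How much the apparently-best option's estimate exceeds its true value, on average."""
    total = 0.0
    for _ in range(trials):
        # Noisy estimates of options that are all equally good in reality.
        estimates = [random.gauss(true_value, noise_sd) for _ in range(n_options)]
        total += max(estimates) - true_value  # we act on the apparent winner
    return total / trials

for sd in (0.1, 0.5, 1.0):
    print(f"noise sd {sd}: selected option overestimated by ~{average_overestimate(noise_sd=sd):.2f}")
# The overestimate grows with the noise, which is the sense in which the curse
# is larger the more uncertainty one has.
```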
As an empirical matter, one's naive/early/quick analyses of how good (or cost-effective, or whatever) something is seem to often be overly optimistic.
One possible reason is completely rational: if we're estimating the expected value of an intervention with a 1% chance of being highly valuable, then 99% of the time we realize the moonshot won't work and revise the expected value downward.
That definitely can happen, and makes me realise my comment wasn't sufficiently precise.
An expected value estimate can be reasonable even if there's a 99% chance it would be revised downwards given more info, if there's also a 1% chance it would be revised upwards by enough to offset the potential downward revisions. If an estimator makes such an estimate and is well-calibrated, I wouldn't say they're making a mistake, and thus probably wouldn't say they're being "overly optimistic".
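As a toy illustration of this (my own numbers, just to make the point concrete): suppose the intervention is worth 1000 with probability 1% and 0 otherwise, so the honest expected value today is 10. Investigating further will revise that estimate downward 99% of the time, yet the estimate is perfectly calibrated, because the expected post-investigation estimate still equals 10.

```python
p_success, value_if_success = 0.01, 1000

ev_now = p_success * value_if_success        # 10.0: today's honest estimate
ev_if_bad_news = 0                           # 99% of the time: revised down to 0
ev_if_good_news = value_if_success           # 1% of the time: revised up to 1000

# Conservation of expected evidence: the expected revised estimate equals
# today's estimate, so frequent downward revisions alone don't show the
# original estimate was "overly optimistic".
expected_revised_ev = (1 - p_success) * ev_if_bad_news + p_success * ev_if_good_news
assert expected_revised_ev == ev_now
print(ev_now, expected_revised_ev)           # 10.0 10.0
```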
The claim I was making was that one's naive/early/quick analyses of how good (or cost-effective, or whatever) something is tend not to be well-calibrated, systematically erring towards optimism in a way that means it's best to adjust the expected value downwards to account for this (unless one has already made such an adjustment).
But I'm not actually sure how true that claim is (I'm just basing it on my memory of GiveWell posts I read in the past). Maybe most cases that look like overoptimistic early estimates are actually either the optimiser's curse or the sort of situation you describe.
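For what it's worth, the kind of correction I have in mind is roughly the Bayesian adjustment the GiveWell post linked in my first comment argues for. Here's a rough sketch (my own toy model, assuming purely for illustration a normal prior over how good interventions usually are and normally distributed estimation error):

```python
def adjusted_estimate(estimate, estimate_sd, prior_mean=1.0, prior_sd=1.0):
    """Posterior mean under a normal prior and normally distributed estimation error."""
    precision_prior = 1 / prior_sd**2
    precision_est = 1 / estimate_sd**2
    return (prior_mean * precision_prior + estimate * precision_est) / (precision_prior + precision_est)

# The same impressive-looking estimate gets discounted more the shakier it is.
print(adjusted_estimate(10.0, estimate_sd=0.5))  # ≈ 8.2  (solid evidence: small discount)
print(adjusted_estimate(10.0, estimate_sd=3.0))  # ≈ 1.9  (rough guess: large discount)
```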