I like Askell's talk and think this is an important point. Though when making the point without the full context of the talk, it also seems worth noting that:
As an empirical matter, one's naive/early/quick analyses of how good (or cost-effective, or whatever) something is seem to often be overly optimistic.
Though I don't know precisely why this is the case, and I imagine it varies by domain.
See also Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased)
Additionally, there's the optimizer's curse. This is essentially a reason why one is more likely to be overestimating the value of something if one thinks that thing is unusually good. The curse is larger the more uncertainty one has. (The simulation sketch after this comment illustrates the effect.)
For both reasons, if you see X and Y as unusually good, but you have less evidence re X, then that should update you towards thinking you're being (more) overly optimistic about X, and thus that Y is actually better.
I think your comment is completely valid if we imagine that the two options "look to be equally good" even after adjusting for these tendencies. But I think people often don't adjust for these tendencies, so it seems worth making it explicit.
(Also, even if X does currently seem somewhat less good than Y, that can be outweighed by the value-of-information consideration, such that it's worth investigating X further rather than Y anyway.)
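To make the optimizer's curse concrete, here is a minimal Monte Carlo sketch. The setup, function name, and parameter values are my own illustrative choices, not taken from Askell's talk or the posts discussed here: each option gets an unbiased but noisy estimate of its true value, yet the option that looks best is overestimated on average, and by more when the estimates are noisier.

```python
import random

def optimizers_curse(n_options=20, prior_sd=1.0, noise_sd=1.0, trials=20_000):
    """Average (estimate - true value) for whichever option's estimate looks best."""
    gap = 0.0
    for _ in range(trials):
        # True values of the options, and unbiased but noisy estimates of them.
        true_values = [random.gauss(0.0, prior_sd) for _ in range(n_options)]
        estimates = [v + random.gauss(0.0, noise_sd) for v in true_values]
        # Pick the option that *looks* best, then see how much its estimate overshoots.
        best = max(range(n_options), key=lambda i: estimates[i])
        gap += estimates[best] - true_values[best]
    return gap / trials

# Each individual estimate is unbiased, yet the option selected for looking best is
# overestimated on average, and more so when the estimates are noisier.
print(optimizers_curse(noise_sd=0.5))  # modest overestimate
print(optimizers_curse(noise_sd=2.0))  # larger overestimate
```

The same logic is why the correction should be larger for the option with less evidence behind it, as in the X-versus-Y comparison above.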
As an empirical matter, one's naive/early/quick analyses of how good (or cost-effective, or whatever) something is seem to often be overly optimistic.
One possible reason is completely rational: if we're estimating the expected value of an intervention with a 1% chance of being highly valuable, then 99% of the time we realize the moonshot won't work and revise the expected value downward.
That definitely can happen, and makes me realise my comment wasn't sufficiently precise.
An expected value estimate can be reasonable even if there's a 99% chance it would be revised downwards given more info, if there's also a 1% chance it would be revised upwards by enough to offset the potential downward revisions. If an estimator makes such an estimate and is well-calibrated, I wouldn't say they're making a mistake, and thus probably wouldn't say they're being "overly optimistic". (See the worked example below.)
The claim I was making was that one's naive/early/quick analyses of how good (or cost-effective, or whatever) something is tend not to be well-calibrated, systematically erring towards optimism in a way that means it's best to adjust the expected value downwards to account for this (unless one has already made such an adjustment).
But I'm not actually sure how true that claim is (I'm just basing it on my memory of GiveWell posts I read in the past). Maybe most things that look like that situation are either actually the optimiser's curse or actually the sort of situation you describe.
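To illustrate the calibration point with made-up numbers: suppose a "moonshot" is worth 100 if it succeeds (a 1% chance) and 0 otherwise. The sketch below, a minimal example under those assumed numbers, shows that even though further investigation revises the estimate downward 99% of the time, the average post-revision estimate equals the initial one, so frequent downward revisions alone don't show the initial estimate was overly optimistic.

```python
p_success = 0.01          # illustrative probability the moonshot pays off
value_if_success = 100.0  # illustrative value if it does
value_if_failure = 0.0    # value if it doesn't

# Initial expected-value estimate, before further investigation.
ev_initial = p_success * value_if_success + (1 - p_success) * value_if_failure  # 1.0

# Suppose investigation reveals the outcome: 99% of the time the estimate drops to 0,
# 1% of the time it jumps to 100. The average post-revision estimate is unchanged:
ev_after = (1 - p_success) * value_if_failure + p_success * value_if_success  # 1.0

# A well-calibrated estimator can therefore expect frequent downward revisions without
# being "overly optimistic"; the worry raised above is a systematic tendency for the
# *average* revision to be downward, which is what would call for an explicit adjustment.
print(ev_initial, ev_after)
```

The kind of adjustment the earlier comment has in mind is roughly the one discussed in the GiveWell post linked above: shrinking a rough, high-variance estimate toward a more sceptical prior rather than taking it literally.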