[Maybe a tangent]
This reminds me of Julia Galef's Can we intentionally improve the world? Planners vs. Hayekians. In it, the "Hayekian" side argues that, among other things, innovation and research are best achieved by aiming for what's interesting rather than for what makes progress on a more concrete objective.

I don't yet have a confident stance on how often/strongly I should, when picking and pursuing research directions, have relatively explicit, concrete plans in mind for how my research would improve the world. (It sounds like you're in a similar boat.) But in thinking about that question, I found Galef's post useful. I also found some of the links I collected here useful, perhaps especially How to do research that matters and (the answers provided to) Do research organisations make theory of change diagrams? Should they?
Also related is the idea that the moral value of additional information is high when there is relatively low resilience in your credence that the current intervention is best. This leads to the (to me) rather unintuitive conclusion that if you have two research paths that both look equally good to look into for potentially improving the world, then, ceteris paribus, it may be better to invest in the research path for which you have less evidence that it is a good path to follow. From Amanda Askell in the link:

[I]f the expected concrete value of two interventions is similar, we should generally favor investing in interventions that have less evidence supporting them.
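To make the resilience/value-of-information point concrete, here's a toy sketch in Python (my own illustrative numbers, not Askell's): two options have the same expected value, but only one is uncertain, so perfect information about the uncertain one can change what you'd fund, while information about the well-evidenced one changes nothing.

```python
# Toy value-of-information sketch (illustrative numbers only, not from the talk).
# Option A: well-evidenced, value known to be 1.0.
# Option B: little evidence, value 0.5 or 1.5 with equal probability (EV = 1.0).

a_value = 1.0
b_high, b_low, p_b_high = 1.5, 0.5, 0.5

# Without further investigation, funding either option has expected value 1.0.
ev_no_info = max(a_value, p_b_high * b_high + (1 - p_b_high) * b_low)

# Perfect information about B: fund B when it turns out to be 1.5, else fall back to A.
ev_info_about_b = p_b_high * b_high + (1 - p_b_high) * a_value  # 1.25
# A's value was never in doubt, so learning more about it changes nothing.
ev_info_about_a = ev_no_info                                    # 1.0

print(ev_info_about_b - ev_no_info)  # 0.25: value of investigating B
print(ev_info_about_a - ev_no_info)  # 0.0:  value of investigating A
```

Under these assumptions, investigating the less-evidenced option is the better use of a marginal research hour, even though both options currently look equally good.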
I like Askell's talk and think this is an important point. Though when making the point without the full context of the talk, it also seems worth noting that:
As an empirical matter, one's naive/early/quick analyses of how good (or cost-effective, or whatever) something is often seem to be overly optimistic.

Though I don't know precisely why this is the case, and I imagine it varies by domain.

See also Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased).
Additionally, there's the optimizer's curse. This is essentially a reason why one is more likely to be overestimating the value of something if one thinks that thing is unusually good, and the curse is larger the more uncertainty one has (see the small simulation sketched below).

For both reasons, if you see X and Y as unusually good, but you have less evidence regarding X, then that should update you towards thinking you're being (more) overly optimistic about X, and thus that Y is actually better.
I think your comment is completely valid if we imagine that the two options "look to be equally good" even after adjusting for these tendencies. But I think people often don't adjust for these tendencies, so it seems worth making this explicit.

(Also, even if X does currently seem somewhat less good than Y, that can be outweighed by the value-of-information consideration, such that it's worth investigating X further rather than Y anyway.)
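To illustrate the optimizer's curse concretely, here's a minimal simulation in Python (my own sketch, not from Askell's talk or the linked posts; the distributions and noise levels are arbitrary):

```python
import random

# Minimal optimizer's-curse simulation (all numbers are made up).
# Each option has a true value drawn from N(0, 1), but we only see a noisy
# estimate. Picking the option with the highest estimate systematically
# overestimates its true value, and more so as the noise (our uncertainty) grows.

random.seed(0)

def average_overestimate(noise_sd, n_options=20, trials=5_000):
    total_gap = 0.0
    for _ in range(trials):
        true_values = [random.gauss(0, 1) for _ in range(n_options)]
        estimates = [v + random.gauss(0, noise_sd) for v in true_values]
        best = max(range(n_options), key=lambda i: estimates[i])
        total_gap += estimates[best] - true_values[best]
    return total_gap / trials  # mean (estimate - true value) for the chosen option

for sd in (0.5, 1.0, 2.0):
    print(f"noise sd {sd}: chosen option overestimated by ~{average_overestimate(sd):.2f}")
```

In this sketch the chosen option's estimate exceeds its true value on average, and the gap widens as the estimates get noisier, which is why the downward adjustment matters more for the less-evidenced option.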
One possible reason such early analyses look overly optimistic is completely rational: if we're estimating the expected value of an intervention that has a 1% chance of being highly valuable, then 99% of the time we realize the moonshot won't work and revise the expected value downward.
That definitely can happen, and it makes me realise my comment wasn't sufficiently precise.
An expected value estimate can be reasonable even if there's a 99% chance it would be revised downwards given more info, provided there's also a 1% chance it would be revised upwards by enough to offset the potential downward revisions. If an estimator makes such an estimate and is well-calibrated, I wouldn't say they're making a mistake, and thus probably wouldn't say they're being "overly optimistic".
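To spell this out with toy numbers (mine, purely illustrative): suppose the moonshot is worth 100 with probability 1% and 0 otherwise.

```python
# Illustrative numbers only: a "moonshot" worth 100 with probability 1%, else 0.
p_success = 0.01
value_if_success = 100.0

prior_ev = p_success * value_if_success            # 1.0

# Suppose further investigation reveals which case we're in:
# 99% of the time the estimate is revised down to 0, 1% of the time up to 100.
expected_posterior_ev = (1 - p_success) * 0.0 + p_success * value_if_success

print(prior_ev, expected_posterior_ev)  # 1.0 1.0
# The estimate is usually revised downwards, yet on average the revisions cancel,
# so frequent downward revisions alone don't show the estimate was overly optimistic.
```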
The claim I was making was that one's naive/early/quick analyses of how good (or cost-effective, or whatever) something is tend not to be well-calibrated, systematically erring towards optimism in a way that means it's best to adjust the expected value downwards to account for this (unless one has already made such an adjustment).
But I'm not actually sure how true that claim is (I'm just basing it on my memory of GiveWell posts I read in the past). Maybe most cases that look like that kind of miscalibration are actually either the optimiser's curse at work or the sort of situation you describe.