Just to clarify, one should definitely expect cost-effectiveness estimates to drop as you put more time into them, and I don’t expect this cause area to be literally 1000x GiveWell. In past experience, headline cost-effectiveness always drops: it’s just the optimizer’s curse, where over- (or under-) performance comes partly from the cause area being genuinely better (or worse), but also partly from random error that gets corrected at deeper research stages. To be honest, I’ve come around to the view that publishing shallow reports (which are really just meant for internal prioritization) probably isn’t useful, insofar as they can be misleading.
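To make the optimizer’s curse concrete, here’s a minimal Monte Carlo sketch (all numbers are made up for illustration; this is not our actual model). If you rank many noisy shallow estimates and pick the winner, the winner’s estimate will systematically overstate its true value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 100 candidate causes with true cost-effectiveness
# (in multiples of GiveWell) centred around 10x, plus large shallow-stage
# estimation error.
n_causes = 100
true_ce = rng.lognormal(mean=np.log(10), sigma=0.5, size=n_causes)
noise = rng.lognormal(mean=0.0, sigma=1.0, size=n_causes)
shallow_estimate = true_ce * noise

# Prioritize the cause with the best shallow estimate, as a research
# pipeline would, then compare that estimate to the cause's true value.
best = shallow_estimate.argmax()
print(f"Shallow estimate of top-ranked cause: {shallow_estimate[best]:.0f}x GiveWell")
print(f"True value of top-ranked cause:       {true_ce[best]:.0f}x GiveWell")
```

Selecting on the maximum selects for positive noise as well as genuine quality, so deeper research (which shrinks the noise) predictably pulls the headline number back down.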
As an example of how we discount more aggressively at deeper research stages, consider our intermediate hypertension report: there was a fairly large drop, from around 300x to 80x GiveWell, driven by (among other things): (a) taking into account speed-up effects (i.e. advocacy often just brings forward policies that would eventually have happened anyway), (b) downgrading confidence in advocacy success rates, (c) updating towards more conservative costing, and (d) applying GiveWell-style epistemological discounts (e.g. taking into account a conservative null-hypothesis prior, or discounting for publication bias, endogeneity, selection bias, etc.)
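For intuition on how such discounts compound, here’s a toy calculation. The individual factors below are hypothetical (only the ~300x starting point and the ~80x endpoint come from the report); the point is that four moderate multiplicative discounts are enough to cut an estimate by roughly 4x:

```python
# Hypothetical multiplicative discounts at the deeper research stage.
# The individual factors are illustrative, not the report's actual numbers.
headline = 300  # multiples of GiveWell, from the shallow report

discounts = {
    "speed-up effects": 0.60,
    "lower advocacy success rates": 0.75,
    "more conservative costing": 0.80,
    "epistemological discounts (null prior, publication bias, etc.)": 0.75,
}

adjusted = headline
for reason, factor in discounts.items():
    adjusted *= factor
    print(f"{reason}: x{factor} -> {adjusted:.0f}x GiveWell")

# Final result: ~81x GiveWell, in line with the report's ~80x.
```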
As for what our priors should be with respect to whether a cause can really be 100x GiveWell: I would say there’s a reasonable case for this if: (a) one targets NCDs and other diseases whose burden grows with economic growth (rather than being solved by countries getting richer and improving their sanitation/nutrition/healthcare systems, etc.); and (b) there are good policy interventions available, because it really does matter that: (i) a government has enormous scale/impact; (ii) its spending is (counterfactually) cheap relative to EA money that would otherwise have gone to AMF and the like; and (iii) policy tends to be sticky, so the impact lasts in a way that distributing malaria nets or treating depression may not.
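On point (iii), a rough sketch of why stickiness matters so much (all parameters assumed purely for illustration): compare a one-off intervention delivering a year of benefit to a policy delivering the same annual benefit for as long as it survives:

```python
# Illustrative comparison of a one-off intervention vs. a sticky policy.
# All parameters are assumptions for the sake of the example.
annual_benefit = 1.0   # benefit per year while in effect
persistence = 0.90     # probability the policy survives each further year
discount_rate = 0.04   # annual discount rate on future benefits
horizon = 50           # years considered

one_off = annual_benefit  # e.g. a single distribution: one year of benefit

sticky_policy = sum(
    annual_benefit * persistence**t / (1 + discount_rate) ** t
    for t in range(horizon)
)

print(f"One-off intervention: {one_off:.1f} benefit-units")
print(f"Sticky policy:        {sticky_policy:.1f} benefit-units")  # ~7x as much
```

Even with a 10% annual chance of repeal and standard discounting, the persistent policy delivers several times the benefit of the one-off intervention, which is a big part of why policy causes can plausibly clear the 100x bar.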