[Link] The Optimizer’s Curse & Wrong-Way Reductions

This is a linkpost for https://confusopoly.com/2019/04/03/the-optimizers-curse-wrong-way-reductions/.


I spent about two and a half years as a research analyst at GiveWell. For most of my time there, I was the point person on GiveWell's main cost-effectiveness analyses. I've come to believe there are serious, underappreciated issues with the methods the effective altruism (EA) community at large uses to prioritize causes and programs. While effective altruists approach prioritization in a number of different ways, most approaches involve (a) roughly estimating the possible impacts funding opportunities could have and (b) assessing the probability that possible impacts will be realized if an opportunity is funded.
I discuss the phenomenon of the optimizer's curse: when assessments of activities' impacts are uncertain, engaging in the activities that look most promising will tend to have a smaller impact than anticipated. I argue that the optimizer's curse should be extremely concerning when prioritizing among funding opportunities that involve substantial, poorly understood uncertainty. I further argue that proposed Bayesian approaches to avoiding the optimizer's curse are often unrealistic. I maintain that it is a mistake to try to understand all uncertainty in terms of precise probability estimates.
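The optimizer's curse is easy to demonstrate with a small simulation (my illustration, not from the post): give every funding opportunity a true impact plus independent estimation noise, then fund the one with the highest estimate. The winner's estimate systematically overstates its true impact, because selecting the maximum preferentially selects options whose noise happened to be positive.

```python
import random

random.seed(0)

def simulate_curse(n_options=20, noise_sd=1.0, trials=10_000):
    """Average gap between the chosen option's estimate and its true impact."""
    total_gap = 0.0
    for _ in range(trials):
        # True impacts drawn from a common prior; estimates add independent noise.
        true_vals = [random.gauss(0, 1) for _ in range(n_options)]
        estimates = [v + random.gauss(0, noise_sd) for v in true_vals]
        # Fund the option that *looks* best.
        best = max(range(n_options), key=lambda i: estimates[i])
        total_gap += estimates[best] - true_vals[best]
    return total_gap / trials

gap = simulate_curse()
# The gap is reliably positive: the selected option disappoints on average,
# even though each individual estimate is unbiased.
print(round(gap, 2))
```

Note that no single estimate is biased here; the bias is created entirely by the act of choosing the maximum. The larger the noise relative to the real differences between options, the worse the curse, which is why it bites hardest for opportunities with substantial, poorly understood uncertainty.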

I go into a lot more detail in the full post.

