Thanks for these points!
My quick take is that 1. definitely sounds right and important to me, and I think it would have been good if we had discussed it more in the doc.
I think 2. points to the very important question (which I think we’ve mentioned somewhere under Further research) of how typical performance/output metrics relate to what we ultimately care about in EA contexts, i.e. positive impact on well-being. At first glance I’d guess that these metrics sometimes ‘overstate’ the heavy-tailedness of EA impact (e.g. for the reasons you mentioned), but sometimes they might also ‘understate’ it. For instance, the metrics might not ‘internalize’ all the effects on the world (e.g. ‘field building’ effects from early-stage efforts); for some EA situations the ‘market’ may be even more winner-takes-most than usual (e.g. for some AI alignment efforts it only matters whether you can influence DeepMind); or the ‘production function’ might have higher returns to talent than usual (e.g. founding a nonprofit or contributing valuable research to a preparadigmatic field may be “extra hard” in a way that standard metrics don’t capture, compared to easier cases).