I was planning to write almost exactly the same post! I think the leverage formula is a great gem hidden in WWOTF’s appendices.
Having said that, my understanding is it still doesn’t really capture S-curves, which seem to be at the root of your and all the other criticisms of neglectedness, and I would argue they apply almost everywhere. In addition to the examples you gave: global poverty interventions are only as high-EV as they are because people did a bunch of groundwork generating the data that early GiveWell/GWWC researchers collated (and that collation was only valuable because they then found people to donate based on it). Technical AI safety research might eventually become high value, but my (loose) understanding is it’s contributing very little if anything to current AI development. Marginal domestic animal welfare interventions seem to be pretty good, while marginal wild animal welfare interventions are still largely worthless. Climate change work might have reached diminishing marginal value, but even that still seems like a contested question, and it’s about the ‘least neglected’ area anyone might consider an EA cause.
It seems very hard either to define a mathematical ‘default’ for an S-curve or to reason about one counterfactually (how should we counterfactually credit the early contributors to an S-curve that ultimately takes off?). But IMO these are problems to be solved and tradeoffs to be made, not a reason to keep applying neglectedness as if there were no better alternative.
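To make the ‘mathematical default’ point concrete, here’s a minimal sketch (my own illustration, not anything from WWOTF) of the standard logistic S-curve: treat cumulative impact as logistic in total resources invested, so the marginal value of the next unit of work is its derivative. The parameter names (`L`, `k`, `x0`) are just the usual logistic parameters, chosen for illustration.

```python
import math

def logistic(x, L=1.0, k=1.0, x0=0.0):
    """Cumulative impact as a logistic function of total resources invested.
    L = ceiling, k = steepness, x0 = inflection point."""
    return L / (1.0 + math.exp(-k * (x - x0)))

def marginal_value(x, L=1.0, k=1.0, x0=0.0):
    """Derivative of the logistic: the value of the next marginal unit."""
    s = logistic(x, L, k, x0)
    return k * s * (L - s) / L

# Marginal value is low when a field is very neglected (far left of the
# curve), peaks at the inflection point, and decays again at saturation,
# so 'more neglected' does not monotonically mean 'higher marginal value'.
for resources in (-4.0, 0.0, 4.0):
    print(resources, round(marginal_value(resources), 3))
```

The awkward feature this makes visible is the symmetry: under this default, a very neglected field and a nearly saturated one have identical marginal value, and all the action is in guessing where on the curve you are, which is exactly the judgment the neglectedness heuristic skips.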