I think it’s good to consider these kinds of hypotheticals, and I don’t know that I’ve seen it done very often (although someone may have expressed similar ideas in different terms). I’m also a big fan of recognizing stairstep functions and the possibility of (locally) increasing marginal utility (there’s a toy sketch of what I mean after the list below), although I don’t think I’ve typically labeled those kinds of scenarios as “high-hanging fruit” (which is not to say the label is wrong; I just don’t recall ever thinking of it like that). That being said, I did have a few thoughts/points of commentary:
• There are a variety of valid reasons to be skeptical of such “high-hanging fruit” and to favor “low-hanging fruit”. Other people would be more qualified to explain these, but they include, e.g.: the difficulty of finding reliable evidence that massive increases in effort would yield increasing marginal returns (since that may mean the intervention has never been tested at the theorized scale of impact); the heuristic of “if it’s so obvious and big, why are we only now hearing about it?” (when applicable and within reason, of course); and the 80-20/power-law principle combined with the fact that a decent amount of low-hanging fruit still seems ripe for picking (e.g., low-cost, high-impact health interventions that still have room for funding).
• I suspect that as the benefits of (or need for) mass mobilization became fairly obvious, the general public would become more willing to mobilize, making the EA community less uniquely necessary (although this definitely depends on the situation in question).
• The EA community may not be sufficiently large and coordinated (roughly, the product of its size and its degree of coordination) to overcome the marginal-impact barriers.
• Applying the previous points to the scenarios you lay out (which I recognize are not the only possible examples, but they still help to illustrate): for hypothetical 2, regarding climate change, it seems very unlikely both that the general public would fail to support such a promising intervention and that the EA community would be the unique tipping point needed to reach massive impact; for hypothetical 1, regarding animal advocacy, such a situation seems fairly unlikely given the standard reasoning behind diminishing marginal returns and the historical experience of animal advocacy (e.g., some people will simply be very difficult to convince), and again it seems unlikely (albeit less so) that EAs would be the unique tipping point for massive impact.
• Especially considering the previous points, I don’t know how widespread or harmful the assumption of diminishing marginal returns actually is in practice within the EA community, relative to how often it is inaccurate. I think it probably is an area for improvement/refinement, but it’s always possible to overcorrect, and I suspect that when and where increasing marginal returns become fairly obvious, EAs would do a half-decent job of recognizing them (although probably not as well as they would if they thought about these scenarios more often).
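To make the stairstep idea concrete, here is a minimal sketch (in Python, with a logistic curve and made-up numbers chosen purely for illustration; the threshold, steepness, and effort values are not empirical estimates) of a utility function whose marginal returns increase locally up to a mobilization threshold and diminish afterwards:

```python
import numpy as np

def stairstep_utility(effort, threshold=100.0, steepness=0.1):
    """Toy 'stairstep' utility curve: a logistic step around a mobilization
    threshold. Below the threshold, each extra unit of effort yields
    increasing marginal utility (the curve accelerates); past it, returns
    diminish. All numbers are illustrative, not empirical estimates."""
    return 1.0 / (1.0 + np.exp(-steepness * (effort - threshold)))

efforts = np.array([50.0, 90.0, 100.0, 110.0, 150.0])
marginal = np.gradient(stairstep_utility(efforts), efforts)
for e, m in zip(efforts, marginal):
    print(f"effort={e:6.1f}  marginal utility ~ {m:.4f}")
# Marginal utility peaks near the threshold (effort = 100) and falls off
# on either side: returns increase locally, then diminish.
```

The point of the sketch is just that, under this kind of curve, a community operating well below the threshold sees little return per unit of effort, which is exactly why the “is the EA community sufficiently large and coordinated?” question above matters for high-hanging fruit.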