So I disagree with what I think you mean by your claim that “There probably won’t be as astronomical differences in value within these cause areas (e.g. between different ways to improve near-term human welfare)”.
For the record, on reflection, I actually don’t think this claim is important for my general argument, and I agree with you that it might not be true.
What really matters is whether there are astronomical differences in (expected) value between the best interventions in each cause area.
In other words, in theory it shouldn’t matter if the top-tier shorttermist interventions are astronomically better than mid-tier shorttermist interventions; what matters is how the top-tier shorttermist interventions compare to the top-tier longtermist interventions.
I think this claim does matter in that it affects the opportunity costs of thinking about IBCs. (Though I agree that it doesn’t by itself make or break the case for thinking about IBCs.)
If the differences in expected impact (after further thought) between the superficially-plausibly-best interventions within the best cause area are similar in size to the differences in expected impact (after further thought) between cause areas, then it’s much less obvious that all/most EAs should have a high-level understanding of all/most cause areas. (Note that I said “much less obvious”, not “definitely false”.)
It’s still plausible that every EA should first learn about almost all IBCs, and then learn about almost all important within-cause considerations for the cause area they now prioritise. But it also seems plausible that they should cut off their between-cause prioritisation earlier in order to roll with their best guess at that point, and from then on just focus on doing great work within that cause area, trusting that other community members will be doing great work in other cause areas. (This would be a sort of portfolio, multiplayer-thinking approach, as noted in one of my other comments.)