I think this claim does matter in that it affects the opportunity costs of thinking about IBCs. (Though I agree that it doesn't by itself make or break the case for thinking about IBCs.)
If the differences in expected impact (after further thought) between the superficially-plausibly-best interventions within the best cause area are similar to the differences in expected impact (after further thought) between cause areas, that makes it much less obvious that all/most EAs should have a high-level understanding of all/most cause areas. (Note that I said "much less obvious", not "definitely false".)
It's still plausible that every EA should first learn about almost all IBCs, and then learn about almost all important within-cause considerations for the cause area they now prioritise. But it also seems plausible that they should cut off the between-cause prioritisation earlier in order to roll with their best-guess-at-that-point, and from then on just focus on doing great within that cause area, and trust that other community members will also be doing great in other cause areas. (This would be a sort of portfolio, multiplayer-thinking approach, as noted in one of my other comments.)