Yeah, I was gonna say something similar.
Specifically, I wonder whether any longtermists (or any prominent ones) actually do argue that in expectation “some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more”. They may instead argue that it may be true in reality, but not in expectation, due to our vast uncertainty about which causes are the most valuable ones. (This seems to be Michael’s own position, given his final paragraph, and I think it’s roughly what Tomasik argues in the link provided there.)
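To make that expectation point concrete, here’s a toy numeric sketch (all numbers are hypothetical, not taken from anyone’s actual estimates): even if exactly one of N candidate causes really is ~10^30 times more valuable than the rest, near-uniform credences over *which* cause that is compress the expected values to within a small factor of each other.

```python
# Toy sketch, hypothetical numbers: true cause values differ by ~10^30,
# but expected values differ only modestly when we're very unsure which
# cause is the astronomically valuable one.

N = 100                  # number of candidate causes (arbitrary, for illustration)
V_BIG, V_SMALL = 1e30, 1.0

# Credence that a given cause is "the big one": our best guess gets
# twice the credence of each other cause -- vast uncertainty overall.
p_best = 2 / (N + 1)
p_other = 1 / (N + 1)    # each of the N - 1 other causes; credences sum to 1

def expected_value(p_is_big: float) -> float:
    """Expected value of a cause, given our credence that it's the big one."""
    return p_is_big * V_BIG + (1 - p_is_big) * V_SMALL

print(f"true-value ratio:     {V_BIG / V_SMALL:.0e}")  # ~1e+30
print(f"expected-value ratio: "
      f"{expected_value(p_best) / expected_value(p_other):.1f}")  # ~2.0
```

Of course, the real disagreement is over whether our credences should be anywhere near that flat; the sketch only shows how flat credences compress expected-value ratios even when true-value ratios are astronomical.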
And aside from the reasons you mentioned, an additional reason for not going all-in on one’s best guess when one is so very uncertain is that there may be a lot of information value in exploring something other than that best guess (a toy sketch of this follows the list below), if:
- you’d be less good at exploring your best guess than at exploring something else that’s plausibly similarly/more pressing, due to personal fit (e.g., you’d be much more suited to gaining insights in one area than the other)
- your best guess has already been explored more than something else that’s plausibly similarly/more pressing (e.g., AI safety vs permanent totalitarianism), such that your/our credences about the latter are less robust
Your findings could then inform your future efforts, or future efforts by others.
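Here’s one way to cash out that information-value point, under assumptions I’m adding myself (Beta-distributed credences and a single Bernoulli “finding” per unit of exploration): the same unit of exploration is expected to shrink uncertainty far more in a little-explored area, where credences are less robust, than in a well-explored one.

```python
# Toy Bayesian sketch (my own hypothetical setup): one more observation
# reduces expected uncertainty more where credences are less robust.

def beta_var(a: float, b: float) -> float:
    """Variance of a Beta(a, b) credence about how pressing an area is."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

def expected_var_after_one_obs(a: float, b: float) -> float:
    """Expected posterior variance after one Bernoulli observation."""
    p = a / (a + b)  # prior probability the observation comes up positive
    return p * beta_var(a + 1, b) + (1 - p) * beta_var(a, b + 1)

areas = {
    "well-explored area (robust credence)":    (50, 50),
    "little-explored area (fragile credence)": (2, 2),
}
for label, (a, b) in areas.items():
    gain = beta_var(a, b) - expected_var_after_one_obs(a, b)
    print(f"{label}: expected uncertainty reduction = {gain:.5f}")
# well-explored:   ~0.00002
# little-explored: ~0.01000
```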
On moral trade/coordination, these posts are also relevant:
- https://80000hours.org/articles/coordination/
- https://forum.effectivealtruism.org/tag/cooperation-and-coordination