If I understand correctly, you’re arguing that we either need to:
1. Put precise estimates on the consequences of what we do for net welfare across the cosmos, and maximize EV w.r.t. these estimates, or
2. Go with our gut … which is just implicitly putting precise estimates on the consequences of what we do for net welfare across the cosmos, and maximizing EV w.r.t. these estimates.
I think this is a false dichotomy,[1] even for those who are very confident in impartial consequentialism and risk-neutrality (as I am!). If (as suggested by titotal’s comment) you worry that precise estimates of net welfare conditional on different actions are themselves vibes-based, you have option 3: Suspend judgment on the consequences of what we do for net welfare across the cosmos, and instead make decisions for reasons other than “my [explicit or implicit] estimate of the effects of my action on net welfare says to do X.” (Coherence theorems don’t rule this out.)
What might those other reasons be? A big one is moral uncertainty: If you truly think impartial consequentialism doesn’t give you compelling reasons either way, because our estimates of net welfare are hopelessly arbitrary, it seems better to follow the verdicts of other moral views you put some weight on. Another alternative is to reflect more on what your reasons for action are exactly, if not “maximize EV w.r.t. vibes-based estimates.” You can ask yourself, what does it mean to make the world a better place impartially, under deep uncertainty? If you’ve only looked at altruistic prioritization from the perspective of options 1 or 2, and didn’t realize 3 was on the table, I find it pretty plausible that (as a kind of bedrock meta-normative principle) you ought to clarify the implications of option 3. Maybe you can find non-vibes-based decision procedures for impartial consequentialists. ETA: Ch. 5 of Bradley (2012) is an example of this kind of research, not to say I necessarily endorse his conclusions.
(Just to be clear, I totally agree with your claim that we shouldn’t dismiss shrimp welfare — I don’t think we’re clueless about that, though the tradeoffs with other animal causes might well be difficult.)
This is also my reply to Michael’s comments here and here.