Strong upvoting because I think these are good, important points with an excellent TL;DR and title.
As Frank Jackson stressed in his paper on ‘Decision-Theoretic Consequentialism’, in certain risky cases we may know that a “safe” option will not maximize value, yet it may nonetheless maximize expected value (if the alternatives risk disaster), and is for that very reason the prudent and rational choice.
I suspect a lot of the ‘systemic change’ critique of donating to the Against Malaria Foundation is motivated by this kind of thinking. You’ll often hear people say something like, “Bed-nets alone will never eliminate poverty and injustice!” as if accepting that claim would entail that buying bed-nets is worse than taking action that has a (more plausible) shot at transforming the entire system. Maximising expected value does not always mean maximising the chance of a perfect world.
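To make that last sentence concrete, here is a toy comparison in the Jackson spirit (the probabilities and values are illustrative assumptions of mine, not numbers from Jackson or from the post):

```python
# Toy illustration: the option most likely to produce the best possible
# outcome need not be the option with the highest expected value.
# All numbers below are illustrative assumptions.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs whose probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

options = {
    # A "safe" intervention: a certain, moderate amount of good.
    "safe option": [(1.0, 50)],
    # A "transformative" bet: a small chance of an enormous payoff,
    # otherwise roughly nothing.
    "transformative bet": [(0.01, 1000), (0.99, 0)],
}

for name, outcomes in options.items():
    print(f"{name}: EV = {expected_value(outcomes)}")

# safe option: EV = 50.0
# transformative bet: EV = 10.0
# The bet is the only option with any chance of the best outcome (1000),
# yet the safe option has five times its expected value.
```

With different (also defensible) numbers the comparison can flip, which is part of what the next parenthetical is getting at.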
(I also think that sometimes the reasoning in these cases is something closer to rule consequentialism, which I have more sympathy for. And I’m sure sometimes they’re also using expected value, just plugging in different numbers.)
I get that cluelessness in the face of massive invisible long-term stakes can be angst-inducing.
The feeling I struggle with the most here is paralysis in the face of a seemingly relentless string of crucial considerations flipping the sign of the value of the path I’m on. (There’s a great line in the Zhuangzi that captures this nicely for me: “Confucius went along for sixty years and transformed sixty times. What he first considered right he later considered wrong. He could never know if what he presently considered right were not fifty-nine times wrong.”) Your arguments still work in such cases—there’s still no need for paralysis, but emotionally speaking it’s very tempting!