Questioning the Foundations of EA

> For me, basically every other question around effective altruism is less interesting than this basic one of moral obligation. It’s fun to debate whether some people/institutions should gain or lose status, and I participate in those debates myself, but they seem less important than these basic questions of how we should live and what our ethics should be.

Prompted by this quote from Scott Alexander’s recent Effective Altruism As A Tower Of Assumptions, I’m linking a couple of my old LessWrong posts that speak to “these basic questions”. They were written and posted before or shortly after EA became a movement, so perhaps many in EA have never read them or heard of these arguments. (I have not seen these arguments reinvented/rediscovered by others, or effectively countered/refuted by anyone, but I’m largely ignorant of the vast academic philosophy literature, in which the same issues may have been discussed.)

The first post, Shut Up and Divide?, was written in response to Eliezer Yudkowsky’s slogan of “shut up and multiply”, but I think it also works as a counter to Peter Singer’s Drowning Child argument, which many may see as foundational to EA. (For example, Scott wrote in the linked post, “To me, the core of effective altruism is the Drowning Child scenario.”)

The second post, Is the potential astronomical waste in our universe too small to care about?, describes a consideration under which someone who starts out with relatively high credence in utilitarianism (or utilitarian-ish values) may nevertheless find it unwise to devote many resources to utilitarian(-like) pursuits in the universe we find ourselves in.

To be clear, I continue to have a lot of moral uncertainty and do not consider these to be knockdown arguments against EA or against caring about astronomical waste. There are probably counterarguments to them that I’m not aware of (either in the existing literature or in platonic argument space), and we are probably still ignorant of many other relevant considerations. (For one such consideration, see my Beyond Astronomical Waste.) I’m drawing attention to them because many EAs may place too much trust in the foundations of EA, in part due to being unaware of these arguments.