I think that a broad moral circle follows from EA in the same way that generally directing resources to the developing world rather than the developed world follows from EA. In fact, I think the adoption of a broad moral circle would come several steps before the conclusion favoring developing-world assistance. However, I am not sure how wise it is to bundle particular moral commitments into the definition of EA when it could be defined simply as the deliberate use of reason to do the most good, insofar as we are engaged in the project of doing good, without specifying what "the good" is. Otherwise, there could be endless arguments about which moral commitments one must make in order to count as an EA.
Of course, my definition would require me to bite the bullet that one could be an "effective 'altruist'" while being purely selfish, if they adopted a position such as ethical egoism. But I think confining the definition of EA to the deliberate use of reason to best do good, while leaving open what that consists of, is the cleaner path. The EA community's rejection of egoists would then follow from the fact that such egoism does not survive their moral epistemology (or whatever process they use to discern the good). This would be similar to the scientific community's rejection of a theory in which the sun revolves around the earth: they do not point to a clause within the definition of science that rules out that possibility, but rather to a higher-order process that leads to its refutation. Moral epistemology would follow from the more basic requirement of reason and deliberateness (we can't do the most good unless we have some notion of what the good is).