There's certainly room for disagreement over the precise details, but I do think of a broad moral circle as essential to the "A" part of "EA". As a limiting case: an effective egoist is not an EA.
I feel like there might be two things going on here:
an abstract argument that you need some altruism before you make it effective. This would have a threshold, but probably not a very broad one.
a feeling like there's some important ingredient in the beliefs held by the cluster of people who associate with the label EA, which speaks to what their moral circles look like (at least moderately broad, but also probably somewhat narrowed in the sense of https://gwern.net/narrowing-circle ).
I in fact would advocate some version of EA according-to-their-own values to pretty much everyone, regardless of the breadth of their moral circle. And it seems maybe helpful to be able to talk about that? But it's also helpful to be able to talk about the range of moral circles that people around EA tend to feel good about. It could be nice if someone named these things apart.
"EA-according-to-their-own values", i.e. E, is just instrumental rationality, right?
ETA: or maybe you're thinking instead of something like actually internalizing/adopting their explicit values as ends, which does seem like an important separate step?
I was meaning "instrumental rationality applied to whatever part of their values is other-affecting".
I think this is especially important to pull out explicitly relative to regular instrumental rationality, because the feedback loops are less automatic (so a lot of the instrumental rationality people learn by default is in service of their prudential goals).
I think that a broad moral circle follows from EA in the same way that generally directing resources to the developing world rather than the developed world follows from EA. In fact, I think the adoption of a broad moral circle comes several steps before the conclusion favoring developing-world assistance. However, I am not sure how wise it is to bundle particular moral commitments into the definition of EA when it could be defined simply as the deliberate use of reason to do the most good, insofar as we are engaged in the project of doing good, without specifying what "the good" is. Otherwise, there could be endless arguments about which moral commitments one must make in order to be an EA.
Of course, my definition would require me to bite the bullet that one could be an "effective 'altruist'" and be purely selfish if they adopted a position such as ethical egoism. But I think confining the definition of EA to the deliberate use of reason to best do good, leaving open what that consists of, is the cleaner path. The EA community's rejection of egoists would then follow from the fact that such egoism does not follow from their moral epistemology (or from whatever process they use to discern the good). This would be similar to the scientific community's rejection of a theory in which the sun revolves around the earth: they do not point to enumerations within the definition of science that rule out that possibility, but rather to a higher-order process which leads to its refutation. Moral epistemology would follow from the more basic requirement of reason and deliberateness (we can't do the most good unless we have some notion of what the good is).