I am a bit concerned with the "broad moral circle" being definitional to Effective Altruism (though it accords with my own moral views and with those of EAs generally). If I recall correctly, EA, zoomed out as far as possible, has not committed to specific moral views. There are disagreements among EAs, for instance, as to whether deontological constraints should limit actions, or whether we should act wholly to maximize welfare as utilitarians. I had thought that the essence of effective altruism is to "do good", at least to the extent that we are trying to do so, as effectively as we can.
Consequently, I would see the fundamental difference between what EA altruists and non-EA altruists are doing as one of deliberateness, from which instrumental rationality would proceed. The non-EA altruist looks to do good without deliberating on how to do so as best he/she can, or with only bounded deliberation on this point. The EA looks to do good with deliberation on how to do so as best he/she can.
I would agree that setting a broad moral circle would be an early part of what one would do as an EA (before broader cause prioritization, for instance), but EA has traditionally been open-minded as to which moral philosophies are true or false, and many have viewed this as an important part of the EA project. Consequently, I would put the adoption of a broad moral circle as a moral value at least one step beyond the definition of EA.
There's certainly room for disagreement over the precise details, but I do think of a broad moral circle as essential to the "A" part of "EA". As a limiting case: an effective egoist is not an EA.
I feel like there might be two things going on here:
1. An abstract argument that you need some altruism before you make it effective. This would impose a threshold, but probably not a very broad one.
2. A feeling that there's some important ingredient in the beliefs held by the cluster of people who associate with the label EA, which speaks to what their moral circles look like (at least moderately broad, but also probably somewhat narrowed in the sense of https://gwern.net/narrowing-circle).
I in fact would advocate some version of EA-according-to-their-own-values to pretty much everyone, regardless of the breadth of their moral circle. And it seems maybe helpful to be able to talk about that? But it's also helpful to be able to talk about the range of moral circles that people around EA tend to feel good about. It could be nice if someone named these things apart.
"EA-according-to-their-own-values", i.e. just the "E", is instrumental rationality, right?
ETA: or maybe you're instead thinking of something like actually internalizing/adopting their explicit values as ends, which does seem like an important separate step?
I meant "instrumental rationality applied to whatever part of their values is other-affecting".
I think this is especially important to pull out explicitly relative to regular instrumental rationality, because the feedback loops are less automatic (so a lot of the instrumental rationality people learn by default is in service of their prudential goals).
I think that a broad moral circle follows from EA in the same way that generally directing resources to the developing world rather than the developed world follows from EA. In fact, I think the adoption of a broad moral circle would come steps before the conclusion favoring developing-world assistance. However, I am not sure how wise it is to bundle particular moral commitments into the definition of EA when it could be defined simply as the deliberate use of reason to do the most good, insofar as we are in the project of doing good, without specification of what "the good" is. Otherwise, there could be endless arguments about which moral commitments one must make in order to be an EA.
Of course, my definition would require me to bite the bullet that one could be an effective "altruist" and be purely selfish if one adopted a position such as ethical egoism. But I think confining the definition of EA to the deliberate use of reason to best do good, leaving open what that consists of, is the cleaner path. The EA community's rejection of egoists would then follow from the fact that egoism does not follow from their moral epistemology (or from whatever process they use to discern the good). This would be similar to the scientific community's rejection of the theory that the sun revolves around the earth: scientists do not point to enumerations within the definition of science that rule out that possibility, but rather to a higher-order process that leads to its refutation. Moral epistemology would follow from the more basic requirement of reason and deliberateness (we can't do the most good unless we have some notion of what the good is).