I spoke to someone today who was planning to write a critique of this paper, so I won't steal her thunder, but I still have a few thoughts on the paper/the points of the paper as paraphrased.
Epistemic status: Just rambling a bit because the post made me think.
In spite of consequentialism, it's difficult to deny at least some particularistic moral obligations that individuals have: the duty to care for one's own children; the duty to repay monetary or social debts owed; the duty to treat others with equity and fairness.
This critique would have more teeth for me if it mapped onto anything I recognized from the actual EA community.
Many people don't have strong moral theories at all; they'd answer questions about morality if you asked them, but they don't go about their days wondering what the best thing to do is in a given situation. And yet, I think that most people in this category are basically "good people": they care about their children, repay their debts, treat other people decently in most cases, etc.
Most people in EA don't have their "maximize impact" mode on at all times. There are lots of parents in the community; they didn't decide not to have kids because it would let them donate more, and (as far as I know) they don't neglect their children now to have more time for work. That's because we can have more than one goal; it's entirely possible to endorse a moral theory without attempting to maximize the extent to which you fulfill it in every action.
If you asked basically anyone in EA whether parents have an obligation to care for their children, I think they'd say "yes". But EA isn't really focused on personal lives and relationships; for the most part, it aims to help people use spare resources that aren't already allotted for other obligations. You aren't obligated to pursue a particular career, so choosing one with EA in mind may not violate any obligations. You aren't obligated to support a particular charity… and so on.
I always like to refer back to Holden Karnofsky when I hear arguments of this type:
"In general, I try to behave as I would like others to behave: I try to perform very well on 'standard' generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it. I wouldn't steal money to give it to our top charities; I wouldn't skip an important family event (even one that had little meaning for me) in order to save time for GiveWell work."
Right action also includes acting, when appropriate, in ways reflective of the broad virtue of justice, which aims at an end (giving people what they are owed) that can conflict with the end of benevolence. If we are responsive to circumstances, sometimes we will act with an eye to others' well-being, and sometimes with an eye to other ends.
I'd be very interested in seeing what someone's approach to maximizing (justice times X) + (benevolence times Y) would look like. I see EA as the project of "trying to get as much good as possible under certain definitions of 'good'", and I could be convinced that justice is something that can be part of a reasonable definition (unlike, say, the glorification of a deity).
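For concreteness, here is one purely illustrative way to write that objective down (my own sketch, not anything Crary proposes), where J(a) and B(a) are hypothetical scores for how much an action a promotes justice and benevolence, and X and Y are non-negative weights:

\[ a^{*} = \arg\max_{a} \; \bigl( X \cdot J(a) + Y \cdot B(a) \bigr) \]

Of course, nothing in this sketch says how to define J and B, or how to set X and Y; that's exactly the part I'd want to see someone spell out.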
That said, if the argument here is that justice needs to be part of any moral theory that isn't "confused" or "empty", it seems like Crary is picking a fight with many different branches of philosophy that are perfectly capable of defending themselves (so, as a non-philosopher who doesn't speak this language well, I'll bow out on this point).
For EA to make space for these individuals, it would have to acknowledge that their moral and political beliefs pose threats to its guiding principles and that these principles themselves are contestable. To acknowledge this would be to concede that EA, as it is currently conceived, might need to be given up.
At this point, I realized I was really confused and perhaps misinterpreting Crary. EA's principles are certainly contestable, just as any set of moral principles is contestable; this is an area of debate as old as the concept of debate. Does Crary believe that the moral theories she favors don't exclude any views of values? Is she arguing that a valid moral theory will necessarily be a big-tent sort of thing that gives everyone a say? (Will MacAskill's work on moral uncertainty, which involves putting some weight on a wide range of moral theories, might align with this, though I haven't read it closely.)