Thanks for sharing! One thing I didn’t notice in the summary: The talk seemed specifically focused on the impact of EA on the animal advocacy space (which I found mildly surprising and interesting, since these critiques pattern match much more to global health/equity/justice concerns)
This article seems to basically boil down to “take a specific view of morality that the author endorses, which heavily emphasises virtue, justice, systemic change and individual obligations, and is importantly not consequentialist, yet also demanding enough to be hard to satisfice on”.
Then, once you have taken this alternate view, observe that it wildly changes your moral conclusions, your opinions on how to act, and your assessment of much of what EA stands for.
You can quibble that “the article claims to be challenging the fundamental idea of EA, yet EA is compatible with any notion of the good and capable of pursuing it effectively”. But I personally think that EA DOES have a bunch of common moral beliefs, e.g. the importance of consequentialism, impartial views of welfare, the importance of scope and numbers, and to some degree utilitarianism. And I think EA is robust to people not sharing all of these views, and to pluralistic views like others in this thread have argued for (e.g., put in the effort to be a basically decent person according to common-sense morality, and then ruthlessly optimise for your notion of the good with your spare resources). But I think you also need to make some decisions about what you do and do not value, especially for a moral view that’s demanding rather than just “be a basically decent person”, and her view seems fairly demanding?
I’m a bit confused about EXACTLY what view of morality is being described here: it pattern matches onto virtue ethics, plus views about the importance of justice and systemic change? But I definitely think it’s quite different from any system that I subscribe to. And it doesn’t feel like the article is really trying to convince me to take up this view; it just takes it as implicit. And it seems fine to note that most EAs hold some specific moral beliefs, and that if you substantially disagree with those then you reach different conclusions? But that’s hardly a knock-down critique of EA; it’s just the point that tradeoffs are hard and you need to pick your values to make decisions.
The paragraph of the talk that felt most confusing/relevant:
This philosophical critique brings into question effective altruists’ very notion of doing the “most good.” As effective altruists use it, this phrase presupposes that the rightness of a social intervention is a function of its consequences and that the outcome involving the best consequences counts as doing most good. This idea has no place within an ethical stance that underlies the philosophical critique. Adopting this stance is a matter of seeing the real fabric of the world as endowed with values that reveal themselves only to a developed sensibility. To see the world this way is to leave room for an intuitively appealing conception of actions as right insofar as they exhibit just sensitivity to the worldly circumstances at hand. Accepting this appealing conception of action doesn’t commit one to denying that right actions frequently aim at ends. Here acting rightly includes acting in ways that are reflective of virtues such as benevolence, which aims at the well-being of others. With reference to the benevolent pursuit of others’ well-being, it certainly makes sense to talk about good states of affairs. But it is important, as Philippa Foot once put it, “that we have found this end within morality, forming part of it, not standing outside it as a good state of affairs by which moral action in general is to be judged” (Foot 1985, 205). Right action also includes acting, when appropriate, in ways reflective of the broad virtue of justice, which aims at an end—giving people what they are owed—that can conflict with the end of benevolence. If we are responsive to circumstances, sometimes we will act with an eye to others’ well-being, and sometimes with an eye to other ends. In a case in which it is not right to improve others’ well-being, it makes no sense to say that we produce a worse result. To say this would be to pervert our grasp of the matter by importing into it an alien conception of morality. If we keep our heads, we will say that the result we face is, in the only sense that is meaningful, the best one. There is here simply no room for EA-style talk of “most good.”
Thanks for your remarks. I’m looking forward to her full article being published, because I agree that, as it stands, she’s been pretty vague. The full article might clear up some of the gaps here.
From what you and others have said, the most important gap seems to be “why we should not be consequentialists”, which is a much bigger question than just EA! I think there is something compelling here; I might reconstruct her argument like this:
1. EAs want to do “the most good possible”.
2. Ensuring more systemic equality and justice is good.
3. We can do things that ensure systemic equality and justice, and doing this is good (this follows from 2), even if it’s welfare-neutral.
4. If you want to do “the most good”, then you will need to do things that ensure systemic equality and justice too (from 3).
5. Therefore (from 1 and 4), EAs should care about more than just welfare.
6. You can’t quantify systemic equality and justice.
7. Therefore (from 5 and 6), if EAs want to achieve their own goals, they will need to move beyond quantification.
Consequentialists will probably reply that (3) is wrong: if improving justice and equality doesn’t improve long-term well-being, then it isn’t actually good. I suppose I believe that, but I’m unsure about it.
I think what you’ve written is not an argument against consequentialism; it’s an argument against trying to put numbers on things in order to rank the consequences?
Regardless, that wasn’t how I interpreted her case. It doesn’t feel like she cares about the total amount of systemic equality and justice in the world. She fundamentally cares about this from the perspective of the individual doing the act, rather than the state of the world, which seems importantly different. And to me, THIS part is what breaks consequentialism.