I think this paper is weak from the outset in similar ways to the entire philosophical project of EA overall. You start with the definition of EA as “the project of trying to find the best ways of helping others, and putting them into practice”. In that definition “the best” means “the most effective”, which is one of the ways in which EA arguments rhetorically load the dice. If I don’t agree that the most effective way to help people (under EA definitions) is always and necessarily the best way to help people, then the whole paper is weakened. Essentially, one ends up preaching to the choir—which is fine if that’s what one wants to do, of course.
I take issue with a number of the arguments in the paper, but I have no desire to respond to the entire thing. However, I will focus on the part of the Moral Prioritisation section that quotes Mark Goldring of Oxfam—not because I’m a fan of him or Oxfam, which I am not, but because your misinterpretation of his position is quite illustrative. You claim that “Goldring seems to be implying that so long as we help some children in each country, it does not matter how many children we end up abandoning”, but this is neither the argument nor an implication of it.
First, Goldring is referring to Oxfam’s country portfolio rather than a specific group of children, and he obviously believed that applying EA principles to Oxfam’s portfolio would require the organisation simply to cease working in South Sudan because the cost of getting children into education is higher in South Sudan than e.g. Bangladesh. It seems to me that his belief was correct, and that it is morally unjustifiable to abandon the people of South Sudan because somebody sitting in a comfortable office somewhere has done some calculations and decided that those people are not worth it.
You may object to my characterisation of EA in this way, but as far as I can tell that is the fundamental argument. Oxfam claims to, tries to and perhaps even does operate on the basis of need, and the need of children in South Sudan is at least equal to the need of children in Bangladesh. In fact it might be greater, since as Goldring points out, the barriers to school attendance are high in South Sudan compared to Bangladesh. This also highlights (to me, at least) that these situations are sufficiently complex that the type of utilitarian calculus applied by EA is largely self-defeating in many real-world attempts to help people.
Anticipating the downvotes, hoping for discussion.
I’m very puzzled by this comment. Your characterization of Goldring’s argument is precisely the argument I’m responding to, so I’m confused that you present this as though you think I am interpreting Goldring as saying something different. I argue that an objectionable implication of Goldring’s position (and yours) is that we should abandon a larger group of children because they are in a country (Bangladesh) for which we have already helped some other children. You haven’t responded to my argument at all.
Thank you for replying, although I admit to being equally puzzled by your puzzlement.
What Goldring is paraphrased as saying is that “For a certain cost, the charity might enable only a few children to go to school in a country such as South Sudan, where the barriers to school attendance are high, he says; but that does not mean it should work only in countries where the cost of schooling is cheaper, such as Bangladesh, because that would abandon the South Sudanese children.”
Goldring is not “implying that so long as we help some children in each country, it does not matter how many children we end up abandoning”. I simply don’t see where you get that from. It’s just not the argument that he’s making. His argument is that the needs of children in South Sudan and Bangladesh are equally important, that the foundation for Oxfam’s work is needs rather than costs, and that the accident of birth that placed a child in South Sudan and not Bangladesh is thus not a justification to abandon the former.
What Goldring does imply is that applying “EA principles” would require Oxfam to abandon all the children of South Sudan—and probably for every aid organisation to abandon the entire country, since South Sudan is a difficult and costly working environment. In this case “quantity has a quality all of its own”—the argument that justifies abandoning 100 children in one country in favour of 1000 children in another looks markedly different when it’s used to justify withdrawing all forms of assistance from an entire country.
This highlights the conflict between EA’s approach—which takes “effectiveness” (specifically cost-effectiveness) as an intrinsic rather than instrumental value—and the framework used by others, who have other intrinsic values. That conflict is the reason why we may be talking past each other—I recognise that you probably won’t agree with this argument, and may continue to be puzzled. I would suggest to you that this is the fundamental weakness of the paper—that you are not taking these criticisms of EA in good faith, and in some cases are addressing straw man versions of them.
How far are you willing to push this? Presumably, you wouldn’t educate 1 child in South Sudan and 10 in Bangladesh, rather than 0 in South Sudan and 10,000 in Bangladesh, just so that you can say South Sudan hasn’t been abandoned? So exactly how many more children have to go without education before you say “that’s too many more” and switch to one country? What could justify a particular cut-off?
I’m not a utilitarian, so I reject the premise of this question when presented in the abstract as it is here. Effectiveness for me is an instrumental value, so I would need to have a clearer picture of the operating environments in both countries and the funding environment at the global level before I would be able to answer it.
Just because you’re not a utilitarian doesn’t mean you can reject the premise of the question. Deontologists have the same problem with trade-offs! The premise of the question is one even the Oxfam report accepts. I also don’t think you know what an instrumental value is. I think you keep throwing the term out but don’t understand what it means in terms of how it frames the instrumental empirical question in a way that other values dissolve.
Can you give me an argument for why I can’t reject the premise of the question, rather than just telling me I can’t? I’ve explained why I reject it in these comments. Goldring “accepts” the premise only in the sense that he’s attending an event which is based entirely on that premise, and has had that premise forced onto him through the rhetorical trick which I described in my reply to Chappell.
I think you’re partly right about my confusion about instrumental values. Now that I reconsider, the humanitarian principles are a strange mix of instrumental and intrinsic values; regardless, effectiveness remains solely an instrumental value. Perhaps you could explain what you mean by “other values dissolve”?
Trade-offs inhere in all ethical systems, so “rejecting utilitarianism” doesn’t do the work you think it does.
The actual premise you’re rejecting is one you rely on: the equal moral consideration of peoples. Each time you manipulate the ratio of the trade-off by rejecting “cost-effectiveness”, you break with treating people as morally equivalent.
Reasons you actually can reject the premise:
Actions that are upside bargains, e.g. breaking the trade-off by having both options done. But this is not the nature of aid as it currently stands.
I think what you think you’re doing by saying you’re not a utilitarian is saying that you care about things EAs don’t care about in the impact of aid. But even with other values you create different ratios of trade-offs and Pareto optimality, such that you’re always trading off something even if it’s not utilitarianism. It’s still something that is a cost and something that is a benefit. There’s no rhetorical trick here, just the fungible nature of cash. The fact that cost-effectiveness isn’t an intrinsic value is what makes it a deciding force in the ratio of trade-offs in other values.
Can you explain what you mean by “There’s no rhetorical trick here, just the fungible nature of cash”? In practice cost-effectiveness is a deciding force, but not *the* deciding force.
I think this is what you’re saying: there is a plurality of values that EAs don’t seem to care about, values that are deeply important and are skipped over by naive utilitarianism. These values cannot be measured through cost-effectiveness because they are deeply ingrained in the human experience.
The stronger version, which I think you’re trying to elucidate but are unable to state clearly, is that cost-effectiveness can be inversely correlated with another value that is “more” determinant on a moral level. E.g. North Koreans cost a lot more to help than Nigerians with malaria, but the difficulty of helping them cost-effectively inheres in their situation, which is an injustice in and of itself.
What I am saying is that insofar as we’re in the realm of charity and budgets and financial trade-offs, it doesn’t matter what your intrinsic value commitments are. There are choices that produce more of that value or less of it, which is what the concept of cost-effectiveness captures. Thus, it is a crux no matter what intrinsic value system you pick. Even deontology has these issues, as I noted in my first response to you.
Thanks, yes. I think I’m elucidating it pretty clearly, but perhaps I’m wrong!
As I’ve said, I’m not denying that cost effectiveness is a determinant in decision-making—it plainly is a determinant, and an important one. What I am claiming is that it is not the primary determinant in decision-making, and simple calculus (as in the original thought experiment) is not really useful for decision-making.
The premise I reject is not that there are always trade-offs, but that a naive utilitarian calculus that abstracts and dehumanises individuals by presenting them as numbers in an equation unmoored from reality is a useful or ethical way to frame the question of how “best” to help people.
As I’ve said in another comment, the trolley problem was meant as a stimulus to discussion, not as a guide for making policy decisions around public transport systems.
EDIT: I realise that this description may come across as harsh on a forum populated almost entirely by utilitarians, but I felt that it was important to be clear about the exact nature of my objection. My position is that I agree that utilitarianism should be a tool in our ethical toolkit, but I disagree that it is the tool that we should reach for exclusively, or even first of all.
I suppose that part of my point is that we may not be discussing whether or not it makes sense to help more people rather than fewer. We may be discussing how we can help the people who are most in need, who may cost more or less to help than other people.
I’ve claimed that naive utilitarian calculus is simply not that useful in guiding actual policy decisions. Those decisions—which happen every day in aid organisations—need to include a much wider range of factors than just numbers.
If we keep it in the realm of thought experiments, it’s a simple question and an obvious answer. But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?
‘But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?’
No, of course not. But in assessing the real-world problem, you seemed to be relying on some sort of claim that it is sometimes better to help fewer people if it means a fairer distribution of help. So I was raising a problem for that view: if you think it is sometimes better to distribute money to more countries even though it helps less people, then either that is always better in any possible circumstance, realistic or otherwise, or it’s sometimes better and sometimes not, depending on circumstance. The thought experiment then comes in to show that there are possible, albeit not very realistic, circumstances where it clearly isn’t better. So that shows that one of the two options available to someone with your view is wrong.
Then I challenged the other option, that it is sometimes better and sometimes not, but the thought experiment wasn’t doing any work there. Instead, I just asked what you think determines when it is better to distribute the money more evenly between countries versus when it is better to just help the most people, and implied that this is a hard question to answer. As it happens, I don’t actually think that this view is definitely wrong, and you have hinted at a good answer, namely that we should sometimes help fewer people in order to prioritize the absolutely worst off. But I think it is a genuine problem for views like this that it’s always going to look a bit hazy what determines exactly how much you should prioritize the worst off, and the view does seem to imply there must be an answer to that.
I think we need to get away from “countries” as a frame—the thought experiment is the same whether it’s between countries, within a country, or even within a community. So my claim is not that “it is sometimes better to distribute money to more countries even though it helps less people”.
If we take the Bangladeshi school thought experiment—that with available funding, you can educate either 1000 boys or 800 girls, because girls face more barriers to access education—my claim is obviously not that “it is sometimes better to distribute money to more genders even though it helps less people”. You could definitely describe it that way—just as Chappell describes Goldring’s statement—but that is clearly not the basis of the decision itself, which is more concerned with relative needs in an equity framework.
You are right to describe my basis for making decisions as context-specific. It is therefore fair to say that I believe that in some circumstances it is morally justified to help fewer people if those people are in greater need. The view that this is *always* better is clearly wrong, but I don’t make that assessment on the basis of the thought experiment, but on the basis that moral decisions are almost always context-specific and often fuzzy around the edges.
So while I agree that it is always going to look a bit hazy what determines your priorities, I don’t see it as a problem, but simply as the background against which decisions need to be made. Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?
‘Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?‘
Yes, indeed, I think I agree with everything in this last post. In general non-utilitarian views tend to capture more of what we actually care about at the cost of making more distinctions that look arbitrary or hard to justify on reflection. It’s a hard question how to trade off between these things. Though be careful not to make the mistake of thinking utilitarianism implies that the facts about what empirical effects an action will have are simple: it says nothing about that at all.
Or at least, I think that, technically speaking, it is true that “it is sometimes better to distribute money to more genders even though it helps less people” is something you believe, but that’s a highly misleading way of describing your view: i.e. likely to make a reasonable person who takes it at face value believe other things about you and your view that are false.
I think the countries thing probably got this conversation off on the wrong foot, because EAs have very strong opposition to the idea that national boundaries ever have moral significance. But it was probably the fault of Richard’s original article that the conversation started there, since the charitable reading of Goldring was that he was making a point about prioritizing the worst off and using an example with countries to illustrate that, not saying that it’s inherently more fair to distribute resources across more countries.
My guess (though it is only a guess) is that if you ask Will MacAskill he’ll tell you that at least in an artificial case where you can either help a million people who are very badly off, or a million and one people who are much better off by the same amount, you ought to help the worse off people. It’s hard to see how he could deny that, given that he recommends giving some weight to all reasonable moral views in your decision-making, prioritizing the worse off is reasonable, and in this sort of case, helping the worse off people is much better if we ought to prioritize the worse off, while helping the million and one is only a very small amount better on the view where you ought just to help the most people.
Note, by the way, that you can actually hold the ‘always bring about the biggest benefit when distributing resources, without worrying about prioritizing the worst off’ view and still reject utilitarianism overall. For example, it’s consistent with “help more people rather than less when the benefit per person is the same size” that you value things other than happiness/suffering or preference satisfaction, that you believe it is sometimes wrong to violate rights in order to bring about the best outcome, etc.
Likewise, I think I agree with everything in this post. I appreciate that you took the time to engage with this discussion, and that you found grounds for agreement at least around the hazy edges.
Wait, I just want to make an object-level objection for third-party readers: in most liberal democracies, most policy-making is guided by cost-benefit analysis and the assignment of a value of a statistical life (VSL).
What do you mean by “not… good faith”? I take that to imply a lack of intellectual integrity, which seems a pretty serious (and insulting) charge. I don’t take Goldring to be arguing in bad faith—I just think his position is objectively irrational and poorly supported. If you think my arguments are bad, you’re similarly welcome to explain why you believe that, but I really don’t think anyone should be accusing me of failing to engage in good faith.
On to the substance: you (and Goldring) are especially concerned not to “withdraw all… assistance from an entire country.” You would prefer to help fewer children, some in South Sudan and some in Bangladesh, rather than help a larger number of children in Bangladesh. When you help fewer people, you are thereby “abandoning”, i.e. not helping, a larger number of people. Does it matter how many more we could help in Bangladesh? It doesn’t seem to matter to you or Goldring. But that is just to say that it does not matter how many (more) children we end up abandoning, on your view, so long as we help some in each country. That’s the implication of your view, right? Can you explain why you think this isn’t an accurate characterization?
ETA: I realize now there’s a possible reading of the “it doesn’t matter” claim on which it could be taken to impute a lack of concern even for Pareto improvements, i.e. saving just one person in each country being no better than 10 people in each country. I certainly don’t mean to attribute that view to Goldring, so will be sure to reword that sentence more carefully!
That’s not the implication of my view, no. It could matter how many more children we are abandoning, but this is not a purely utilitarian calculus. In humanitarian action effectiveness is an instrumental value not an intrinsic value, so prioritisation is not solely a question of cost-effectiveness, and neither the argument nor the implication is “so long as we help some in each country”.
(This is also where my accusation of bad faith comes from. Either you do not know that there are other values at play—in which case you are not arguing properly, since you have not investigated sufficiently—or you do know that there are other values at play, but are choosing not to point this out to your reader—in which case you are not arguing honestly.)
The simple addition of non-utilitarian values exposes how this sort of naive calculus—in which one child in one location can be exchanged directly for another child in a different location—is fine as a thought experiment, but is largely useless as a basis for real-world decision-making, constrained as it is by a wider set of concerns that confound any attempt to apply such calculus.
My fundamental objection is that this thought experiment—and others like it—are an exercise in stacking the rhetorical deck, by building the conclusion that you are seeking into the framing of the question. This can be seen when you claim that I “would prefer to help fewer children, some in South Sudan and some in Bangladesh, rather than help a larger number of children in Bangladesh.”
In fact I would prefer to help all of them—perhaps through the simple solution of seeking more funding. If you argue that this solution is not available—that there is no such additional funding—then you concede that the thought experiment only works in your favour because you have specifically framed it in that way. If you accept that this solution is available, then you should allow the full range of real-world factors that must be taken into account in such decision-making, in which case the utilitarian calculus becomes just one small part of the picture. In either case the experiment is useless to guide real-world decision-making.
Perhaps I could posit a similar thought experiment. In Bangladesh it is more expensive to educate girls than boys, because girls face additional barriers to access to education. You can educate 1000 boys or 800 girls. I assume that you would accept that your argument would conclude that we should focus all our spending on educating 1000 boys. But this conclusion seems obviously unjustifiable on any reasonable consideration of fairness, and in fact leads to worse outcomes for those who are already disadvantaged. The utilitarian calculus cannot possibly be the sole basis for allocating these resources.
Either you do not know that there are other values at play—in which case you are not arguing properly, since you have not investigated sufficiently—or you do know that there are other values at play, but are choosing not to point this out to your reader—in which case you are not arguing honestly.
Obviously I’m engaging with a position on which there are believed to be “other values in play” (e.g. a conception of fairness which prioritizes national representation over number of people helped), since I’m arguing that those other values are ultimately indefensible.
I’m going to leave the conversation at that. I can deal with polite philosophical ignorance (e.g. not understanding how to engage productively with thought experiments), or with arrogance from a sharp interlocutor who is actually making good points; but the combination of arrogance and ignorance is just too much for me.
Thanks for continuing to engage—I appreciate that it must be frustrating for you.
The other values at play are quite obviously not “prioritise national representation over number of people helped”. That’s why I proposed the parallel thought experiment of schoolboys and schoolgirls in Bangladesh—to show that your calculus is subject to the exact same objections without any implication of “national representation”, and therefore “national representation” is not part of this discussion.
The other values that I am referring to (as I’ve mentioned in other replies) might be the core humanitarian principles of humanity, impartiality, neutrality, and independence. These values are contested, and you’re obviously welcome to contest them, but they are the moral and to some extent legal basis of 20th-century humanitarian action.
They are not necessarily key to e.g. education provision, which, although it is often delivered by “dual mandate” organisations, is not strictly speaking a lifesaving activity, so you may wish to reject them on those grounds. However, it seems to me that you believe that your cardinal value of effectiveness is applicable across all areas of altruism, so I think they are relevant to the argument.
You originally asked for any feedback, and I took you at your word. My feedback is simply that this paper is preaching to the choir, and it would be a stronger paper if you addressed these other value systems—the very basis of the topic that you are discussing—rather than ignoring them completely. You can of course argue that they’re indefensible—and clearly we disagree there—but first you have to identify them correctly.
To the accusations of arrogance and ignorance. Obviously we’re all ignorant—it’s the human condition—but I try to alleviate my ignorance by e.g. reading papers and listening to viewpoints that I disagree with. Clearly you find me arrogant, but there’s not much I can do about that—I’ve tried to be as polite as I can, but clearly that was insufficient.
If you can give me any tips on how to engage productively with thought experiments, I would welcome them. I would however note that I’ve always believed that the trolley problem was intended as a basis for discussion, rather than as a basis for policy decisions about public transport systems.
Clearly you find me arrogant, but there’s not much I can do about that—I’ve tried to be as polite as I can, but clearly that was insufficient.
You come across as arrogant for a few reasons which are in principle fixable.
1: You seem to believe people who don’t share your values are simply ignorant of them, and not in a deep “looking for a black cat in an unlit room through a mirror darkly” sort of way. If you think your beliefs are prima facie correct, fine, most people do—but you still have to argue for them.
2: You mischaracterize utilitarianism in ways that are frankly incomprehensible, and become evasive when those characterizations are challenged. At the risk of reproducing exactly that pattern, here’s an example:
In humanitarian action effectiveness is an instrumental value not an intrinsic value
...
EA is a form of utilitarianism, and when the word effective is used it has generally been in the sense of “cost effective”. If you are not an effective altruist (which I am not), then cost effectiveness—while important—is an instrumental value rather than an intrinsic value.
...
I’m not a utilitarian, so I reject the premise of this question when presented in the abstract as it is here. Effectiveness for me is an instrumental value
As you have been more politely told many times in this comment section already: claiming that utilitarians assign intrinsic value to cost-effectiveness is absurd. Utilitarians value total well-being (though what exactly that means is a point of contention) and nothing else. I would happily incinerate all the luxury goods humanity has ever produced if it meant no one ever went hungry again. Others would go much further.
What I suspect you’re actually objecting to is aggregation of utility across persons—since that, plus the grossly insufficient resources available to us, is what makes cost-effectiveness a key instrumental concern in almost all situations—but if so the objection is not articulated clearly enough to engage with.
3: Bafflingly, given (1), you also don’t seem to feel the need to explain what your values are! You name them (or at least it seems these are yours) and move on, as if we all understood
humanity, impartiality, neutrality, and independence
in precisely the same way. But we don’t. For example: utilitarianism is clearly “impartial” and “neutral” as I understand them (i.e. agent-neutral and impartial with respect to different moral patients) whereas folk-morality is clearly not.
I’m guessing, having just googled that quote, that you mean something like this
Humanity means that human suffering must be addressed wherever it is found, with particular attention to the most vulnerable.
Neutrality means that humanitarian aid must not favour any side in an armed conflict or other dispute.
Impartiality means that humanitarian aid must be provided solely on the basis of need, without discrimination.
Independence means the autonomy of humanitarian objectives from political, economic, military or other objectives.
in which case there’s a further complication: you’re almost certainly using “intrinsic value” and “instrumental value” in a very different sense from the people you’re talking to. The above versions of “independence” and “neutrality” are, by my lights, obviously instrumental—these are cultural norms for one particular sort of organization at one particular moment in human history, not universal moral law.
Thanks for your comment. I’ll try to address each of your points.
“You seem to believe people who don’t share your values are simply ignorant of them… If you think your beliefs are prima facie correct, fine, most people do—but you still have to argue for them.”
In general, no—I do not believe that people who don’t share my values are simply ignorant of them, and I have communicated poorly if that is your impression. Nor do I believe that my beliefs are prima facie correct, and I don’t think I’ve claimed that in any of these comments. I did not post here to argue for my beliefs—I don’t expect anybody on this forum to agree with them—but to point out that the paper under discussion fails to deal with those beliefs adequately, which seemed to me a weakness.
“You mischaracterize utilitarianism in ways that are frankly incomprehensible, and become evasive when those characterizations are challenged.”
I think it’s an exaggeration to say that my characterisation is “frankly incomprehensible” and that I “become evasive” when challenged. My characterisation may be slightly inaccurate, but it’s not as if I am a million miles away from common understanding, and I have tried to be as direct as possible in my responses.
The confusion may arise from the fact that when I claim that effectiveness is an intrinsic value, I am making that claim for effective altruism specifically, rather than utilitarianism more broadly. And indeed effectiveness does appear to be an intrinsic value for effective altruism—because if what effective altruists proposed was not effective, it would not constitute effective altruism.
Your final point has the most traction:
“Bafflingly, given (1), you also don’t seem to feel the need to explain what your values are! You name them (or at least it seems these are yours) and move on, as if we all understood… I’m guessing, having just googled that quote, that you mean something like this”
I was indeed referring to these principles, and you’re right—I didn’t explain them! This may have been a mistake on my part, but as I implied above, my intent was not to persuade anybody here to accept those principles. I am not expecting random people on a message board to even be aware of these principles—but I would expect an academic who writes a paper on the subject that in part intends to refute the arguments of organisations involved in humanitarian action to refer to these principles at least in passing, wouldn’t you?
“you’re almost certainly using “intrinsic value” and “instrumental value” in a very different sense from the people you’re talking to.”
Yes, this may be the case. In another comment in this thread I reconsidered my position, and suggested that humanitarian principles are a curious mix of intrinsic and instrumental. But I’m not sure my usage is that far away from the common usage, is it? I also raised the point that they are in fact contested—partly for the cultural reason you raise—and the way in which they are viewed varies from organisation to organisation. Obviously this will cause more concern for people who prefer their principles much cleaner!
I don’t think you’re understanding what EAs truly object to, though. If the problem is the moral arbitrariness and moral luck of South Sudan vs. Bangladesh, then you end up having to prioritise. EA works on the margins, so the argument conditionally breaks down at the point where quantity has a quality all of its own.
If borders and the birth lottery are truly arbitrary, I don’t understand why it would be so bad to “abandon” a country if the needs of the kids in each country are equal. In the same way, typical humanitarians are OK with donations being moved from the first world to the developing world.
To put your example inversely: the argument that justifies funding every single country because they are distinct categories also justifies abandoning 1000 children in one country for 100 children in another country. If anything, your example relies on the fact that South Sudan and Bangladesh feel worthy on both ends, so it feels intuitive. But the categories of countries themselves are wonderfully arbitrary; South Sudan did not exist until 2011!
Moreover, I wish you would defend another intrinsic value that could be isolated from cost-effectiveness. Is it a deserts claim that the most difficult places to administer aid are also the most “needy” and therefore deserve it more even if it costs more?
The intrinsic values that I would point to in this context are the humanitarian principles of humanity, neutrality, impartiality and independence. (However I should note that these are the subject of continual debate, and neutrality in particular has come under serious pressure during the Ukraine war.)
Also, to be clear, “humanity, neutrality, impartiality and independence” aren’t values as most philosophers understand them. Neutrality and impartiality are not ones you seem to defend above, which is why people find you to be confused.
Yes, you’re absolutely right. Academic philosophy has largely failed to engage with contemporary humanitarianism, which is puzzling given that the field of humanitarianism provides plenty of examples of actual moral dilemmas. That failure is also what leads to the situation we have now, where an academic paper that wants to engage with that topic lacks the language to describe it accurately.
This might be because the ethics of humanitarian action is (broadly) a species of virtue ethics, in which those humanitarian principles are the values that need to be cultivated by individuals and organisations in order to make the sort of utilitarian, deontological or other ethical decisions that we are using as thought experiments here, guided by the sort of “practical wisdom” that is often not factored into those thought experiments.
I think the problem is actually reversed. Most humanitarian organisations do not have firm foundational beliefs and are about using poverty porn and feelings of the donor to guide judgements. The language you use of the value of “humanity” is a non-sequitur and doesn’t provide information—even those with high status in humanitarian aid circles like Rory Stewart express a lot of regret over this fuzziness. Put sharply, I don’t think contemporary humanitarianism has language to describe itself accurately and “humanity, neutrality, impartiality and independence” are not values but rather buzzwords for charity reports and pamphlets.
From what I’ve inferred, you’re some sort of Bernard Williams-type moral particularist rather than a virtue ethicist, in that you think there are morally salient facts everywhere on the ground in these cases, and that what matters is the configuration of the morally relevant features of the action in a particular context. But the problem in this discourse is that you won’t name the thing you’re defending, because I don’t think you know what exactly your moral system is, beyond being against thought experiments and the vibes of academic philosophy.
This is definitely an uncharitable reading of humanitarian action. The humanitarian principles are rarely to be found in “charity reports and pamphlets” (by which I assume you mean public-facing documents) and if they are found there, they are not the focus of those documents at all. The exception would be for the ICRC, for the obvious reason that the principles largely originated in their work and they act as stewards to some extent.
Your characterisation of humanitarian organisations as “using poverty porn and feelings of the donor to guide judgements” and so on—well, you’re welcome to your opinion, but it clearly glosses over the hugely complex nature of decision-making in humanitarian action. Humanitarian organisations clearly have foundational beliefs, even if they’re not sufficiently unambiguous for you. The world is unfortunately an ambiguous place.
(I should explain at this point that I am not a full-throated and unapologetic supporter of the humanitarian sector. I am in fact a sharp critic of the way in which it works, and I appreciate sharp criticism of it in general. But that criticism needs to be well-informed rather than armchair criticism, which I suppose is why I’m in this thread!)
I do in fact practice virtue ethics, and while there is some affinity between humanitarian decision-making and moral particularism, there are clearly moral principles in the former which the latter might deny—the principle of impartiality means that one is required to provide assistance to (for example) genocidaires from Rwanda when they find themselves in a refugee camp in Tanzania, regardless of what criminal actions they might have carried out in their own country.
I’m not sure what you mean when you say that I won’t name the thing I’m defending because I don’t know what my moral system is. My personal moral framework is one of virtue ethics, taking its cue from classical virtue ethics but aware that the virtues of the classical age are not necessarily best for flourishing in the modern age; and my professional moral framework is—as you might have guessed—based on the humanitarian principles.
You might not believe that either of these frameworks is defensible, but that’s different from saying that I don’t know what they are. Could you explain exactly what you meant, and why you believe it?
OK, to be clear: I am 100% certain you don’t know what virtue ethics is, because you’re literally describing principles of action, not virtues. Virtues in virtue ethics are dispositions we cultivate in ourselves, not consequences in the world. So, taking your example of the “principle of impartiality”: if you are a virtue ethicist, you’re trying to cultivate “impartiality”, not be duty-bound by it. This is also why you’re confused when you name virtues because independence is a virtue in the person receiving aid not in you! Also these are canonically not virtues any well-known virtue ethicist would name!
Moreover, this impartiality is more a metaethical principle that you keep violating in your own examples. If Oxfam trades off 2:1 Bangladeshis to South Sudanese (replace the countries with whatever you want), that breaks impartiality because you are necessarily saying one life is worth more than another. (There are morally particular facts that can change this, obviously, but you keep biting the bullet on any of them and just saying the world is fuzzy!)
Overall, the world is fuzzy, but the problem in this chain of logic is your fuzzy understanding of what commonly used concepts like virtue ethics are. It’s really frustrating when you keep excusing your mistaken understanding of concepts with the world being fuzzy. Please just go read Alasdair MacIntyre’s *After Virtue*.
“I am 100% certain you don’t know what virtue ethics is, because you’re literally describing principles of action, not virtues… Virtues in virtue ethics are dispositions we cultivate in ourselves, not consequences in the world.”
I fear that it may be you who do not know what virtue ethics is. You refer to MacIntyre, who defines virtues as qualities requiring both possession *and* exercise. One does not become courageous by sitting at home thinking about how courageous one will become, but by practising acts of courage. Virtues are developed through such practice, which surely means that they are principles of action.
”Also these are canonically not virtues any well-known virtue ethicist would name!”
I agree. I haven’t claimed that they are, and I’ve referred to humanitarian ethics as a species of virtue ethics for that very reason. But one of the strengths of virtue ethics is that it is possible—indeed necessary—to update what the virtues mean in practice to account for the way in which the social environment has changed—and in fact there’s no reason why one shouldn’t introduce new virtues that may be more appropriate for human flourishing.
“This is also why you’re confused when you name virtues because independence is a virtue in the person receiving aid not in you!… Moreover, this impartiality is more a metaethical principle that you keep violating in your own examples. If Oxfam trades off 2:1 Bangladeshis to South Sudanese (replace the countries with whatever you want) that breaks impartiality because you are necessarily saying one life is worth more than another”
I believe you are confused here. Independence is not a virtue of the person receiving aid but of the organisation providing aid—and here I’ll use the ICRC as the exemplar—which “must always maintain their autonomy so that they may be able at all times to act in accordance with the principles”.
Likewise you are confused about what is meant by impartiality, which requires that the organisation provides aid to individuals “guided solely by their needs, and to give priority to the most urgent cases of distress.” It does not break impartiality to say “We should assist X rather than Y” if X is in greater need, and does not imply that X’s life is worth more than Y’s.
Let’s return to the Bangladeshi schoolchildren. If you allocate resources to support education for 800 girls instead of 1000 boys, it does not necessarily imply that you think girls are worth more than boys (although it might). The decision is being made on the basis that girls’ need for support is greater because they face more barriers to access than boys.
I am not a philosopher by any means, but I simply cannot accept your criticism that I do not understand these concepts, or how they are applied in practice.
This is not how words work. You can’t just say you believe X is a virtue because it’s a virtue in humanitarian ethics (which is ill-defined). I truly don’t think you understand the concept of virtue ethics at the end of the day. This sounds mean, but it’s definitionally a misunderstanding you keep doubling down on, like everything here. For instance, you tried to use the Red Cross as an example, but most virtue ethicists wouldn’t abide by an entity holding a virtue (the ICRC can’t cultivate a virtue; it’s not a person), because that’s definitionally not what a virtue is. You also misquoted Alasdair MacIntyre and misrepresented him, as shown by the fact that your quotations all come from Google Books snippets from undergraduate classes.
I think you believe what you believe, and I’ll leave it at that. This is not a productive conversation. Funnily enough, I do not think the paper draft is charitable, but I don’t think you fully understand your axiomatic values (you’re probably a prioritarian, not a virtue ethicist). I also think the educating-girls example is a very strong prioritarian argument.
“You can’t just say you believe X is a virtue because it’s a virtue in humanitarian ethics (which is ill-defined). I truly don’t think you understand the concept of virtue ethics at the end of the day… You also misquoted Alasdair MacIntyre and misrepresented him.”
Let me then quote MacIntyre in full, to avoid misrepresenting him.
1.
MacIntyre defines a practice as “any coherent and complex form of socially established cooperative human activity through which goods internal to that form of activity are realized in the course of trying to achieve those standards of excellence which are appropriate to, and partially definitive of, that form of activity”.
MacIntyre gives a range of examples of practices, including the games of football and chess, professional disciplines of architecture and farming, scientific enquiries in physics, chemistry and biology, creative pursuits of painting and music, and “the creation and sustaining of human communities—of households, cities, nations”.
Humanitarian action meets this definition of a practice.
2.
MacIntyre defines goods with reference to their conception in the Middle Ages: “The ends to which men as members of such a species move… and their movement towards or away from various goods are to be explained with reference to the virtues and vices which they have learned or failed to learn and the forms of practical reasoning which they employ.”
The humanitarian imperative “that action should be taken to prevent or alleviate human suffering arising out of disaster or conflict” meets this definition of a good.
3.
MacIntyre defines a virtue as “an acquired human quality the possession and exercise of which tends to enable us to achieve those goods which are internal to practices and the lack of which effectively prevents us from achieving any such goods”.
Humanitarian principles can be treated as virtues under this definition. They are acquired human qualities which enable us to achieve a good (the humanitarian imperative) which is internal to a practice (humanitarian action).
They should be seen as professional virtues in addition to any personal virtues (the more familiar virtues such as courage or patience) that aid workers might cultivate, in the same way that architects would cultivate different virtues to farmers.
4.
MacIntyre asserts that “A practice involves standards of excellence and obedience to rules as well as the achievement of goods. To enter into a practice is to accept the authority of those standards and the inadequacy of my own performance as judged by them.”
The institutions of humanitarian aid—whether operational bodies such as the Red Cross/Red Crescent movement, professional standards such as the Sphere Standards, or communities of practice such as the CALP Network—provide exactly this context.
You are correct to say that those institutions are not themselves possessed of the virtues, but they constitute the practice which is required to acquire these virtues, and within which the exercise of the virtue takes place.
*
This account is inadequate—it does not account for the wider swathe of humanitarian action happening outside the formal humanitarian sector—but it is sufficient to demonstrate that the concept of “humanitarian virtues” is coherent with MacIntyre’s conception of virtue ethics.
I am perfectly happy with the fact that you are not a virtue ethicist, and therefore simply do not agree with this argument. Your accusation that I don’t understand the concept of virtue ethics, however, simply does not hold water.
You’re clear that you don’t wish to continue this conversation because it’s not productive. Nevertheless I appreciate your engagement, so thank you for taking the time to comment over the past few days.
When seeing the title of this post I really wanted to like it, and I appreciate the effort that went into it all so far.
Unfortunately, I have to agree with Paul—both the post and the paper draft itself read as pretty weak to me. In many instances, it seems that you argue against strawpeople rather than engaging with criticism of EA in good faith, and even worse, the arguments you use to counter the criticism boil down to what EA is advocating for “obviously” being correct. (You wrote in the post that the arguments are very much shortened because there is just so much ground to cover, but I believe that if an argument cannot be made in a convincing way, we should either spend more time making it properly or drop the discussion entirely, rather than just vaguely pointing towards something and hoping for the best.)
Also, you seem to not defend all of EA, but whatever part of EA is most easily defensible in the particular paragraph, such as arguing that EA does not require people to always follow its moral implications, only sometimes—which some EAers might agree with, but certainly not all.
This is more of a misread than a strawman, but on page 8 the paper says:
Sometimes the institutional critique is stated in ways that illegitimately presuppose that “complicity” with suboptimal institutions entails net harm. For example, Adams, Crary, and Gruen (2023, xxv) write:
> EA’s principles are actualized in ways that support some of the very social structures that cause suffering, thereby undermining its efforts to “do the most good.” (emphasis added)
This reasoning is straightforwardly invalid. It’s entirely possible—indeed, plausible—that you may do the most good by supporting some structures that cause suffering. For one thing, even the best possible structures—like democracy—will likely cause some suffering; it suffices that the alternatives are even worse. For another, even a suboptimal structure might be too costly, or too risky, to replace. But again, if there’s evidence that current EA priorities are actually doing more harm than good, then that’s precisely the sort of thing that EA principles are concerned with. So it makes literally no sense to express this as an external critique (i.e. of the ideas, rather than their implementation).
I don’t think saying that Adams, Crary, and Gruen “illegitimately presuppose that “complicity” with suboptimal institutions entails net harm” is correct. The paper misunderstands what they were saying. Here’s the full sentence (emphasis added):
Taken together, the book’s chapters show that in numerous interrelated areas of social justice work—including animal protection, antiracism, public health advocacy, poverty alleviation, community organizing, the running of animal sanctuaries, education, feminist and LGBTQ politics, and international advocacy—EA’s principles are actualized in ways that support some of the very social structures that cause suffering, thereby undermining its efforts to “do the most good.”
I interpret it as saying:
The way the EA movement/community/professional network employs EA principles in practice fundamentally supports and enables fundamental causes of suffering, which undermines EA’s ability to do the most good.
In other words, it is an empirical claim that the way EA is carried out in practice has some counterproductive results. It is not a normative claim about whether complicity with suboptimal institutions is ever okay.
But they never even try to argue that EA support for “the very social structures that cause suffering” does more harm than good. As indicated by the “thereby”, they seem to take the mere fact of complicity to suffice for “undermining its efforts to ‘do the most good’.”
I agree that they’re talking about the way that EA principles are “actualized”. They’re empirically actualized in ways that involve complicity with suboptimal institutions. And the way these authors argue, they take this fact to suffice for critique. I’m pointing out that this fact doesn’t suffice. They need to further show that the complicity does more harm than good.
Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. … Every decent person should share the basic goals or values underlying effective altruism.
It starts here in the abstract—writing this way immediately sounds condescending to me, making disagreement with EA sound like an entirely unreasonable affair. So this is devaluing the position of a hypothetical someone opposing EA, rather than honestly engaging with their criticisms.
Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. … If it does not, then by their own lights they have no basis for thinking it a better option.
On systemic change: The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness, and we know that there must be higher hills—higher maxima—out there, but we do not know how to get there; any particular systemic change might just as well make things worse. But if EA principles told us to only ever sit at this local maximum and never even attempt to go anywhere else, then those would not be principles I would be happy following. So yes, people who support systemic change often do not have the mathematical basis to argue that it will necessarily be a good deal—but that does not mean that there is no basis for thinking attempting it is a good option. Or, more clearly: By not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.
Rare exceptions aside, most careers are presumably permissible. … This claim is both true and widely neglected. … Neither of these important truths is threatened by the deontologist’s claim that one should not pursue an impermissible career.
On earning to give: Again, the arguments are very simplified here. A career being permissible or not is not a binary choice, true or false. It is a gradient, and it fluctuates and evolves over time, depending on how what you are asked to do on the job changes, and on how the ambient morality of yourself and of society shifts. So the question is not “among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?” but “what is the tradeoff I should be willing to make between the career being more morally iffy, and the positive impact I can have by donating from a larger income baseline?”. Additionally, if you still donate e.g. 10% of your income but your income is higher, there is also a larger amount of money you do not donate, which counterfactually you might use to buy things you do not actually need, things that have to be produced and shipped and so on, in the worst case making the world a worse place for everyone. So even “more money = more good” is not a simple truth that just holds. And despite all these simplifications, the sentence “This claim is … true” just really, really gets to me—such binary language again completely sweeps any criticism, any debate, any nuance under the rug.
EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth … Unless critics seriously want billionaires to deliberately try to do less good rather than more, it’s hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.
On billionaire philanthropy: Yes, billionaires are capable of doing immense good, and again, I have not seen anyone actually arguing against that. The most common arguments I am aware of against billionaire philanthropists are (1) that billionaires in the first place just shouldn’t exist, as yes they have the capacity to do immense good, but also the capacity to do immense harm, and no single person should be allowed to have the capacity to do so much harm to living beings on a whim. And (2) billionaires are capable of paying people to advise them on how to best make it look like they are doing good, when actually, they are not (such as creating huge charitable foundations and equipping them with lots of money, but these foundations then actually just re-investing that money into projects run by companies these billionaires have shares in, etc.)
So that is what I mean by “arguing against strawpeople”—claims are so far simplified and/or misrepresented that they do not accurately represent the actual positions of EAers, or of people who criticise them.
So this is devaluing the position of a hypothetical someone opposing EA, rather than honestly engaging with their criticisms.
That’s a non-sequitur. There’s no inconsistency between holding a certain conclusion—that “every decent person should share the basic goals or values underlying effective altruism”—and “honestly engaging with criticisms”. I do both. (Specifically, I engage with criticisms of EA principles; I’m very explicit that the paper is not concerned with criticisms of “EA” as an entity.)
I’ve since reworded the abstract since the “every decent person” phrasing seems to rub people the wrong way. But it is my honest view. EA principles = beneficentrism, and rejecting beneficentrism is morally indecent. That’s a view I hold, and I’m happy to defend it. You’re trying to assert that my conclusion is illegitimate or “dishonest”, prior to even considering my supporting reasons, and that’s frankly absurd.
The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness, and we know that there must be higher hills—higher maxima—out there, but we do not know how to get there; any particular systemic change might just as easily make things worse.
Yes, and my “whole point” is to respond to this by observing that one’s total evidence either supports the gamble of moving in a different direction, or it does not. You don’t seem to have understood my argument, which is fine (I’m guessing you don’t have much philosophy background), but it really should make you more cautious in your accusations.
Or, more clearly: By not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.
It’s all about uncertainty—that’s what “in expectation” refers to. I’m certainly not attributing certainty to the proponent of systemic change—that would indeed be a strawperson, but it’s an egregious misreading to think that I’m making any such misattribution. (Especially since the immediately preceding paragraphs were discussing uncertainty, explicitly and at length!)
the sentence “This claim is … true” just really, really gets to me
Again, I think this is just a result of your not being familiar with the norms of philosophy. Philosophers talk about true claims all the time, and it doesn’t mean that they’re failing to engage honestly with those who disagree with them.
So the question is not “among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?”
Now this is a straw man! The view I defend there is rather that “we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings.” Reasons always need to be balanced against countervailing reasons. The point of the appeal to permissibility is just to allow that some careers may be ruled out as a matter of deontic constraints. But obviously more moderate harms also need to be considered, and balanced against the benefits, and I never suggest otherwise.
The most common arguments I am aware of against billionaire philanthropists are...
Those aren’t arguments against how EA principles apply to billionaires, so aren’t relevant to my paper.
So that is what I mean by “arguing against strawpeople”
You didn’t accurately identify any misrepresentations or fallacies in my paper. It’s just a mix of (i) antecedently disliking the strength of my conclusion, (ii) not understanding philosophy, and (iii) your being more interested in a different topic than what my paper addresses.
you seem to not defend all of EA, but whatever part of EA is most easily defensible in the particular paragraph, such as arguing that EA does not require people to always follow its moral implications, only sometimes—which some EAers might agree with, but certainly not all.
This criticism suggests that you have not understood the point of the paper. I’m defending the core ideas behind EA. It’s just a basic logical point that defending EA principles as such does not require defending the more specific views of particular EAs.
In many instances, it seems that you argue against strawpeople rather than engaging with criticism of EA in good faith, and even worse, the arguments you use to counter the criticism boil down to what EA is advocating for “obviously” being correct
This is far too vague to be helpful (and so comes off as gratuitously insulting). What instances? Which of my specific counterarguments do you find unpersuasive, and why? I do indeed conclude that the core principles of EA are undeniably correct. I never claim that any specific causes EAs “advocate for” are even correct at all, let alone obviously so.
I believe that if an argument cannot be made in a convincing way, we should either focus more time on making it properly or drop the discussion entirely, rather than just vaguely pointing towards something and hoping for the best
I agree with that methodological claim. (I flag the brevity just to indicate that there is, of course, always more that could be said. But I wouldn’t say what I do if I didn’t think it was productive and important, even in its brief form.) I believe that I made convincing arguments that go beyond “vaguely pointing… and hoping for the best.” Perhaps you could apply this same methodological principle to your own comments.
I understand that my vague criticism was unhelpful; sadly, when posting I did not have enough time to really point out specific instances, and thought it would still be higher value to mention it in general than to just not write anything at all.
I will try to find the time now to write down my criticisms in more detail, and once I am ready I will comment on the question from Dr. David Mathers above, as he also asked for it (and by commenting both here and there, you will both be notified. Hooray.)
I was confused by the first paragraph of Paul’s comment.
Is it saying that EA assumes that “the best” way to help people = “the most effective” way to help people?
If so, could you please define what you mean by “best” and “effective”?
I get the impression Paul has some distinction in mind, but I don’t understand what it is. (Paragraph copied below)
I think this paper is weak from the outset in similar ways to the entire philosophical project of EA overall. You start with the definition of EA as “the project of trying to find the best ways of helping others, and putting them into practice”. In that definition “the best” means “the most effective”, which is one of the ways in which EA arguments rhetorically load the dice. If I don’t agree that the most effective way to help people (under EA definitions) is always and necessarily the best way to help people, then the whole paper is weakened. Essentially, one ends up preaching to the choir—which is fine if that’s what one wants to do, of course.
Yes, I am claiming that when Effective Altruism is defined as “trying to find the best ways” what it really means is “trying to find the most effective ways”. As far as I can tell the reasons for using “the best” are to avoid a circular definition (“Effective Altruism is trying to find the most effective ways to perform altruism”) and as a rhetorical device to deflect criticism (“Surely you can’t object to trying to find the best ways of helping others?!”).
Despite protests to the contrary, EA is a form of utilitarianism, and when the word effective is used it has generally been in the sense of “cost effective”. If you are not an effective altruist (which I am not), then cost effectiveness—while important—is an instrumental value rather than an intrinsic value. Depending on your ethical framework, therefore, what you define as “the best way” to help people will differ from the effective altruist’s.
p.s. I’m aware that Oxfam’s programs are also currently decided by “somebody sitting in a comfortable office somewhere [who] has done some calculations”, and I object to this as well while recognising that it may be inevitable given how the world works. My argument is that EA is no better than this current situation in principle, and may be worse than this *in practice* given that it could lead to the complete abandonment of entire countries.
What Goldring does imply is that applying “EA principles” would require Oxfam to abandon all the children of South Sudan—and probably for every aid organisation to abandon the entire country, since South Sudan is a difficult and costly working environment. In this case “quantity has a quality all of its own”—the argument that justifies abandoning 100 children in one country in favour of 1000 children in another looks markedly different when it’s used to justify withdrawing all forms of assistance from an entire country.
This highlights the conflict between EA’s approach—which takes “effectiveness” (specifically cost-effectiveness) as an intrinsic rather than instrumental value—and the framework used by others, who have other intrinsic values. That conflict is the reason why we may be talking past each other—I recognise that you probably won’t agree with this argument, and may continue to be puzzled. I would suggest to you that this is the fundamental weakness of the paper—that you are not taking these criticisms of EA in good faith, and in some cases are addressing straw man versions of them.
How far are you willing to push this? Presumably, you wouldn’t educate 1 child in South Sudan and 10 in Bangladesh, rather than 0 in South Sudan and 10,000 in Bangladesh, just so that you can say South Sudan hasn’t been abandoned? So exactly how many more children have to go without education before you say “that’s too many more” and switch to one country? What could justify a particular cut-off?
I’m not a utilitarian, so I reject the premise of this question when presented in the abstract as it is here. Effectiveness for me is an instrumental value, so I would need to have a clearer picture of the operating environments in both countries and the funding environment at the global level before I would be able to answer it.
Just because you’re not a utilitarian doesn’t mean you can reject the premise of the question. Deontologists have the same problem with trade-offs! The premise of the question is one even the Oxfam report accepts. I also don’t think you know what an instrumental value is. I think you keep throwing the term out but don’t understand what it means, or how it frames the instrumental, empirical question in a way in which other values dissolve.
Can you give me an argument for why I can’t reject the premise of the question, rather than just telling me I can’t? I’ve explained why I reject it in these comments. Goldring “accepts” the premise only in the sense that he’s attending an event which is based entirely on that premise, and has had that premise forced onto him through the rhetorical trick which I described in my reply to Chappell.
I think you’re partly right about my confusion about instrumental values. Now that I reconsider, the humanitarian principles are a strange mix of instrumental and intrinsic values; regardless, effectiveness remains solely an instrumental value. Perhaps you could explain what you mean by “other values dissolve”?
Reasons why you can’t reject the premise:
Trade-offs inhere in all ethical systems so “rejecting utilitarianism” doesn’t do the work you think it does. The values you listed up in the thread that “inhere” in
The actual premise you’re rejecting is one you rely on: that of equal moral consideration of people. Each time you manipulate the trade-off ratio by rejecting “cost-effectiveness”, you stop treating people as morally equivalent.
Reasons you actually can reject the premise:
Actions that are upside bargains, e.g. breaking the trade-off by having both options done; but this is not the nature of aid as it currently stands.
I think what you think you’re doing by saying you’re not a utilitarian is saying that you care about things EAs don’t care about in the impact of aid. But even with other values you create different trade-off ratios and questions of Pareto optimality, such that you’re always trading something off, even if you’re not a utilitarian. There is still something that is a cost and something that is a benefit. There’s no rhetorical trick here, just the fungible nature of cash. The fact that cost-effectiveness isn’t an intrinsic value is what makes it a deciding force in the ratio of trade-offs between other values.
Can you explain what you mean by “There’s no rhetorical trick here, just the fungible nature of cash”? In practice, cost-effectiveness is a deciding force, but not the deciding force.
I think what you’re saying is this: there is a plurality of values that EAs don’t seem to care about, values that are deeply important and are skipped over through naive utilitarianism. These values cannot be measured through cost-effectiveness because they are deeply ingrained in the human experience.
The stronger version that I think you’re trying to elucidate, but haven’t stated clearly, is that cost-effectiveness can be inversely correlated with another value that is “more” determinant on a moral level. E.g. North Koreans cost a lot more to help than Nigerians with malaria, but the very difficulty (and cost-ineffectiveness) of helping them inheres in their situation, which is an injustice in and of itself.
What I am saying is that insofar as we’re in the realm of charity and budgets and financial trade-offs, it doesn’t matter what your intrinsic value commitments are. There are choices that produce more of that value or less of it, which is what the concept of cost-effectiveness captures. Thus it is a crux no matter what intrinsic value system you pick. Even deontology has these issues, as I noted in my first response to you.
Thanks, yes. I think I’m elucidating it pretty clearly, but perhaps I’m wrong!
As I’ve said, I’m not denying that cost effectiveness is a determinant in decision-making—it plainly is a determinant, and an important one. What I am claiming is that it is not the primary determinant in decision-making, and simple calculus (as in the original thought experiment) is not really useful for decision-making.
The premise I reject is not that there are always trade-offs, but that a naive utilitarian calculus that abstracts and dehumanises individuals by presenting them as numbers in an equation unmoored from reality is a useful or ethical way to frame the question of how “best” to help people.
What is “the premise” that you reject?
The premise that a naive utilitarian calculus that abstracts and dehumanises individuals by presenting them as numbers in an equation unmoored from reality is a useful or ethical way to frame the question of how “best” to help people. As I’ve said in another comment, the trolley problem was meant as a stimulus to discussion, not as a guide for making policy decisions around public transport systems.
EDIT: I realise that this description may come across as harsh on a forum populated almost entirely by utilitarians, but I felt that it was important to be clear about the exact nature of my objection. My position is that I agree that utilitarianism should be a tool in our ethical toolkit, but I disagree that it is the tool that we should reach for exclusively, or even first of all.
How can we discuss whether or not it makes sense to help more people over less without discussing cases where more/less people are helped?
I suppose that part of my point is that we may not be discussing whether or not it makes sense to help more people over less. We may be discussing how we can help people who are most in need, who may cost more or less to help than other people.
I’ve claimed that naive utilitarian calculus is simply not that useful in guiding actual policy decisions. Those decisions—which happen every day in aid organisations—need to include a much wider range of factors than just numbers.
If we keep it in the realm of thought experiments, it’s a simple question and an obvious answer. But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?
‘But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?’
No, of course not. But in assessing the real-world problem, you seemed to be relying on some sort of claim that it is sometimes better to help less people if it means a fairer distribution of help. So I was raising a problem for that view: if you think it is sometimes better to distribute money to more countries even though it helps less people, then either that is always better in any possible circumstance, realistic or otherwise, or it is sometimes better and sometimes not, depending on circumstance. The thought experiment then comes in to show that there are possible, albeit not very realistic, circumstances where it clearly isn’t better. So that shows that one of the two options available to someone with your view is wrong.
Then, I challenged the other option, that it is sometimes better and sometimes not, but the thought experiment wasn’t doing any work there. Instead, I just asked what you think determines when it is better to distribute the money more evenly between countries versus when it is better to just help the most people, and implied that this is a hard question to answer. As it happens, I don’t actually think that this view is definitely wrong, and you have hinted at a good answer, namely that we should sometimes help less people in order to prioritize the absolutely worst off. But I think it is a genuine problem for views like this that it’s always going to look a bit hazy what determines exactly how much you should prioritize the worst off, and the view does seem to imply there must be an answer to that.
I think we need to get away from “countries” as a frame—the thought experiment is the same whether it’s between countries, within a country, or even within a community. So my claim is not that “it is sometimes better to distribute money to more countries even though it helps less people”.
If we take the Bangladeshi school thought experiment—that with available funding, you can educate either 1000 boys or 800 girls, because girls face more barriers to access to education—my claim is obviously not that “it is sometimes better to distribute money to more genders even though it helps less people”. You could definitely describe it that way—just as Chappell describes Goldring’s statement—but that is clearly not the basis of the decision itself, which is more concerned with relative needs in an equity framework.
You are right to describe my basis for making decisions as context-specific. It is therefore fair to say that I believe that in some circumstances it is morally justified to help fewer people if those people are in greater need. The view that this is *always* better is clearly wrong, but I don’t make that assessment on the basis of the thought experiment, but on the basis that moral decisions are almost always context-specific and often fuzzy around the edges.
So while I agree that it is always going to look a bit hazy what determines your priorities, I don’t see it as a problem, but simply as the background against which decisions need to be made. Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?
‘Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?‘
Yes, indeed, I think I agree with everything in this last post. In general non-utilitarian views tend to capture more of what we actually care about at the cost of making more distinctions that look arbitrary or hard to justify on reflection. It’s a hard question how to trade off between these things. Though be careful not to make the mistake of thinking utilitarianism implies that the facts about what empirical effects an action will have are simple: it says nothing about that at all.
Or at least, I think that, technically speaking, it is true that “it is sometimes better to distribute money to more genders even though it helps less people” is something you believe, but that’s a highly misleading way of describing your view: i.e. likely to make a reasonable person who takes it at face value believe other things about you and your view that are false.
I think the countries thing probably got this conversation off on the wrong foot, because EAs have very strong opposition to the idea that national boundaries ever have moral significance. But it was probably the fault of Richard’s original article that the conversation started there, since the charitable reading of Goldring was that he was making a point about prioritizing the worst off and using an example with countries to illustrate that, not saying that it’s inherently more fair to distribute resources across more countries.
As a further point: EAs who are philosophers are likely aware, when they are being careful and reflective, that some people reasonably think that it is better to help a person the worse off they are, since the philosopher Derek Parfit, who is one of the intellectual founders of EA, invented a particularly famous variant of that view: https://oxfordre.com/politics/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-232
My guess (though it is only a guess) is that if you ask Will MacAskill he’ll tell you that at least in an artificial case where you can either help a million people who are very badly off, or a million and one people who are much better off by the same amount, you ought to help the worse off people. It’s hard to see how he could deny that, given that he recommends giving some weight to all reasonable moral views in your decision-making, prioritizing the worse off is reasonable, and in this sort of case, helping the worse off people is much better if we ought to prioritize the worse off, while helping the million and one is only a very small amount better on the view where you ought just to help the most people.
Note by the way that you can actually have the ‘always bring about the biggest benefit when distributing resources, without worrying about prioritizing the worst off’ view and still reject utilitarianism overall. For example, it’s consistent with “help more people rather than less when the benefit per person is the same size” that you value things other than happiness/suffering or preference satisfaction, that you believe it is sometimes wrong to violate rights in order to bring about the best outcome, etc.
Likewise I think I agree with everything in this post. I appreciate that you took the time to engage with this discussion, and for finding grounds for agreement at least around the hazy edges.
Thanks to you and @Dr. David Mathers for this useful discussion.
Wait, I just want to make an object-level objection for third-party readers: in most liberal democracies, most policy-making is guided by cost-benefit analysis and the assignment of a value of a statistical life (VSL).
To clarify your objection: such policy-making is guided by, but not solely determined by, such approaches.
What do you mean by “not… good faith”? I take that to imply a lack of intellectual integrity, which seems a pretty serious (and insulting) charge. I don’t take Goldring to be arguing in bad faith—I just think his position is objectively irrational and poorly supported. If you think my arguments are bad, you’re similarly welcome to explain why you believe that, but I really don’t think anyone should be accusing me of failing to engage in good faith.
On to the substance: you (and Goldring) are especially concerned not to “withdraw all… assistance from an entire country.” You would prefer to help fewer children, some in South Sudan and some in Bangladesh, rather than help a larger number of children in Bangladesh. When you help fewer people, you are thereby “abandoning”, i.e. not helping, a larger number of people. Does it matter how many more we could help in Bangladesh? It doesn’t seem to matter to you or Goldring. But that is just to say that it does not matter how many (more) children we end up abandoning, on your view, so long as we help some in each country. That’s the implication of your view, right? Can you explain why you think this isn’t an accurate characterization?
ETA: I realize now there’s a possible reading of the “it doesn’t matter” claim on which it could be taken to impute a lack of concern even for Pareto improvements, i.e. saving just one person in each country being no better than 10 people in each country. I certainly don’t mean to attribute that view to Goldring, so will be sure to reword that sentence more carefully!
That’s not the implication of my view, no. It could matter how many more children we are abandoning, but this is not a purely utilitarian calculus. In humanitarian action effectiveness is an instrumental value not an intrinsic value, so prioritisation is not solely a question of cost-effectiveness, and neither the argument nor the implication is “so long as we help some in each country”.
(This is also where my accusation of bad faith comes from. Either you do not know that there are other values at play—in which case you are not arguing properly, since you have not investigated sufficiently—or you do know that there are other values at play, but are choosing not to point this out to your reader—in which case you are not arguing honestly.)
The simple addition of non-utilitarian values exposes how this sort of naive calculus—in which one child in one location can be exchanged directly for another child in a different location—is fine as a thought experiment, but is largely useless as a basis for real-world decision-making, constrained as it is by a wider set of concerns that confound any attempt to apply such calculus.
My fundamental objection is that this thought experiment—and others like it—is an exercise in stacking the rhetorical deck, by building the conclusion that you are seeking into the framing of the question. This can be seen when you claim that I “would prefer to help fewer children, some in South Sudan and some in Bangladesh, rather than help a larger number of children in Bangladesh.”
In fact I would prefer to help all of them—perhaps through the simple solution of seeking more funding. If you argue that this solution is not available—that there is no such additional funding—then you concede that the thought experiment only works in your favour because you have specifically framed it in that way. If you accept that this solution is available, then you should allow the full range of real-world factors that must be taken into account in such decision-making, in which case the utilitarian calculus becomes just one small part of the picture. In either case the thought experiment is useless as a guide to real-world decision-making.
Perhaps I could posit a similar thought experiment. In Bangladesh it is more expensive to educate girls than boys, because girls face additional barriers to access to education. You can educate 1000 boys or 800 girls. I assume that you would accept that your argument would conclude that we should focus all our spending on educating 1000 boys. But this conclusion seems obviously unjustifiable on any reasonable consideration of fairness, and in fact leads to worse outcomes for those who are already disadvantaged. The utilitarian calculus cannot possibly be the sole basis for allocating these resources.
I hope this clarifies my position.
Obviously I’m engaging with a position on which there are believed to be “other values in play” (e.g. a conception of fairness which prioritizes national representation over number of people helped), since I’m arguing that those other values are ultimately indefensible.
I’m going to leave the conversation at that. I can deal with polite philosophical ignorance (e.g. not understanding how to engage productively with thought experiments), or with arrogance from a sharp interlocutor who is actually making good points; but the combination of arrogance and ignorance is just too much for me.
Thanks for continuing to engage—I appreciate that it must be frustrating for you.
The other values at play are quite obviously not “prioritise national representation over number of people helped”. That’s why I proposed the parallel thought experiment of schoolboys and schoolgirls in Bangladesh—to show that your calculus is subject to the exact same objections without any implication of “national representation”, and therefore “national representation” is not part of this discussion.
The other values that I am referring to (as I’ve mentioned in other replies) might be the core humanitarian principles of humanity, impartiality, neutrality, and independence. These values are contested, and you’re obviously welcome to contest them, but they are the moral and to some extent legal basis of twentieth-century humanitarian action.
They are not necessarily key to e.g. education provision, which, although it is often delivered by “dual mandate” organisations, is not strictly speaking a lifesaving activity, so you may wish to reject them on those grounds. However it seems to me that you believe that your cardinal value of effectiveness is applicable across all areas of altruism, so I think they are relevant to the argument.
You originally asked for any feedback, and I took you at your word. My feedback is simply that this paper is preaching to the choir, and it would be a stronger paper if you addressed these other value systems—the very basis of the topic that you are discussing—rather than ignoring them completely. You can of course argue that they’re indefensible—and clearly we disagree there—but first you have to identify them correctly.
To the accusations of arrogance and ignorance. Obviously we’re all ignorant—it’s the human condition—but I try to alleviate my ignorance by e.g. reading papers and listening to viewpoints that I disagree with. Clearly you find me arrogant, but there’s not much I can do about that—I’ve tried to be as polite as I can, but clearly that was insufficient.
If you can give me any tips on how to engage productively with thought experiments, I would welcome them. I would however note that I’ve always believed that the trolley problem was intended as a basis for discussion, rather than as a basis for policy decisions about public transport systems.
You come across as arrogant for a few reasons which are in principle fixable.
1: You seem to believe people who don’t share your values are simply ignorant of them, and not in a deep “looking for a black cat in an unlit room through a mirror darkly” sort of way. If you think your beliefs are prima facie correct, fine, most people do—but you still have to argue for them.
2: You mischaracterize utilitarianism in ways that are frankly incomprehensible, and become evasive when those characterizations are challenged. At the risk of reproducing exactly that pattern, here’s an example:
As you have been more politely told many times in this comment section already: claiming that utilitarians assign intrinsic value to cost-effectiveness is absurd. Utilitarians value total well-being (though what exactly that means is a point of contention) and nothing else. I would happily incinerate all the luxury goods humanity has ever produced if it meant no one ever went hungry again. Others would go much further.
What I suspect you’re actually objecting to is aggregation of utility across persons—since that, plus the grossly insufficient resources available to us, is what makes cost-effectiveness a key instrumental concern in almost all situations—but if so the objection is not articulated clearly enough to engage with.
3: Bafflingly, given (1), you also don’t seem to feel the need to explain what your values are! You name them (or at least it seems these are yours) and move on, as if we all understood them in precisely the same way. But we don’t. For example: utilitarianism is clearly “impartial” and “neutral” as I understand those terms (i.e. agent-neutral and impartial with respect to different moral patients), whereas folk-morality is clearly not.
I’m guessing, having just googled that quote, that you mean something like this, in which case there’s a further complication: you’re almost certainly using “intrinsic value” and “instrumental value” in a very different sense from the people you’re talking to. The above versions of “independence” and “neutrality” are, by my lights, obviously instrumental—these are cultural norms for one particular sort of organization at one particular moment in human history, not universal moral law.
Thanks for your comment. I’ll try to address each of your points.
“You seem to believe people who don’t share your values are simply ignorant of them… If you think your beliefs are prima facie correct, fine, most people do—but you still have to argue for them.”
In general, no—I do not believe that people who don’t share my values are simply ignorant of them, and I have communicated poorly if that is your impression. Nor do I believe that my beliefs are prima facie correct, and I don’t think I’ve claimed that in any of these comments. I did not post here to argue for my beliefs—I don’t expect anybody on this forum to agree with them—but to point out that the paper under discussion fails to deal with those beliefs adequately, which seemed to me a weakness.
“You mischaracterize utilitarianism in ways that are frankly incomprehensible, and become evasive when those characterizations are challenged.”
I think it’s an exaggeration to say that my characterisation is “frankly incomprehensible” and that I “become evasive” when challenged. My characterisation may be slightly inaccurate, but it’s not as if I am a million miles away from common understanding, and I have tried to be as direct as possible in my responses.
The confusion may arise from the fact that when I claim that effectiveness is an intrinsic value, I am making that claim for effective altruism specifically, rather than utilitarianism more broadly. And indeed effectiveness does appear to be an intrinsic value for effective altruism—because if what effective altruists proposed was not effective, it would not constitute effective altruism.
Your final point has the most traction:
“Bafflingly, given (1), you also don’t seem to feel the need to explain what your values are! You name them (or at least it seems these are yours) and move on, as if we all understood… I’m guessing, having just googled that quote, that you mean something like this”
I was indeed referring to these principles, and you’re right—I didn’t explain them! This may have been a mistake on my part, but as I implied above, my intent was not to persuade anybody here to accept those principles. I am not expecting random people on a message board to even be aware of these principles—but I would expect an academic who writes a paper on the subject that in part intends to refute the arguments of organisations involved in humanitarian action to refer to these principles at least in passing, wouldn’t you?
“you’re almost certainly using “intrinsic value” and “instrumental value” in a very different sense from the people you’re talking to.”
Yes, this may be the case. In another comment in this thread I reconsidered my position, and suggested that humanitarian principles are a curious mix of intrinsic and instrumental. But I’m not sure my usage is that far away from the common usage, is it? I also raised the point that they are in fact contested—partly for the cultural reason you raise—and the way in which they are viewed varies from organisation to organisation. Obviously this will cause more concern for people who prefer their principles much cleaner!
I don’t think you’re understanding what EAs truly object to though. If the problem is the moral arbitrariness and moral luck of South Sudan vs. Bangladesh then you end up having to prioritise. EA works on the margins so the argument conditionally breaks at the point quantity has a quality all of its own.
If borders and the birth lottery are truly arbitrary, I don’t understand why it would be so bad to “abandon” a country if the needs of the kids in each country are equal. In the same way, typical humanitarians are OK with donations being moved from the first world to the developing world.
To invert your example: the argument that justifies funding every single country because they are distinct categories also justifies abandoning 1000 children in one country for 100 children in another country. If anything, your example trades on the fact that South Sudan and Bangladesh feel worthy on both ends, so it feels intuitive. But the categories of countries themselves are wonderfully arbitrary: South Sudan did not exist until 2011!
Moreover, I wish you defended another intrinsic value that could be isolated away from cost-effectiveness. Is it a deserts claim that the most difficult places to administer aid are also the most “needy”, and therefore deserve it more even if it costs more?
I’m not sure what the last sentence of your first paragraph means—can you explain it for me?
For most of the rest of your comment, I’d refer you to my other answer at https://forum.effectivealtruism.org/posts/ShCENF54ZN6bxaysL/why-not-ea-paper-draft?commentId=o4q6AFoKt7kDpN5cD. I don’t know if that answers your points, but it should clarify a little.
The intrinsic values that I would point to in this context are the humanitarian principles of humanity, neutrality, impartiality and independence. (However I should note that these are the subject of continual debate, and neutrality in particular has come under serious pressure during the Ukraine war.)
Also to be clear, “humanity, neutrality, impartiality and independence” aren’t values as most philosophers understand them. Neutrality and impartiality are not ones you seem to defend above, which is why people find you confused.
Yes, you’re absolutely right. Academic philosophy has largely failed to engage with contemporary humanitarianism, which is puzzling given that the field of humanitarianism provides plenty of examples of actual moral dilemmas. That failure is also what leads to the situation we have now, where an academic paper that wants to engage with that topic lacks the language to describe it accurately.
This might be because the ethics of humanitarian action is (broadly) a species of virtue ethics, in which those humanitarian principles are the values that need to be cultivated by individuals and organisations in order to make the sort of utilitarian, deontological or other ethical decisions that we are using as thought experiments here, guided by the sort of “practical wisdom” that is often not factored into those thought experiments.
I think the problem is actually reversed. Most humanitarian organisations do not have firm foundational beliefs and are about using poverty porn and feelings of the donor to guide judgements. The language you use of the value of “humanity” is a non-sequitur and doesn’t provide information—even those with high status in humanitarian aid circles like Rory Stewart express a lot of regret over this fuzziness. Put sharply, I don’t think contemporary humanitarianism has language to describe itself accurately and “humanity, neutrality, impartiality and independence” are not values but rather buzzwords for charity reports and pamphlets.
From what I’ve inferred, you’re some sort of Bernard Williams-type moral particularist rather than a virtue ethicist, in that you think there are morally salient facts everywhere on the ground in these cases, and that what matters is the configuration of the morally relevant features of the action in a particular context. But the problem in this discourse is that you won’t name the thing you’re defending, because I don’t think you know what exactly your moral system is, beyond being against thought experiments and the vibes of academic philosophy.
This is definitely an uncharitable reading of humanitarian action. The humanitarian principles are rarely to be found in “charity reports and pamphlets” (by which I assume you mean public-facing documents) and if they are found there, they are not the focus of those documents at all. The exception would be for the ICRC, for the obvious reason that the principles largely originated in their work and they act as stewards to some extent.
Your characterisation of humanitarian organisations as “using poverty porn and feelings of the donor to guide judgements” and so on—well, you’re welcome to your opinion, but that clearly elides the hugely complex nature of decision-making in humanitarian action. Humanitarian organisations clearly have foundational beliefs, even if they’re not sufficiently unambiguous for you. The world is unfortunately an ambiguous place.
(I should explain at this point that I am not a full-throated and unapologetic supporter of the humanitarian sector. I am in fact a sharp critic of the way in which it works, and I appreciate sharp criticism of it in general. But that criticism needs to be well-informed rather than armchair criticism, which I suppose is why I’m in this thread!)
I do in fact practise virtue ethics, and while there is some affinity between humanitarian decision-making and moral particularism, there are clearly moral principles in the former which the latter might deny—the principle of impartiality means that one is required to provide assistance to (for example) genocidaires from Rwanda when they find themselves in a refugee camp in Tanzania, regardless of what criminal actions they might have carried out in their own country.
I’m not sure what you mean when you say that I won’t name the thing I’m defending because I don’t know what my moral system is. My personal moral framework is one of virtue ethics, taking its cue from classical virtue ethics but aware that the virtues of the classical age are not necessarily best for flourishing in the modern age; and my professional moral framework is—as you might have guessed—based on the humanitarian principles.
You might not believe that either of these frameworks is defensible, but that’s different from saying that I don’t know what they are. Could you explain exactly what you meant, and why you believe it?
Ok to be clear, I am 100% certain you don’t know what virtue ethics is because you’re literally describing principles of action not virtues. Virtues in virtue ethics are dispositions we cultivate in ourselves, not consequences we bring about in the world. So to take your example of the “principle of impartiality”: if you are a virtue ethicist, you are trying to cultivate “impartiality” in yourself, not to be duty-bound by it. This is also why you’re confused when you name virtues because independence is a virtue in the person receiving aid not in you! Also these are canonically not virtues any well-known virtue ethicist would name!
Moreover, this impartiality is more a metaethical principle that you keep violating in your own examples. If Oxfam trades off 2:1 Bangladeshis to South Sudanese (replace the countries with whatever you want) that breaks impartiality because you are necessarily saying one life is worth more than another (there are morally particular facts that can obviously change this, but you keep biting the bullet on every one of them and just saying the world is fuzzy!)
Overall, the world is fuzzy, but the problem in this chain of logic is your own fuzziness in understanding what commonly used concepts like virtue ethics are. It’s really frustrating when you keep excusing your mistaken understanding of concepts with the world being fuzzy. Please just go read Alasdair MacIntyre’s After Virtue.
“I am 100% certain you don’t know what virtue ethics is because you’re literally describing principles of action not virtues… Virtues in virtue ethics are dispositions we cultivate in ourselves, not consequences we bring about in the world.”
I fear that it may be you who do not know what virtue ethics is. You refer to MacIntyre, who defines virtues as qualities requiring both possession *and* exercise. One does not become courageous by sitting at home thinking about how courageous one will become, but by practising acts of courage. Virtues are developed through such practice, which surely means that they are principles of action.
”Also these are canonically not virtues any well-known virtue ethicist would name!”
I agree. I haven’t claimed that they are, and I’ve referred to humanitarian ethics as a species of virtue ethics for that very reason. But one of the strengths of virtue ethics is that it is possible—indeed necessary—to update what the virtues mean in practice to account for the way in which the social environment has changed—and in fact there’s no reason why one shouldn’t introduce new virtues that may be more appropriate for human flourishing.
“This is also why you’re confused when you name virtues because independence is a virtue in the person receiving aid not in you!… Moreover, this impartiality is more a metaethical principle that you keep violating in your own examples. If Oxfam trades off 2:1 Bangladeshis to South Sudanese (replace the countries with whatever you want) that breaks impartiality because you are necessarily saying one life is worth more than another”
I believe you are confused here. Independence is not a virtue of the person receiving aid but of the organisation providing aid—and here I’ll use the ICRC as the exemplar—which “must always maintain their autonomy so that they may be able at all times to act in accordance with the principles”.
Likewise you are confused about what is meant by impartiality, which requires that the organisation provides aid to individuals “guided solely by their needs, and to give priority to the most urgent cases of distress.” It does not break impartiality to say “We should assist X rather than Y” if X is in greater need, and does not imply that X’s life is worth more than Y’s.
Let’s return to the Bangladeshi schoolchildren. If you allocate resources to support education for 800 girls instead of 1000 boys, it does not necessarily imply that you think girls are worth more than boys (although it might). The decision is being made on the basis that girls’ need for support is greater because they face more barriers to access than boys.
I am not a philosopher by any means, but I simply cannot accept your criticism that I do not understand these concepts, or how they are applied in practice.
This is not how words work. You can’t just say you believe X is a virtue because it is treated as one in humanitarian ethics (which is ill-defined). I truly don’t think you understand the concept of virtue ethics at the end of the day. This sounds mean, but it’s definitionally a misunderstanding you keep doubling down on, like everything here. For instance, you tried to use the Red Cross as an example, but most virtue ethicists wouldn’t abide an entity holding a virtue (the ICRC can’t cultivate a virtue; it’s not a person), because that’s definitionally not what a virtue is. You also misquoted Alasdair MacIntyre and misrepresented him, as shown by the fact that your quotations all come from Google Books snippets from undergraduate classes.
I think you believe what you believe and I’ll leave it at that. This is not a productive conversation. Funnily enough, I do not think the paper draft is charitable, but I also don’t think you fully understand your own axiomatic values (you are probably a prioritarian, not a virtue ethicist). I also think the educating girls example is a very strong prioritarian argument.
[edited for tone]
“You can’t just say you believe X is a virtue because it is treated as one in humanitarian ethics (which is ill-defined). I truly don’t think you understand the concept of virtue ethics at the end of the day… You also misquoted Alasdair MacIntyre and misrepresented him.”
Let me then quote MacIntyre in full, to avoid misrepresenting him.
1.
MacIntyre defines a practice as “any coherent and complex form of socially established cooperative human activity through which goods internal to that form of activity are realized in the course of trying to achieve those standards of excellence which are appropriate to, and partially definitive of, that form of activity”.
MacIntyre gives a range of examples of practices, including the games of football and chess, professional disciplines of architecture and farming, scientific enquiries in physics, chemistry and biology, creative pursuits of painting and music, and “the creation and sustaining of human communities—of households, cities, nations”.
Humanitarian action meets this definition of a practice.
2.
MacIntyre defines a good with reference to their conception in the middle ages as “The ends to which men as members of such a species move… and their movement towards or away from various goods are to be explained with reference to the virtues and vices which they have learned or failed to learn and the forms of practical reasoning which they employ.”
The humanitarian imperative “that action should be taken to prevent or alleviate human suffering arising out of disaster or conflict” meets this definition of a good.
3.
MacIntyre defines a virtue as “an acquired human quality the possession and exercise of which tends to enable us to achieve those goods which are internal to practices and the lack of which effectively prevents us from achieving any such goods”.
Humanitarian principles can be treated as virtues under this definition. They are acquired human qualities which enable us to achieve a good (the humanitarian imperative) which is internal to a practice (humanitarian action).
They should be seen as professional virtues in addition to any personal virtues (the more familiar virtues such as courage or patience) that aid workers might cultivate, in the same way that architects would cultivate different virtues to farmers.
4.
MacIntyre asserts that “A practice involves standards of excellence and obedience to rules as well as the achievement of goods. To enter into a practice is to accept the authority of those standards and the inadequacy of my own performance as judged by them.”
The institutions of humanitarian aid—whether operational bodies such as the Red Cross/Red Crescent movement, professional standards such as the Sphere Standards, or communities of practice such as the CALP Network—provide exactly this context.
You are correct to say that those institutions are not themselves possessed of the virtues, but they constitute the practice which is required to acquire these virtues, and within which the exercise of the virtue takes place.
*
This account is inadequate—it does not account for the wider swathe of humanitarian action happening outside the formal humanitarian sector—but it is sufficient to demonstrate that the concept of “humanitarian virtues” is coherent with MacIntyre’s conception of virtue ethics.
I am perfectly happy with the fact that you are not a virtue ethicist, and therefore simply do not agree with this argument. Your accusation that I don’t understand the concept of virtue ethics, however, simply does not hold water.
You’re clear that you don’t wish to continue this conversation because it’s not productive. Nevertheless I appreciate your engagement, so thank you for taking the time to comment over the past few days.
When seeing the title of this post I really wanted to like it, and I appreciate the effort that went into it all so far.
Unfortunately, I have to agree with Paul—both the post as well as the paper draft itself read pretty weak to me. In many instances, it seems that you argue against strawpeople rather than engaging with criticism of EA in good faith, and even worse, the arguments you use to counter the criticism boil down to what EA is advocating for “obviously” being correct (you wrote in the post that the arguments are very much shortened because there is just so much ground to cover, but I believe that if an argument cannot be made in a convincing way, we should either focus more time on making it properly or drop the discussion entirely, rather than just vaguely pointing towards something and hoping for the best.)
Also, you seem to not defend all of EA, but whatever part of EA is most easily defensible in the particular paragraph, such as arguing that EA does not require people to always follow its moral implications, only sometimes—which some EAers might agree with, but certainly not all.
Can you mention some places where you think he has strawmanned people and what you think the correct interpretation of them is?
This is more of a misread than a strawman, but on page 8 the paper says:
I don’t think saying that Adams, Crary, and Gruen “illegitimately presuppose that “complicity” with suboptimal institutions entails net harm” is correct. The paper misunderstands what they were saying. Here’s the full sentence (emphasis added):
I interpret it as saying:
In other words, it is an empirical claim that the way EA is carried out in practice has some counterproductive results. It is not a normative claim about whether complicity with suboptimal institutions is ever okay.
But they never even try to argue that EA support for “the very social structures that cause suffering” does more harm than good. As indicated by the “thereby”, they seem to take the mere fact of complicity to suffice for “undermining its efforts to ‘do the most good’.”
I agree that they’re talking about the way that EA principles are “actualized”. They’re empirically actualized in ways that involve complicity with suboptimal institutions. And the way these authors argue, they take this fact to suffice for critique. I’m pointing out that this fact doesn’t suffice. They need to further show that the complicity does more harm than good.
Here is my criticism in more detail:
It starts here in the abstract—writing this way immediately sounds condescending to me, making disagreement with EA sound like an entirely unreasonable affair. So this is devaluing the position of a hypothetical someone opposing EA, rather than honestly engaging with their criticisms.
On systemic change: The whole point is that the effects of systemic change are very hard to estimate. It is like sitting on a local maximum of awesomeness: we know that there must be higher hills—higher maxima—out there, but we do not know how to get there, and any particular systemic change might just as well make things worse. But if EA principles told us to only ever sit at this local maximum and never even attempt to go anywhere else, then those would not be principles I would be happy following.
So yes, people who support systemic change often do not have the mathematical basis to argue that it will necessarily pay off—but that does not mean there is no basis for thinking that attempting it is a good option.
Or, more clearly: By not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.
On earning to give: Again, the arguments are very simplified here. Whether a career is permissible is not a binary choice, true or false. It is a gradient, and it fluctuates and evolves over time, depending on how what you are asked to do on the job changes, and on how your own moral views and society’s shift. So the question is not “among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?” but “what tradeoff should I be willing to make between the career being more morally iffy and the positive impact I can have by donating from a larger income?” Additionally, if you still donate only e.g. 10% of your income but your income is higher, there is also a larger amount of money you do not donate, which counterfactually you might spend on things you do not actually need but that must be produced and shipped and so on, in the worst case making the world a worse place for everyone. So even “more money = more good” is not a simple truth that just holds.
And despite all these simplifications, the sentence “This claim is … true” just really, really gets to me—such binary language again completely sweeps any criticism, any debate, any nuance under the rug.
On billionaire philanthropy: Yes, billionaires are capable of doing immense good, and again, I have not seen anyone actually arguing against that. The most common arguments I am aware of against billionaire philanthropists are (1) that billionaires simply shouldn’t exist in the first place: yes, they have the capacity to do immense good, but they also have the capacity to do immense harm, and no single person should be allowed to have the capacity to do so much harm to living beings on a whim; and (2) that billionaires can pay people to advise them on how best to make it look like they are doing good when actually they are not (for example, by creating huge charitable foundations and equipping them with lots of money, while those foundations then just re-invest that money into projects run by companies the billionaires hold shares in, etc.).
So that is what I mean by “arguing against strawpeople”—claims are so far simplified and/or misrepresented that they do not accurately represent the actual positions of EAers, or of people who criticise them.
That’s a non-sequitur. There’s no inconsistency between holding a certain conclusion—that “every decent person should share the basic goals or values underlying effective altruism”—and “honestly engaging with criticisms”. I do both. (Specifically, I engage with criticisms of EA principles; I’m very explicit that the paper is not concerned with criticisms of “EA” as an entity.)
I’ve since reworded the abstract, as the “every decent person” phrasing seems to rub people the wrong way. But it is my honest view. EA principles = beneficentrism, and rejecting beneficentrism is morally indecent. That’s a view I hold, and I’m happy to defend it. You’re trying to assert that my conclusion is illegitimate or “dishonest” prior to even considering my supporting reasons, and that’s frankly absurd.
Yes, and my “whole point” is to respond to this by observing that one’s total evidence either supports the gamble of moving in a different direction, or it does not. You don’t seem to have understood my argument, which is fine (I’m guessing you don’t have much philosophy background), but it really should make you more cautious in your accusations.
It’s all about uncertainty—that’s what “in expectation” refers to. I’m certainly not attributing certainty to the proponent of systemic change—that would indeed be a strawperson, but it’s an egregious misreading to think that I’m making any such misattribution. (Especially since the immediately preceding paragraphs were discussing uncertainty, explicitly and at length!)
Again, I think this is just a result of your not being familiar with the norms of philosophy. Philosophers talk about true claims all the time, and it doesn’t mean that they’re failing to engage honestly with those who disagree with them.
Now this is a straw man! The view I defend there is rather that “we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings.” Reasons always need to be balanced against countervailing reasons. The point of the appeal to permissibility is just to allow that some careers may be ruled out as a matter of deontic constraints. But obviously more moderate harms also need to be considered, and balanced against the benefits, and I never suggest otherwise.
Those aren’t arguments against how EA principles apply to billionaires, so aren’t relevant to my paper.
You didn’t accurately identify any misrepresentations or fallacies in my paper. It’s just a mix of (i) antecedently disliking the strength of my conclusion, (ii) not understanding philosophy, and (iii) your being more interested in a different topic than what my paper addresses.
This criticism suggests that you have not understood the point of the paper. I’m defending the core ideas behind EA. It’s just a basic logical point that defending EA principles as such does not require defending the more specific views of particular EAs.
This is far too vague to be helpful (and so comes off as gratuitously insulting). What instances? Which of my specific counterarguments do you find unpersuasive, and why? I do indeed conclude that the core principles of EA are undeniably correct. I never claim that any specific causes EAs “advocate for” are even correct at all, let alone obviously so.
I agree with that methodological claim. (I flag the brevity just to indicate that there is, of course, always more that could be said. But I wouldn’t say what I do if I didn’t think it was productive and important, even in its brief form.) I believe that I made convincing arguments that go beyond “vaguely pointing… and hoping for the best.” Perhaps you could apply this same methodological principle to your own comments.
I understand that my vague criticism was unhelpful; sadly, when posting I did not have enough time to really point out specific instances, and thought it would still be higher value to mention it in general than to just not write anything at all.
I will try to find the time now to write down my criticisms in more detail; once I am ready I will also reply to Dr. David Mathers’ question above, as he asked for specifics too (and by commenting both here and there, you both will be notified. Hooray.)
I was confused by the first paragraph of Paul’s comment.
Is it saying that EA assumes that “the best” way to help people = “the most effective” way to help people?
If so, could you please define what you meant “best” and “effective”?
I get the impression Paul has some distinction in mind, but I don’t understand what it is. (Paragraph copied below)
Yes, I am claiming that when Effective Altruism is defined as “trying to find the best ways” what it really means is “trying to find the most effective ways”. As far as I can tell the reasons for using “the best” are to avoid a circular definition (“Effective Altruism is trying to find the most effective ways to perform altruism”) and as a rhetorical device to deflect criticism (“Surely you can’t object to trying to find the best ways of helping others?!”).
Despite protests to the contrary, EA is a form of utilitarianism, and when the word “effective” is used it has generally been in the sense of “cost-effective”. If you are not an effective altruist (which I am not), then cost-effectiveness—while important—is an instrumental value rather than an intrinsic value. Depending on your ethical framework, therefore, what you define as “the best way” to help people will differ from what the effective altruist defines it as.
p.s. I’m aware that Oxfam’s programs are also currently decided by “somebody sitting in a comfortable office somewhere [who] has done some calculations”, and I object to this as well while recognising that it may be inevitable given how the world works. My argument is that EA is no better than this current situation in principle, and may be worse than this *in practice* given that it could lead to the complete abandonment of entire countries.