I’ve previously suggested a constraint on warranted hostility: the target must be ill-willed and/or unreasonable. Common hostility towards either utilitarianism or effective altruism seems to violate this constraint. I could see someone reasonably disagreeing with the former view, and at least abstaining from the latter project, but I don’t think either could reasonably be regarded as inherently ill-willed or unreasonable.
Perhaps the easiest way to see this is to just imagine a beneficentric virtue ethicist who takes scope-sensitive impartial benevolence to be the central (or even only) virtue. Their imagined virtuous agent seems neither ill-willed nor unreasonable. But the agent thus imagined would presumably be committed to the principles of effective altruism. On the stronger version, where benevolence is the sole virtue, the view described is just utilitarianism by another name.[1]
The Good-Willed Utilitarian
A lot of my research is essentially about why an ideally virtuous person would be a utilitarian or something close to it. (Equivalently: why benevolence plausibly trumps other virtues in importance.) Many philosophers make false assumptions about utilitarianism that unfairly malign the view and its proponents. For a series of important correctives, see, e.g., Bleeding-Heart Consequentialism, Level-up Impartiality, Theses on Mattering, How Intention Matters, and Naïve Instrumentalism vs Principled Proceduralism. (These posts should be required reading for anyone who wants to criticize utilitarianism.)
Conversely, one of my central objections to non-consequentialist views is precisely that they seem to entail severe disrespect or inadequate concern for agents arbitrarily disadvantaged under the status quo. My new paradox of deontology and pre-commitment arguments both offer different ways of developing this underlying worry. As a result, I actually find it quite mysterious that more virtue ethicists aren’t utilitarians. (Note that the demandingness objection to utilitarianism is effectively pleading to let us be less than ideally virtuous.)
At its heart, I see utilitarianism as the combination of (exclusively) beneficentric moral goals + instrumental rationality. Beneficentric goals are clearly good, and plausibly warrant higher priority than any competing goals. (“Do you really think that X is more important than saving and improving lives?” seems like a pretty compelling objection for any non-utilitarian value X.) And instrumental rationality, like “competence”, is an executive virtue: good to have in good people, bad to have in bad people. It doesn’t turn good into bad. So it’s very puzzling that so many seem to find utilitarianism “deeply appalling”. To vindicate such a claim, you really need to trace the objectionability back to one of the two core components of the view: exclusively beneficentric goals, or instrumental rationality. Neither seems particularly “appalling”.[2]
Effective Altruism and Good Will
Utilitarianism remains controversial. I get that. What’s even more baffling is that hostility extends to effective altruism: the most transparently well-motivated moral view one could possibly imagine. If anyone really thinks that the ideally virtuous agent would be opposed to either altruism or effectiveness, I’d love to hear their reasoning! (I think this is probably the most clear-cut no-brainer in all of philosophy.)
A year ago, philosopher Mary Townsend took a stab, writing that:
any morality that prioritizes the distant, whether the distant poor or the distant future, is a theoretical-fanaticism, one that cares more about the coherence of its own ultimate intellectual triumph—and not getting its hands dirty—than about the fate of human beings…
This is so transparently false that I cannot imagine what coherent thought she was trying to express. Is she really not aware that distant people are people too? Combine concern for all human beings with the empirical fact that we can often do more for the distant poor (and perhaps the distant future, in expectation), and Townsend’s rhetoric immediately collapses into nonsense. Any morality that cares about “the fate of human beings” without restriction could very easily end up “prioritizing the distant” for obviously good and virtuous reasons.
Prioritizing the homeless person before your eyes over distant children dying of malaria is not virtuous. As I argue in Overriding Virtue, it rather reflects a failure of empathy: you feel for those you see (good so far!), but not those you don’t (which is obviously less than morally ideal). To make up for the latter failing, a more virtuous agent will use their abstract benevolence to compensate, and ensure that the distant needy aren’t unjustly neglected as a result of their own emotional shortcomings. Put another way: to prioritize the lesser nearby need, simply because it’s more salient to you, is a form of moral self-indulgence—prioritizing your own feelings over the fate of real human beings. Nobody should consider such emotional self-indulgence to be ideally virtuous.
So the virtuous agent would obviously be an effective altruist. But a second mistake I want to address from Townsend’s essay is her dismissal of moral interest in quality of will:
Being fair and just to MacAskill and the still-grant-dispersing EA community doesn’t mean we have to search out a yet-uncut thread of quixotic moral exemplariness in them. The assumption that there must remain something praiseworthy in EA flips us into a bizzarro-Kantianism wherein we long for a holy and foolish person who fails in everything consequential yet whose goodwill, as Kant put it, shines like a jewel. In fact, the desire to admire EA despite its flaws indulges a quixotic longing to admire an ineffective altruist. Do not be deceived.
I think it’s very hard to deny that effective altruism is good in expectation, for the reasons set out in What “Effective Altruism” Means to Me. It also seems clear that the actual positive impact of the EA movement to date dwarfs even the harm done by SBF’s massive fraud (which is not to excuse or downplay the latter, but just to emphasize the immensity of the former).
But suppose that weren’t the case. Suppose that, despite his best efforts and all the evidence to the contrary, MacAskill turned out to be an “ineffective altruist” for some unpredictable reason—imagine SBF later breaks out of jail and somehow nukes New York City, and none of it would have happened if it weren’t for WM’s original encouragement to consider “earning to give”. You might then say (speaking very loosely) that WM “failed in everything consequential”.[3] Even then, would it follow that he’s a bad person? Obviously not! To think otherwise is just an abject failure to distinguish the two dimensions of moral evaluation.
You don’t have to be a “bizarro-Kantian” to think that quality of will is importantly distinguishable from actual outcomes. Any minimally competent ethicist should appreciate this basic point. What sort of virtue ethicist would deny that there is “something praiseworthy” in having virtuous motivations, even in the event that the agent’s best efforts turn out unfortunately (through no fault of their own)?
Townsend here sounds like the crudest of crude utilitarians. The alternative to her unmitigated hostility to effective altruism—even if you believed it to have turned out unfortunately—is not “bizarro-Kantianism”, but universal common sense. Of course an individual’s intentions are relevant to our moral assessment of them. And of course there is something deeply admirable about altruistic concern, especially when melded with concern to be effective in one’s altruism. These are literally the intrinsically best motivations any moral agent could possibly have.[4] What sort of person would deny this?
- ^
Or, equivalently, a form of Rossian deontology on which beneficence and non-maleficence are equally weighted and together exhaust the prima facie duties.
- ^
This breakdown is also helpful for bringing out why some common “objections”, e.g. cluelessness and abusability, are really nothing of the sort. Nobody should think either one speaks to the truth of the view, since neither casts doubt on the appropriateness of either beneficentric goals or instrumental rationality. They’re more like expressions of wishful thinking: “our task would be easier (in some sense) if utilitarianism were false.” But so what?
- ^
This has to be speaking loosely, because lives aren’t fungible. So the lives saved are still consequential, even if outweighed!
- ^
Of course, that’s not to say that actually-existing effective altruists are themselves so virtuous. You could imagine more cynical motivations in many cases. Realistically, I think all human beings (including both EAs and their critics) are apt to have mixed motivations. But I generally prefer to give folks the benefit of the doubt; I don’t know that much is gained by defaulting to cynical interpretations when a more charitable alternative is available. It’s very obvious what charitable interpretation is available for making sense of effective altruists. It’s much harder to charitably interpret the anti-EAs, given their apparent indifference to the obvious harms they risk in discouraging effective philanthropy.
I really like the idea of “a beneficentric virtue ethicist who takes scope-sensitive impartial benevolence to be the central virtue”, and feel that something approximating this would be a plausible utilitarian recommendation for the heuristics people should actually use when deciding how to act. (For this purpose, it obviously wouldn’t work to include the parenthetical “(or even only)”.)
However, I’m confused by your confusion at people being appalled by utilitarianism. It seems to me that the heart of it is that utilitarianism, combined with poor choices in instrumental rationality, can lead to people doing really appalling things. Philosophically, you may reasonably object that this is a failure of instrumental rationality, not of utilitarianism. But humans are notoriously bad at instrumental rationality! From a consequentialist perspective it’s a pretty big negative to recommend something which, when combined with normal human levels of failure at instrumental rationality, can lead to catastrophic failures. It could be that it’s still, overall, a good thing to recommend, but I’d certainly feel happier if people doing so (unless they’re explicitly engaged just in a philosophical truth-seeking exercise, and not concerned with consequences) would recognise and address this issue.
Are you disagreeing with my constraint on warranted hostility? As I say in the linked post on that, I think it’s warranted to be hostile towards naive instrumentalism, since it’s actually unreasonable for limited human agents to use that as their decision procedure. But that’s no part of utilitarianism per se.
You say: it could turn out badly to recommend X, if too many people would irrationally combine it with Y, and X+Y has bad effects. I agree. That’s a good reason for being cautious about communicating X without simultaneously communicating not-Y. But that doesn’t warrant hostility towards X, e.g. philosophical judgments that X is “deeply appalling” (something many philosophers claim about utilitarianism, which I think is plainly unwarranted).
There’s a difference between thinking “there are risks to communicating X to people with severe misunderstandings” and “X is inherently appalling”. What baffles me is philosophers who claim the latter, when X = utilitarianism (and even more strongly, for X = EA).
Mmm, while I can understand “appalling” or “deeply appalling”, I don’t think “inherently appalling” makes sense to me (at least coming from philosophers, who should be careful about their language use). I guess you didn’t use that phrase in the original post and now I’m wondering if it’s a precise quote.
(I’d also missed the fact that these were philosophical judgements, which makes me think it’s reasonable to hold them to higher standards than otherwise.)
I don’t think there’s any such thing as non-inherent appallingness. To judge X as warranting moral disgust, revulsion, etc. seems a form of intrinsic evaluation (attributing a form of extreme vice rather than mere instrumental badness).
Hence the paradigmatic examples being things like racist attitudes, not things like… optimism about the prospects were one to implement communism.
I can see where you’re coming from, but I’m not sure I agree. People would be appalled by restaurant staff not washing their hands after going to the toilet, and I think this is because it’s instrumentally bad (in an uncooperative way + may make people ill) rather than because it’s extreme vice.
But negligence / lack of concern for obvious risks to others is a classic form of vice? (In this case, the connection to toilet waste may amplify disgust reactions, for obvious evolutionary reasons.)
If you specify that the staff are from a distant tribe that never learned about basic hygiene facts, I think people would cease to be “appalled” in the same way, and instead just feel that the situation was very lamentable. (Maybe they’d instead blame the restaurant owner for not taking care to educate their staff, depending on whether the owner plausibly “should have known better”.)
Thanks, that helped me sharpen my intuitions about what triggers the “appalled” reaction.
I think I’m still left with: People may very reasonably say that fraud in the service of effective altruism is appalling. Then it’s pretty normal and understandable (even if by my lights unreasonable) to label as “appalling” things which you think will predictably lead others to appalling action.
I mean, lots of fallacious reasoning is “normal and understandable”, but I’m still confused when philosophers do it—I expect better from them!
A tangential request: I’d welcome feedback about whether I should cross-post more or fewer of my (potentially EA-relevant) philosophy posts to this forum.
My default has generally been to err on the side of “fewer”, since I figure anyone who generally likes my writing can freely subscribe to my substack. And others might dislike my posts and find them annoying (at least in too high a quantity, and I do tend to blog quite a lot). OTOH, someone did recently point out that they appreciate the free audio transcription offered on the forum. And maybe others like my writing but would rather find it here than in their email inbox or on another website.
So: a poll. Agree-vote this comment if you’d like me to cross-post marginally more of my posts (like this one, which seemed kind of “on the margin” by my current standards). Disagree-vote if you’d prefer fewer or about the same.
(I plan to leave most of my pure ethical theory stuff—on utilitarianism, population ethics, etc.—exclusively on the substack in either case.)
‘So it’s very puzzling that so many seem to find utilitarianism “deeply appalling”. To vindicate such a claim, you really need to trace the objectionability back to one of the two core components of the view: exclusively beneficentric goals, or instrumental rationality. Neither seems particularly “appalling”.’
I think the second sentence here is probably wrong (even though I also distrust people who can’t see the force of the arguments in favour of welfarist consequentialism, which are indeed strong). It’s normal to evaluate ideas on their entailments as well as their intrinsic appeal. For example, the t-scheme ‘“p” is true iff p’ has extremely high intuitive appeal. But (as you’ll know as a philosopher, obviously!) when combined with other highly intuitive principles it entails that the liar sentence “This sentence is false” is both true and false. Whether or not it is actually correct, it is reasonable to worry that maybe this shows the t-scheme is just wrong (i.e. that not all instances of it are true), even if you have no explanation of why it is wrong (though ultimately you’d want one, obviously). I think examples like this show that even apparently extremely unobjectionable claims can be reasonably doubted if their consequences are bad enough. Indeed, it’s impossible to consistently avoid thinking this, since what paradoxes like the liar or the sorites show is precisely that we can’t consistently hold on to every obvious platitude involved in generating them (or even inconsistently hold on to all of them, since the law of noncontradiction is itself a platitude).
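For concreteness, here is a minimal sketch of that derivation (with $L$ as an illustrative name for the liar sentence and $\mathrm{True}(\cdot)$ a truth predicate; the formalization is just a sketch of the standard argument):

$$
\begin{aligned}
&L \;:=\; \text{“$L$ is not true”} && \text{(the liar sentence)}\\
&\mathrm{True}(L) \;\leftrightarrow\; \text{“$L$ is not true”} && \text{(t-scheme instance)}\\
&\mathrm{True}(L) \;\leftrightarrow\; \neg\,\mathrm{True}(L) && \text{(unpacking what $L$ says)}\\
&\mathrm{True}(L) \,\wedge\, \neg\,\mathrm{True}(L) && \text{(reasoning by cases: contradiction)}
\end{aligned}
$$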
Utilitarianism specifically has many consequences that sure seem like they are appalling, like that in theory one sadist torturing everyone in the world at once could be good if the sadist enjoys it enough. That seems like strong evidence against utilitarianism on its own, even though noting the appalling consequence doesn’t “trace the objectionability back to one of the two core components of the view”. Maybe you could argue that this is enough to cast doubt on utilitarianism but not enough to justify finding it “deeply appalling”, but if a view’s entailments are enough to cast doubt on it, why couldn’t they be enough to do the latter? In the case of the t-scheme, which is definitely not a stupid thing to believe in, despite the liar, the answer is “sure, the consequences are bad, but it’s SO obvious”. But it’s a substantive claim that something like that is true of utilitarianism.
The sadist example can be “traced back”—it casts doubt on a particular (hedonistic) axiology, i.e. a hedonistic interpretation of the beneficentric goals.
Executive summary: The author argues that utilitarianism and effective altruism are neither ill-willed nor unreasonable, and that high-minded critics who claim otherwise are mistaken or irrational.
Key points:
- Utilitarianism can be seen as the combination of beneficent goals and instrumental rationality, neither of which is inherently objectionable.
- Critics often make unfair assumptions about utilitarianism that misrepresent its goals and implications.
- Effective altruism stems from virtuous motivations of impartial benevolence and a desire to effectively help others, which should be praised rather than dismissed.
- Dismissing effective altruism for failing to prioritize the nearby ignores that the distant poor and future generations also deserve moral consideration.
- Even if effective altruists ended up being ineffective, their good intentions and virtuous motivations would still be praiseworthy.
- Criticisms of utilitarianism and effective altruism often reflect an indifference to obvious harms caused by discouraging effective philanthropic efforts.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
“constraint on warranted hostility: the target must be ill-willed and/or unreasonable.”
Trying to apply this constraint seems to conflict with non-violent communication norms of not assuming intent and keeping the discussion focused on harms/benefits/specific behaviours.
Seems compatible if you simply refrain from hostility altogether? The constraint identifies a necessary condition for hostility, not a sufficient one.