Moral Misdirection (full post)

I previously included a link to this as part of my trilogy on anti-philanthropic misdirection, but a commenter asked me to post the full text here for the automated audio conversion. This forum post combines my two Substack posts on ‘Moral Misdirection’ and on ‘Anti-Philanthropic Misdirection’. Apologies to anyone who has already read them.

Moral Misdirection

One can lie—or at least misdirect—by telling only truths.

Suppose Don shares news of every violent crime committed by immigrants (while ignoring those committed by native-born citizens, and never sharing evidence of immigrants positively contributing to society). He spreads the false impression that immigrants are dangerous and do more harm than good. Since this isn’t true, and promulgates harmful xenophobic sentiments, I expect most academics in my social circles would judge Don very negatively, as both (i) morally bad, and (ii) intellectually dishonest.

It would not be a convincing defense for Don to say, “But everything I said is literally true!” What matters is that he led his audience to believe much more important falsehoods.[1]

I think broadly similar epistemic vices (not always deliberate) are much more common than is generally appreciated. Identifying them requires judgment calls about which truths are most important. These judgment calls are contestable. But I think they’re worth making. (Others can always let us know if they think our diagnoses are wrong, which could help to refocus debate on the real crux of the disagreement.) People don’t generally think enough about moral prioritization, so encouraging more importance-based criticism could provide helpful correctives against common carelessness and misfocus.

Moral misdirection thus strikes me as an important and illuminating concept.[2] In this post, I’ll first take an initial stab at clarifying the idea, and then suggest a few examples. (Feel free to add more in the comments!)

Defining Moral Misdirection

Moral misdirection involves leading people morally astray, specifically by manipulating their attention. So explicitly asserting a sincerely believed falsehood doesn’t qualify. But misdirection needn’t be entirely deliberate, either. Misdirection could be subconscious (perhaps a result of motivated reasoning, or implicit biases), or even entirely inadvertent—merely negligent, say. Conversely, deliberately implicating something known to be false won’t necessarily count as “misdirection”. Innocent examples include simplification, or pedagogical “lies-to-children”. If a simplification helps one’s audience to better understand what’s important, there’s nothing dishonest about that—even if it predictably results in some technically false beliefs.

Taking all that into account, here’s my first stab at a conceptual analysis:

Moral misdirection, as it interests me here, is a speech act that functionally operates to distract one’s audience from more important moral truths. It thus predictably reduces the importance-weighted accuracy of the audience’s moral beliefs.

Explanation: Someone who is sincerely, wholeheartedly in error may have the objective effect of leading their audiences astray, but their assertions don’t functionally operate towards that end, merely in virtue of happening to be false.[3] Their good-faith erroneous assertions may rather truly aim to improve the importance-weighted accuracy of their audience’s beliefs, and simply fail. Mistakes happen.

At the other extreme, sometimes people deliberately mislead (about important matters) while technically avoiding any explicit assertion of falsehoods. These bad-faith actors maintain a kind of “plausible deniability”—a sheen of superficial intellectual respectability—while deliberately poisoning the epistemic commons. I find this deeply vicious.

But very often, I believe, people are negligent communicators. They just aren’t thinking sufficiently carefully or explicitly about what’s important in the dispute at hand. They may have other (perhaps subconscious) goals that they implicitly prioritize: making “their side” look good, and the “other side” look bad. When they communicate in ways that promote these other goals at predictable cost to importance-weighted accuracy, they are engaging in moral misdirection—whether they realize it or not.

Significance: I think that moral misdirection, so understood, is a great force for ill in the world: one of the major barriers to intellectual and moral progress. It is a vice that even many otherwise “good” people routinely engage in. Its avoidance may be the most important component of intellectual integrity. It’s disheartening to consider how rare this form of intellectual integrity seems to be, even amongst intellectuals (in part because attention to the question of what is truly important is so rare). By drawing explicit attention to it, I hope to make it more common.
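
To make “importance-weighted accuracy” a little more concrete before turning to examples, here is a toy formalization. (This is only a rough sketch, not part of the analysis itself: treating a context as a fixed list of propositions with numerical importance weights is a simplifying assumption.) Suppose p₁, …, pₙ are the morally relevant propositions at stake, wᵢ ≥ 0 measures how important it is for the audience to get pᵢ right, and cᵢ = 1 if the audience ends up with an accurate belief about pᵢ (otherwise cᵢ = 0). Then:

A = (w₁c₁ + … + wₙcₙ) / (w₁ + … + wₙ)

On this toy model, Don’s case shows how every explicit assertion can be true while A still falls: his reports set cᵢ = 1 for many low-weight propositions about particular crimes, while flipping cᵢ to 0 on the single high-weight proposition that immigrants do more good than harm.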

Three Examples

(1) Anti-Woke Culture Warriors

In a large, politically polarized country, you’ll find plenty of bad behavior spanning the political spectrum. So you can probably think of some instances of “wokeness run amok”. (If you wanted to, you could probably find a new example every week.) But as with Don the xenophobe, if you draw attention to all and only misbehavior from one specific group, you can easily exaggerate the threat they pose: creating the (mis)impression that wokeness is a grave threat to civilized society and should be our top political priority (providing sufficient reason to vote Republican, say).

As always, if someone is willing to explicitly argue for this conclusion—that wokeness really is the #1 problem in American society today—then I’ll give them points for intellectual honesty. (I’ll just disagree on the substance.)[4] But I think most recognize that this claim isn’t really defensible. And if one grants that Democrats are better on the more important issues (not all would grant this, of course), then it would constitute moral misdirection for one to engage in anti-woke culture warring without stressing the far graver threats from MAGA culture. Even if anti-woke critics were 100% correct about every particular dispute they draw attention to, it matters how important these particulars are compared to competing issues of concern.

Similar observations apply to many political disputes. Politics is absolutely full of moral misdirection. Like all intellectual vices, we find it easier to recognize when the “other side” is guilty of it. But it’s worth being aware of more generally. I think that academics have an especially strong obligation to communicate with intellectual honesty,[5] even if dishonesty may (lamentably) sometimes be justified for politicians.[6]

(2) Media Misdirection: “But Her Emails!”

An especially important form of moral misdirection comes from misplaced media attention. In an ideal world, the prominence of an issue in the news media would highly correlate with its objective moral importance. Real-world journalistic practices notoriously fall far short of this ideal.

Election coverage is obviously especially high stakes here, and it’s a perennial complaint that the media does not sufficiently focus on “the real issues” of importance: what practical difference it would make to have one candidate elected rather than the other. The media’s treatment of Hillary Clinton’s email cybersecurity as the #1 issue in the 2016 election was a paradigmatic example of moral misdirection. (No one could seriously believe that this was what an undecided voter’s decision rationally ought to turn on.)

A general lesson we can take from this is that scandals tend to absorb our attention in ways that are vastly disproportionate to their objective importance. (We would probably be better-off, epistemically, were we to completely ignore them.) Politicians exploit this, whipping up putative scandals to make the other side look bad. Media coverage of scandals would be much more responsible if it foregrounded analysis of how, if at all, any given “scandal” should change our expectations about how the candidate would govern.

(3) Anti-Vax Scaremongering

Here’s another clear example of moral misdirection: highlighting the “risks” of vaccines, while ignoring or downplaying the far greater risks from remaining unvaccinated.

For a subtler (and hence more philosophically interesting) variation on the case: Consider how, at the peak of the pandemic, with limited vaccines available, western governments suspended access to some COVID vaccines (AstraZeneca in Europe, Johnson & Johnson in the US) due to uncertain risks of side-effects.

As I argued in my 2022 paper, ‘Pandemic Ethics and Status Quo Risk’, the suspensions communicated a kind of moral misinformation:[7]

Public institutions ought not to engage in strategic deception of the public. The idea that vaccine risks outweigh (either empirically or normatively) the risks of being unvaccinated during the pandemic is an instance of public health misinformation that is troublingly prevalent in our society. When public health institutions implement alarmist vaccine suspensions or other forms of vaccine obstructionism on strategic grounds, this communicates and reinforces the false message that the vaccine risks warrant such a response. Rather than trying to manipulate the public by pandering to unwarranted fears, public institutions have an obligation to communicate accurate information and promote the policies that are warranted in light of that information.

The most important thing for anyone to know during the pandemic was that they would be better off vaccinated ASAP. Any message that undermined this most important truth thus constituted (inadvertent) moral misdirection. To avoid this charge, public communication around the risks and side-effects of vaccines should always have been accompanied by the reminder that the risks and side-effects of getting COVID while unvaccinated were far more severe. When public health agencies instead engaged in alarmist vaccine suspensions, this was both (i) harmful, and (ii) intellectually dishonest. It’s no excuse that what they said about the risks and uncertainty was true. They predictably led their audience to believe much more important falsehoods.

It’s reasonable for public health agencies to want to insulate tried-and-true vaccines from the reputational risks of experimental vaccines (due to irresponsible media alarmism). But I think they should find a better way to do this. (One option: make clear that they do not vouch for the safety of these vaccines the way that they do for others. Downgrade them to “experimental” status. But allow access, and further communicate that many individuals may find, in consultation with their doctors, that the vaccine remains a good bet for them given our current evidence—despite the uncertainty—because COVID most likely posed a greater risk.)

Misleading Appeals to Complexity

“X is more complex than you’d realize from proponents’ public messaging,” is a message that academics are very open to (we love complexity!). But it’s also a message that can very easily slide into misdirection, as becomes obvious when you plug ‘vaccine safety’ in place of ‘X’.

To repeat my central claims:

Honest communication requires taking care not to mislead your audience. Honest public communication requires taking care not to mislead general audiences. True claims can still (very predictably) mislead.

In particular, over-emphasizing the “uncertainties” of overall good things can easily prove misleading to general audiences. (It’s uncertain whether any given immigrant will turn out to be a criminal—or to be the next Steve Jobs—but it would clearly constitute moral misdirection to try to make the “risk” of criminality more salient, as nativist politicians too often do.) Public communicators should appreciate the risks they run—not just morally, but epistemically—and take appropriate care in how they communicate about high-stakes topics. Remember: if you mislead your audience into believing important falsehoods, that is both (i) morally bad, and (ii) dishonest. The higher the stakes, the worse it is to commit this moral-epistemic vice.

How to Criticize Good Things Responsibly

I think it’s almost always possible to find a responsible way to express your beliefs. And it’s usually worth doing so: even Good Things can be further improved, after all. (Or you might learn that your beliefs are false, and update accordingly.)

To responsibly criticize a (possibly) Good Thing, a good first step is to work out (i) what its proponents take to be the most important truth, and (ii) whether you agree on that point or not.

Either way, you should be honest and explicit about your verdict. If you think that proponents’ “most important truth” is either unimportant or false, you should explicitly explain why. That would be the most fundamental and informative criticism you could offer to their view. (I would love for critics of my views to attempt this!)

If you agree that your target is correct about the most important truth in the context at hand, then in a public-facing article you should probably start off by acknowledging this. And end by reinforcing it. Generally try not to mislead your audience into thinking that the important truth is false. After first doing no epistemic harm, in the middle you can pursue your remaining disagreements.[8] With any luck, everyone will emerge from the discussion with overall more accurate (importance-weighted) beliefs.

Anti-Philanthropic Misdirection

I’ve so far argued that honest communication aims to increase the importance-weighted accuracy of your audience’s beliefs. Discourse that predictably does the opposite on a morally important matter—even if the explicit assertions are technically true—constitutes moral misdirection. Emphasizing minor, outweighed costs of good things (e.g. vaccines) is a classic form that this can take. I’ll now turn to another important case study: exaggerating the harms of trying to do good.

What’s Important

Here’s something that strikes me as very important, true, and neglected:

Target-Sensitive Potential for Good (TSPG): We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against catastrophic risks). Doing so would be extremely worthwhile. In all these areas, it is worth making deliberate, informed efforts to try to do more good rather than less with our resources: Better targeting our efforts may make even more of a difference than the basic decision to help at all.

This belief, together with a practical commitment to acting upon it, is basically the defining characteristic of effective altruists. So, applying the above guidance on how to criticize good things responsibly, responsible critics of EA should first consider whether they agree that TSPG is true and important, and explain their verdict.

As I explain in a companion post (see #25), the stakes here are extremely high: whether or not people engage in acts of effective altruism is literally a matter of life or death for the potential beneficiaries of our moral efforts. A total lack of concern about these effects is not morally decent. Public-facing rhetoric that predictably creates the false impression that TSPG is false, or that acts of effective altruism are not worth doing, is more plainly and obviously harmful than any other speech I can realistically imagine philosophers engaging in.[9] It constitutes literally lethal moral misdirection.

Responsible Criticism

To draw attention to these stakes is not to claim that people “aren’t allowed to criticize EA.” As I wrote above:

I think it’s almost always possible to find a responsible way to express your beliefs. And it’s usually worth doing so: even Good Things can be further improved, after all. (Or you might learn that your beliefs are false, and update accordingly.)

But it requires care. And the mud-slinging vitriol of EA’s public critics is careless in the extreme, elevating lazy hostile rhetoric over lucid ethical analysis.

There’s no reason that criticism of EA must take this vicious form. You could instead highlight up-front your agreement with TSPG (or whatever other important neglected truths you agree we do well to bring more attention to), before going on to calmly explain your disagreements.

The hostile, dismissive tone of many critics seems to communicate something more like “EAs are stupid and wrong about everything.” (Even if this effect is not deliberate, it’s entirely predictable that vitriolic articles will have this effect on first-world readers who have every incentive to find an excuse to dismiss EA’s message. I’ve certainly seen many people on social media pick up—and repeat—exactly this sort of indiscriminate dismissal.) If TSPG is true, then EAs are right about the most important thing, and it’s both harmful and intellectually dishonest to imply otherwise.

Of course, if you truly think that TSPG is false, then by all means explicitly argue for that. (Similarly, regarding my initial examples of moral misdirection: if public health authorities ever truly believed that some vaccines were more dangerous than COVID itself, they should say so and explain why. And if immigrants truly caused more harm than benefit to their host societies, that too would be important to learn.) It’s vital to get at the truth about important questions, and that requires open debate. I’m 100% in favor of that.

But if you agree that TSPG is true and important, then you really should take care not to implicitly communicate its negation when pursuing less-important disagreements.

The critics might not realize that they’re engaged in moral misdirection,[10] any more than Don the xenophobe does.[11] I expect the critics don’t explicitly think about the moral costs of their anti-philanthropic advocacy: that less EA influence means more kids dying of malaria (or suffering lead exposure), less effective efforts to mitigate the evils of factory farming, and less forethought and precautionary measures regarding potential global catastrophic risks. But if you’re going to publicly advocate for less altruism and/or less effective altruism in the world, you need to face up to the reality of what you’re doing![12]

Wenar’s Counterpart on “Deaths from Vaccines”

I previously discussed how academic audiences may be especially susceptible to moral misdirection based upon misleading appeals to complexity. “Things are more complex than they seem,” is a message that appeals to us, and is often true!

But true claims can still (very predictably) mislead. So when writing for a general audience on a high-stakes issue, in a very prominent venue, public intellectuals have an obligation not to reduce the importance-weighted accuracy of their audience’s beliefs.

Leif Wenar egregiously violated this obligation with his WIRED article, ‘The Deaths of Effective Altruism’. And (judging by my social media feeds) a hefty chunk of the philosophy profession publicly cheered him on.

I can’t imagine that an implicitly anti-vax screed about “Deaths from Vaccines” would have elicited the same sort of gushing praise from my fellow academics. But it’s structurally very similar, as I’ll now explain.

Wenar begins by suggesting, “When you meet [an effective altruist], ask them how many people they’ve killed.” He highlights various potential harms from aid (many of which are not empirically well-supported, and don’t plausibly apply to GiveWell’s top charities in particular, while the few that clearly do apply seem rather negligible compared to the benefits), while explicitly disavowing full-blown aid skepticism: rather, he compares aid to a doctor who offers useful medicine that has some harmful side-effects.[13]

His anti-vax counterpart writes that he “absolutely does not mean that vaccines don’t work… Yet what no one in public health should say is that all they’re doing is improving health.” Anti-vax Wenar goes on to describe “haranguing” a pro-vaccine visiting speaker for giving a conceptual talk explaining how many small health benefits (from vaccinating against non-lethal diseases) can add up to a benefit equivalent to “saving a life”. Why does this warrant haranguing? Because vaccines are so much “more complex than ‘jabs save lives’!”

Wenar laments that the speaker didn’t see the value in this point—their eyes glazed over with the “pro-vax glaze”. He interprets this as the speaker having a hero complex, and fearing “He’s trying to stop me.” As I explain on the EA Forum, Wenar’s “hero complex” seems an entirely gratuitous projection. But it would seem very reasonable for the pro-vax speaker to worry that this haranguing lunatic was trying to stop or undermine net-beneficial interventions. I worry that, too!

People are very prone to status-quo bias, and averse to salient harms. If you go out of your way to make harms from action extra-salient, while ignoring (far greater) harms from inaction, this will very predictably lead to worse decisions. We saw this time and again throughout the pandemic, and now Wenar is encouraging a similarly biased approach to thinking about aid. Note that his “dearest test” does not involve vividly imagining your dearest ones suffering harm as a result of your inaction; only action.[14] Wenar is here promoting a general approach to practical reasoning that is systematically biased (and predictably harmful as a result): a plain force for ill in the world.[15]

Wenar scathingly criticized GiveWell—the most reliable and sophisticated charity evaluators around—for not sufficiently highlighting the rare downsides of their top charities on their front page.[16] This is insane: like complaining that vaccine syringes don’t come with skull-and-crossbones stickers vividly representing each person who has previously died from complications. He is effectively complaining that GiveWell refrains from engaging in moral misdirection. It’s extraordinary, and really brings out why this concept matters.

Honest public communication requires taking care not to mislead general audiences.

Wenar claims to be promoting “honesty”, but the reality is the opposite. My understanding of honesty is that we aim to increase importance-weighted accuracy in our audiences. It’s not honest to selectively share stories of immigrant crime, or rare vaccine complications, or that one time bandits killed two people while trying to steal money from an effective charity. It’s distorting. There are ways to carefully contextualize these costs so that they can be discussed honestly without giving a misleading impression. But to demand, as Wenar does, that costs must always be highlighted to casual readers is not honest. It’s outright deceptive.

Further Reading

There’s a lot more to say about the bad reasoning in Wenar’s article (and related complaints from other anti-EAs). One thing that I especially hope to explore in a future post is how deeply confused many people (evidently including Wenar) are about the role of quantitative tools (like “expected value” calculations) in practical reasoning about how to do the most good. But that will have to wait for another day.

In the meantime, I recommend also checking out the following two responses:

  1.

    As I was finishing up this post, I saw that Neil Levy & Keith Raymond Harris offer a similar example of “truthful misinformation” on the Practical Ethics blog. They’re particularly interested in communication that induces “false beliefs about a group”, and don’t make the general link to importance that I focus on in this post.

  2.

    Huge thanks to Helen for many related discussions over the years that have no doubt shaped my thoughts—and for suggestions and feedback on an earlier draft of this post.

  3.

    A tricky case: what if they misdirect as a result of sincerely but falsely believing that what they’re drawing our attention to is really more important than what they’re distracting us from? I’m not sure how best to extend the concept to this case. (Maybe it comes down to whether their false belief about importance is reasonable or not?) Either way, the main claim I want to make about this sort of case is that we would make more dialectical progress by foregrounding the background disagreement about importance.

  4.

    I might be more sympathetic to a more limited claim, e.g. that excessive wokeness is one of the worst cultural tendencies on university campuses. (I don’t have a firm view on the matter, but that at least sounds like a live possibility—I wouldn’t be shocked if it turned out to be true.) But I don’t think campus culture is the most important political issue in the world. And I certainly don’t trust Republican politicians to be principled defenders of academic freedom!

  5.

    It’s obviously valuable for society to have truth-seeking institutions and apolitical “experts” who can be trusted to communicate accurate information about their areas of expertise. When academics behave like political hacks for short-term political gain, they are undermining one of the most valuable social institutions that we have. As I previously put it: “Those on the left who treat academic research as just another political arena for the powerful to enforce their opinions as orthodoxy are making DeSantis’ case for him—why shouldn’t a political arena be under political control? The only principled grounds to resist this, I’d think, is to insist that academic inquiry isn’t just politics by another means.”

  6.

I find this really sad, but I assume an intellectually honest politician would (like carbon taxes) be a dismal political failure. Matthew Yglesias has convinced me of the virtues of political pandering. But that’s very much a role-specific virtue. Good politicians should pander so that they’re able to get the democratic support needed to do good things, given the realities of the actually-existing electorate and the fact that their competition will otherwise win and do bad things. As a consequence, no intelligent person should believe what politicians say. But, as per the previous note, it’s really important that people in many other professions (e.g. academics) be more trustworthy!

  7.

I also argued that killing innocent people (by blocking their access to life-saving vaccines) is not an acceptable means of placating the irrationally vaccine-hesitant. (I’m a bit surprised that more non-consequentialists weren’t with me on this one!)

  8.

    Helen pointed me to this NPR article on the “perils of intense meditation” as a possible exemplar. They highlight in their intro that “Meditation and mindfulness have many known health benefits,” and conclude by noting that “the podcast isn’t about the people for whom this works.… The purpose is to scrutinize harm that is being done to people and to question why isn’t the organization itself doing more to prevent that harm.” This seems perfectly reasonable, and the framing helps to reduce the risk of misleading their audience.

  9.

    Compare all the progressive hand-wringing over wildly speculative potential for causing “harm” whenever politically-incorrect views are expressed in obscure academic journals. Many of the same people seem completely unconcerned about the far more obvious risks of spreading anti-philanthropic misinformation. The inconsistency is glaring.

  10.

    My best guess at what is typically going on: I suspect many people find EAs annoying. So they naturally feel some motivation to undermine the movement, if the opportunity arises. And plenty of opportunities inevitably do arise. (When a movement involves large numbers of people, many of whom are unusually ambitious and non-conformist, some will inevitably mess up. Some will even be outright crooks.) But once again, even if some particular complaints are true, that’s no excuse for predictably leading their audiences to believe much more important falsehoods.

  11.

    One difference: Don’s behavior is naturally understood as stemming from hateful xenophobic attitudes. I doubt that most critics of EA are so malicious. But I do think they’re morally negligent (and very likely driven by motivated reasoning, given the obvious threat that EA ideas pose to either your wallet or your moral self-image). And the stakes, if anything, are even higher.

  12.

    In the same way, I wish anyone invoking dismissive rhetoric about utilitarian “number-crunching” would understand that those numbers represent people’s lives, and it is worth thinking about how we can help more rather than fewer people. It would be nice to have a catchy label for the failure to see through to the content of what’s represented in these sorts of cases. “Representational myopia,” perhaps? It’s such a common intellectual-cum-moral failure.

  13.

    Though he doesn’t even mention GiveDirectly, a long-time EA favorite that’s often treated as the most reliably-good “baseline” for comparison with other promising interventions.

  14.

    As Bentham’s Bulldog aptly notes:

    Perhaps Wenar should have applied the “dearest test” before writing the article. He should have looked in the eyes of his loved ones, the potential extra people who might die as a result of people opposing giving aid to effective charities, and saying “I believe in my decisions, enough that I’d still make them even if one of the people who could be hurt was you.”

  15.

    As Scott Alexander puts it:

    I want to make it clear that I think people like this Wired writer are destroying the world. Wind farms could stop global warming—BUT WHAT IF A BIRD FLIES INTO THE WINDMILL, DID YOU EVER THINK OF THAT? Thousands of people are homeless and high housing costs have impoverished a generation—BUT WHAT IF BUILDING A HOUSE RUINS SOMEONE’S VIEW? Medical studies create new cures for deadly illnesses—BUT WHAT IF SOMEONE CONSENTS TO A STUDY AND LATER REGRETS IT? Our infrastructure is crumbling, BUT MAYBE WE SHOULD REQUIRE $50 MILLION WORTH OF ENVIRONMENTAL REVIEW FOR A BIKE LANE, IN CASE IT HURTS SOMEONE SOMEHOW.

    “Malaria nets save hundreds of thousands of lives, BUT WHAT IF SOMEONE USES THEM TO CATCH FISH AND THE FISH DIE?” is a member in good standing of this class. I think the people who do this are the worst kind of person, the people who have ruined the promise of progress and health and security for everybody, and instead of feting them in every newspaper and magazine, we should make it clear that we hate them and hold every single life unsaved, every single renewable power plant unbuilt, every single person relegated to generational poverty, against their karmic balance.

    They never care when a normal bad thing is going on. If they cared about fish, they might, for example, support one of the many EA charities aimed at helping fish survive the many bad things that are happening to fish all over the world. They will never do this. What they care about is that someone is trying to accomplish something, and fish can be used as an excuse to criticize them. Nothing matters in itself, everything only matters as a way to extract tribute from people who are trying to do stuff. “Nice cause you have there . . . shame if someone accused it of doing harm.”

  16.

    Note that GiveWell is very transparent in their full reports: that’s where Wenar got many of his examples from. But to list “deaths caused” on the front page would mislead casual readers into thinking that these deaths were directly caused by the interventions. Wenar instead references very indirectly caused deaths—like when bandits killed two people while trying to steal money from an effective charity, or when a charity employs a worker who was previously doing other good work. Even deontologists should not believe in constraints against unintended indirect harm of this sort—that would immediately entail total paralysis. Morally speaking, every sane view should agree that these harms merely count by reducing the net benefit. They aren’t something to be highlighted in their own right.