Effective altruism in a non-ideal world

Introduction

Imagine that you are trying to decide if some action A is morally justified. The action might be donating to a particular charity, adopting a vegetarian diet, advocating for a treaty to reduce the risk of accidental nuclear war, or supporting legislation to create a universal child allowance. You go through a three-step process:

Moral reasoning: You begin your inquiry by thinking carefully about the moral values at stake. You try to put aside your personal interests and think impartially about what we owe to each other. You decide that the most reasonable approach is to maximize the welfare of all human beings (and perhaps animals), both those alive now and those not yet born, in an impartial manner.

Consequence analysis: You then need to decide if A is likely to promote long-term welfare. To do this you search for evidence on relevant empirical questions. You drink coffee and try to evaluate the evidence in an open-minded way. You compare A to various alternatives. You decide that A is indeed justified.

Persuasion: Having decided that A is justified, you turn to the task of persuading others to support A with you. You do this using the same moral and empirical reasoning that led you to conclude that A is justified. For example, you try to persuade people that morality requires us to put equal weight on the welfare of all existing and future people. You discuss in detail the theoretical and empirical judgments that lead you to think A maximizes long-term welfare.

In this essay I want to consider the final step in this hypothetical process. Careful utilitarian reasoning is useful for motivating and guiding pro-social behavior on the part of people who are naturally cosmopolitan in their moral orientation, able and willing to engage with complicated and ambiguous arguments about the causal efficacy of different strategies for promoting social welfare, and generally optimistic about our ability to use reason to persuade others and effect social change. However, welfare promotion often requires political action, and politics requires winning the support of people who are not receptive to utilitarian arguments about morality or policy and who are distrustful of change and collective action. In the messy world of democratic politics, winning support for welfare-improving policies sometimes requires meeting people who are not naturally attracted to utilitarian reasoning where they are: appealing to their values, addressing their concerns, and winning their trust. This raises the possibility that utilitarian reasoning can be self-defeating in practice.

In this essay I describe a simple model in which persuasion depends on a combination of temperament, values, trust, and empirical arguments. In this model, utilitarian reasoning can be counterproductive even if values are potentially subject to rational revision and utilitarianism is substantively correct. I give examples to illustrate the importance of appealing to people’s values and acknowledging their concerns, even when we believe they are wrong on the merits. I suggest that focusing on longtermism is likely to be ineffective outside the effective altruism community. Finally, I briefly consider whether the approach I advocate for here threatens to undermine public trust.

As I discuss below, people often overestimate the extent to which their views will prevail in an open debate about what should be done. Effective altruists steeped in academic moral philosophy may be particularly prone to using reason to resolve disagreements by persuading people on the merits, rather than trying to figure out how to make progress in the messy and imperfect world of democratic politics. I certainly could be wrong about this – I am not an expert on effective altruism, and no doubt effective altruists are aware of many of the problems I discuss. In any event my goal is not primarily to criticize effective altruism, which I think is a very positive development, but to flag a potential problem, and to indicate when it might arise and how it might be avoided. Readers can decide for themselves how plausible my suggestions are and when they are worth taking into account.

The psychological appeal of utilitarianism

For some people, utilitarianism is a good psychological fit. If you are inclined to help others, cosmopolitan in your outlook, have a natural sense of efficacy, and enjoy complicated, open-ended thinking about how to achieve your goals, utilitarian morality and analysis is not just a source of useful advice on what to do, it is a welcome source of moral encouragement and affirmation. It challenges you to live up to your values with psychological carrots. Effective altruism makes this moral validation more explicit by turning utilitarianism into a positive identity and source of pride. It offers you a way to understand yourself and your behavior: “I give 10% of my income to charity because I am an effective altruist”. The effective altruism movement also offers external praise, connectedness with others, and a sense of possibility and efficacy. For some people, effective altruism may play the same kind of grounding role that organized religion or fraternal organizations do. Utilitarianism, and especially effective altruism, is not just a set of academic ideas.

For many people, however, utilitarianism is not a natural fit. They may believe that people should generally look out for themselves. They may think of justice as a matter of fairness between groups, or morality as a set of somewhat rigid norms tailored to specific situations. They may be skeptical of consequentialist reasoning, or think that efforts to make the world a better place usually backfire. They may dislike the open-ended thinking and ambiguity inherent in utilitarian thinking about options and causality. For these people, utilitarian reasoning is unlikely to be persuasive; it may be off-putting or even threatening.

There are many useful things that effective altruists can do on their own, without winning over people who are not naturally inclined to utilitarian thinking. But there are many important problems that effective altruists cannot solve on their own, through private action. To achieve their goals, they will need to engage in politics and get legislation through Congress. To do this, they will need to build a coalition that includes many people who do not share their utilitarian orientation; appealing to natural utilitarians will not be enough. And this means thinking about persuasion.

It is easy to think that rational argument is the key to successful persuasion. If someone is wrong, we need to set them straight, and the way to do this is to explain why our view is correct. Academic proponents of utilitarianism tend to proceed on the assumption that there are better and worse ways of thinking, that the right way to make progress is to identify the best moral arguments, and then to share these arguments – which support utilitarianism, perhaps with some caveats – with others.

Careful thinking is important, but this view is both beguiling and wrong. As Friedrich Hayek noted in The Road to Serfdom, “we all think that our personal order of values is not merely personal, but that in a free discussion among rational people we would convince others that ours is the right one.” Academics may be particularly prone to overestimating the persuasive power of reasoned argument, since they tend to proceed on the assumption that ideas can be evaluated on their merits. And effective altruists may overestimate the power of utilitarian reasoning because it provides them with moral affirmation and a valued identity.

Reason, trust, and arguing about consequences

In The Methodology of Positive Economics, economist Milton Friedman suggested that disagreements over policy can reflect either differences in values or different beliefs about the consequences of policy choices, and that reasoning can help us resolve our disputes about consequences but not our disputes over values:

I venture the judgment, however, that currently in the Western world, and especially in the United States, differences about economic policy among disinterested citizens derive predominantly from different predictions about the economic consequences of taking action—differences that in principle can be eliminated by the progress of positive economics—rather than from fundamental differences in basic values, differences about which men can ultimately only fight.

There is surely some truth to Friedman’s view. Arguing about values can lead to intractable conflict, and differences due to disagreements about consequences can sometimes be resolved by clearing up empirical misunderstandings. But I will argue that Friedman’s influential account is too optimistic about the persuasive power of consequentialist reasoning and overlooks the critical role of trust. He is also unduly pessimistic about the persuasive power of moral arguments.

Let’s begin with consequentialist reasoning. Suppose you carefully review the evidence on policy A and conclude that it will promote the public welfare. I am sympathetic to utilitarianism, and you want to persuade me that I should support A. You do this by providing me with a detailed account of the evidence that supports A. This might work, if I am able and willing to sort through the evidence and validate your reasoning for myself. But if the policy is complicated, there is a good chance that I will not have the time or ability to validate your reasoning. In this case, your ability to persuade me will depend on whether I trust you, so that I can rely on your assurances that the policy will be beneficial. (Of course, there is a tradeoff here: the more I trust you, the less validation I need to do on my own, and vice-versa.)

Whether I trust you will depend in part on whether I believe we share values. If I believe that we do not share common values, I may suspect that you are using deceptive empirical claims to manipulate me into doing something that is valuable to you but not to me. In the political sphere, decisions about who to trust often turn on identity and partisanship.

If I can partially validate your reasoning, and am unsure how much I can trust you, my decision to support A may depend on my general beliefs about the efficacy of government action, the reliability of expert knowledge, and similar matters. If I tend to be skeptical of government and expert knowledge, then it is especially important that I trust your judgment and values.
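One rough way to make this tradeoff concrete is a toy calculation. To be clear, the functional form, the names (`credence`, `persuaded`), and the threshold are purely illustrative inventions of mine, not anything the model above specifies:

```python
# A toy sketch of the trust/validation tradeoff described above.
# Every functional form and parameter here is an illustrative
# assumption, not part of the essay's verbal model.

def credence(validation: float, trust: float) -> float:
    """Listener's confidence that policy A is beneficial.

    validation: fraction of the empirical argument the listener can
        check for themselves (0 to 1).
    trust: how far the listener relies on the speaker's assurances
        for the unchecked remainder (0 to 1).
    """
    # The checked portion counts fully; the remainder counts only
    # as far as the speaker is trusted.
    return validation + (1.0 - validation) * trust

def persuaded(validation: float, trust: float, threshold: float = 0.8) -> bool:
    """The listener supports A once credence clears their threshold.

    A listener who is skeptical of government action or expert
    knowledge can be modeled as having a higher threshold.
    """
    return credence(validation, trust) >= threshold

# The tradeoff: full validation needs no trust, and vice versa;
# partial validation with low trust falls short.
print(persuaded(validation=1.0, trust=0.0))  # expert listener
print(persuaded(validation=0.0, trust=0.9))  # trusting layperson
print(persuaded(validation=0.3, trust=0.3))  # partial check, low trust
```

On this sketch, raising the threshold (skepticism of government and experts) forces the speaker to supply more checkable evidence, more trust, or both, which is the point made in the preceding paragraph.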

So far, I have assumed that you try to persuade me that A is beneficial using the best available theoretical and empirical arguments. But you can also use arguments that are incomplete or exaggerated, that appeal to cognitive biases or common economic fallacies, or that are simply false. Of course, the possibility that you may do this may make me distrustful, but there is no doubt that incomplete or misleading arguments can sometimes be effective.

Persuasion and values: resolving conflict or eliding differences

Now let’s briefly consider the possibility that we disagree about A because of differences in values: I am not a committed utilitarian. In this case, Friedman seems to think that we cannot come to a reasoned agreement. Although arguing over values can lead to intractable conflict, reasoning about values is possible for several reasons. I will focus on one, viz., the fact that our moral views are often inconsistent and conflict with each other.

This fact – that our moral views are messy and often inconsistent – is often thought to provide a role for reasoning about values through the process that Rawls described as reflective equilibrium: we try to bring our moral principles and judgments about specific cases into some kind of order. In practice, this process often favors consequentialist reasoning of a utilitarian sort over rule-like precepts that we endorse, at least in situations where following the precept conflicts sharply with promoting welfare.

This kind of reasoning is sometimes effective. For example, when vaccine supplies are limited and a disease is spreading there can be a strong utilitarian case for splitting doses to vaccinate more people. However, regulators may be reluctant to authorize dose splitting in an emergency. Using reduced dosages violates ethical norms that focus on treating individual patients in a manner consistent with established standards of care. In addition, dose splitting may require the use of low doses that have not been tested and shown to be effective in a randomized controlled trial. This means that to evaluate and approve dose splitting regulators must be willing to rely on a kind of loose, Bayesian form of reasoning that considers experience with other vaccines, general knowledge about the immune system, experience with methods of vaccine administration that have not been tested for the particular vaccine, etc. Getting regulators to adopt this kind of loose, Bayesian reasoning may be difficult, given strong professional norms favoring the use of clinical trial data, but success seems possible over time. Arguments for using untested vaccine dosing strategies during the COVID pandemic failed in the United States, but that debate appears to have laid the groundwork for the use of partial doses of monkeypox vaccine.

The fact that our moral views are messy and inconsistent provides another avenue for persuasion, one that does not necessarily require me to consciously update my views. Rather than highlighting values that cause us to disagree, you can try to elide our differences by highlighting moral considerations of mine that happen to support your position.

The basic idea here is simple: if you want to persuade me, appeal to my values, not yours. If you want to persuade a social conservative to support abortion rights, talk about the dangers of government overreach. If you want to persuade a national security hawk who is not much concerned about climate change to support renewable energy, emphasize the national security benefits of energy independence.

This may seem like an obvious point, but it does not come naturally to people. As Feinberg and Willer put it:

Both liberals and conservatives typically craft arguments based on their own moral convictions rather than the convictions of the people they target for persuasion. As a result, these moral arguments tend to be unpersuasive, even offensive, to their recipients.

A better approach is to use arguments that appeal to the values of those you need to persuade. This is sometimes called “moral reframing”:

The technique of moral reframing—whereby a position an individual would not normally support is framed in a way that is consistent with that individual’s moral values—can be an effective means for political communication and persuasion.

A recent study by Kalla, Levine, and Broockman suggests that moral reframing can be effective at increasing support for abortion rights.

This kind of persuasion has the drawback that it does not correct the error you perceive in my (nonutilitarian) moral thinking. The advantage is that it may be able to build a political coalition that can move legislation through Congress.

An example: climate change

Let’s quickly look at some examples that are relevant to effective altruism.

Climate change is an extremely serious problem, and strong government action to facilitate a transition to green energy is clearly justified. However, marshaling support for legislation has been difficult, and utilitarian reasoning is unlikely to break the logjam. To fix ideas, suppose you review the evidence and decide that the best policy for fighting climate change is a carbon tax to encourage decarbonization coupled with a reduction in taxes on capital to foster investment and economic growth. There are obviously many reasons why such a policy will not pass:

  • Many people are not willing to accept a significant reduction in their standard of living to pay for a clean energy transition. Even though a carbon tax is (you believe) the least costly way to encourage a transition to green energy, carbon taxes are politically unsustainable because they lead to a visible increase in energy prices.

  • Some people oppose carbon taxes for moral reasons (“people should not be allowed to pay to pollute”) or because they do not understand the economic logic behind them and doubt they will be effective.

  • Some people oppose taxes on general ideological grounds, or because they fear government will simply waste the additional revenue collected.

  • Powerful interest groups oppose efforts to limit fossil fuel use.

  • Climate policy has become highly polarized, with many Republicans unwilling to acknowledge the problem or accept government action to reduce greenhouse gas emissions.

Policy analysts, academics, elected officials, and activists have tried to build a coalition for reform by taking these various constraints into account, not by trying to persuade voters and legislators of the economic merits of carbon taxation (or the benefits of cuts in investment taxes).

Proponents of carbon taxes often propose tying a carbon tax to a per capita rebate of tax revenues, to forestall a possible backlash against higher energy prices and to deal with the fact that many do not trust government to use the new revenue efficiently.

To the best of my knowledge using carbon tax revenue to cut investment taxes has never been seriously entertained outside of academic books and journals. Economists can try to educate people about tax incidence and growth, but most people cannot validate these arguments and are suspicious of economists, and many would suspect that arguments about the efficiency of investment tax cuts are just a smokescreen for upward redistribution.

Most proponents of climate action have given up on taxes entirely and advocate for some combination of direct regulation, subsidies, and government procurement to reduce carbon emissions. The main virtue of these policies is that they hide the costs of transitioning to clean technologies. In addition, many people probably have more confidence in direct regulation than indirect incentives.

Advocates and politicians typically describe the effects of climate policy in terms that ordinary people find appealing but that are (arguably) misleading. For example, they describe climate policy using the evocative but inconsistent metaphors of a Green New Deal (which suggests massive economic slack and the possibility that large numbers of people can be put to work on a green transition with little cost) and a World War II mobilization (which suggests severe supply constraints and the need for large sacrifices).

Economic policy analysis certainly plays a role in developing proposals, but workable proposals need to take political constraints into account (as Sachs does here). At the very least this means trying to assure legislators that a carbon tax will not lead to a backlash by coupling a tax with a very visible rebate. But so far it has meant primarily structuring policies so the costs are hard for voters to discern, not reasoning with voters about the economics of carbon taxation.

Finally, although the benefits of preventing climate change will primarily accrue in the future and to people in poor countries, there is no reason to think that arguing for longtermism or emphasizing the plight of people in developing nations will be effective in motivating more aggressive action on climate. If we are unwilling to stop global warming for our own sake in 25 or 50 years, or for the sake of our children and grandchildren, or to prevent a mass extinction, emphasizing benefits to people who will live thousands of years in the future seems unlikely to change the balance of political forces.

Some additional examples:

Suppose you review the available evidence and conclude that utilitarian justice requires a universal child allowance without a work requirement or means testing. You begin advocating publicly for a universal child allowance. You develop a proposal, write op-eds, and lobby politicians. You make progress, and your allies in Congress introduce a proposal for a universal child allowance. It turns out, however, that many people believe in work requirements, and their concerns are threatening to derail your proposed legislation.

You can try to talk them out of a work requirement by making empirical arguments (the effect on work will be limited). But suppose this doesn’t work. You may be able to build a coalition for a time-limited child allowance by appealing to conservatives who believe in traditional family arrangements in which women stay home and care for young children. Samuel Hammond proposed a compromise along these lines to salvage the recent Democratic effort to create a permanent child allowance. My point here is simply that an approach to moral persuasion that uses reason to identify the requirements of ideal justice can lead us to miss opportunities to compromise and get half a loaf. Something like this may have happened with the Democrats’ proposal for a child tax credit, with advocacy groups pressing Senators to reject compromise.

Suppose that you want to substantially increase government aid to the global poor. A pitch based on our utilitarian obligations to help the poor might not be effective with Americans who feel that the government is not doing much for people like them and do not feel much kinship with poor people in Africa or Asia. Appealing to self-interest or patriotism by talking about the benefits of soft power, the need for international support in our competition with China, or the importance of international public health in an interconnected world might be more effective. In the case of foreign aid and domestic welfare spending, working to dispel false beliefs about how much we spend on helping the poor might also be effective (most people greatly overestimate the fraction of the budget that goes to foreign aid and welfare). Of course, it is possible that nothing will work now, and that a purely moral appeal might work during a period of rapid economic growth and good feeling. My point is simply that proponents of increased aid should not limit themselves to pure utilitarian arguments.

Finally, when we suspect that people have motivations that we think are objectionable, it is often tempting to attack the offensive motives directly. Suppose that you think that opposition to a child tax credit or to income support for poor families is rooted in racial animosity. There is plenty of reason to think that racial animosity does reduce support for welfare state policies. If utilitarianism is true, racial animosity is hard to justify; people should put it aside when they think about helping those who are disadvantaged. Yet accusing people of racism, or even asking them to reflect on whether their opposition is due to racial bias, may not be very effective in this situation. It may backfire by racializing the debate, or it may offend and drive away people who do not think of themselves as racist but whose support might be winnable. The best response in this situation may be to ignore the questionable motives and to focus on reasons for helping poor families that may be able to win over some people who harbor some degree of racial animosity. A similar point can be made about charges of misogyny in the current debate over abortion rights: some proponents of extreme restrictions seem to be misogynistic, and these attitudes are wrong, but it is unclear that much is gained by making this charge in political debate where it might drive away potential supporters of abortion rights who are conservative on women’s issues. I am not claiming that it is always best to avoid criticizing people for their bad motives; sometimes this is essential. My point is that this is a choice that needs to be carefully evaluated.

Longtermism, catastrophe risk, and identity politics

Effective altruists are absolutely right to insist that we need to do more to avoid catastrophes like nuclear war, climate change, malevolent AI, pandemics, and democratic collapse. Longtermism seems able to account for our duty to address these risks more forcefully. Yet longtermism seems debatable as a matter of ideal philosophy, and at least potentially counterproductive as an approach to political advocacy.

I agree that we have a duty to take catastrophic risks far more seriously than we do. It is not obvious, however, that this requirement should be understood as following from a general duty to maximize the welfare of all people who will ever live. A duty to maximize long run welfare could require us to make enormous sacrifices for the sake of future people who will almost certainly be much better off than we are, and who may well be far better off than we can even imagine, provided we do not leave them a nuclear wasteland. In all likelihood, far future people will look back on us with unspeakable pity, if they bother to think of us at all. They will not blame us for not sacrificing more on their behalf, just as we never look back and think that people toiling in factories during the industrial revolution should have worked even harder to make the world better for us.

Thus a reasonable case can be made that we owe people in the distant future a habitable and free world, but it is not clear that we have a general duty to weigh their welfare equally with ours. How we should think about this is unclear, at least to me. Prioritarianism might explain why we have an obligation to avoid despoiling the earth, but not an obligation to maximize the welfare of trillions of future people who will be better off than we are (assuming we do not despoil). Alternatively, perhaps our moral obligations are stronger towards people we are in a cooperative relationship with. These philosophical issues, and others like them, are above my pay grade, but the case for longtermism is far from self-evident.

But is longtermism rhetorically useful for getting us to do more about catastrophe risk? Effective altruists seem to have been at least somewhat successful at raising awareness of catastrophic risk among people who pay attention to policy debates. They have also inspired people to devote time and money to reducing catastrophe risk, and that is all to the good. If longtermism helped to inspire this, then that counts in its favor.

Here I want to flag one limitation and one potential risk of emphasizing longtermism. The limitation is the one I mentioned regarding climate change: it seems doubtful that emphasizing gains to people in the distant future will overcome the political forces of distrust, self-interest, and partisan obstruction that stop us from taking catastrophic risks more seriously. If we are unlikely to devote sufficient resources and effort to preventing an accidental nuclear war for the sake of ourselves and our children, it seems unlikely that moral appeals to longtermism will help.

The risk has to do with polarization and identity politics. On the landscape of American politics, effective altruism is aligned with the Democrats in at least two senses. First, effective altruists are generally optimistic about the prospects for conscious action to improve the human lot. Although they often focus on private giving, they also (generally) take a positive, constructive view of the role of government. Like Democrats, they want the government to address serious problems. Second, effective altruists share a cultural affinity with the highly educated liberals who dominate the Democratic coalition. Effective altruists tend to be highly educated, cosmopolitan, and self-critical in their thinking. They tend to embrace change and reject the authority of tradition. It seems to me that longtermism adds to the perception that effective altruists are culturally distant from most Americans.

I am concerned that in our polarized state the political and cultural valence of effective altruism may undercut its influence. More speculatively, there is a risk that when effective altruists bring an issue to public attention and advocate for it using arguments that have a clear partisan valence and that many will find culturally alien, the issue may get tagged as liberal and mired in partisan conflict. Certainly, progress on some issues effective altruists care about has been derailed by partisan polarization, including climate change and even pandemic response preparation. Effective altruists are not responsible for the current dysfunction in American politics, but they need to operate within it.

Trust, reason, and democracy

Even if utilitarian reasoning is justified on the merits, it can backfire in politics. Politics is not a seminar in political morality or public policy, and it is potentially self-defeating to pretend otherwise. But surely this does not give us carte blanche to argue in an opportunistic way, saying whatever we think is most likely to bring about good outcomes on a case-by-case basis. Misrepresenting our reasons can undermine trust, and it may seem to be objectionably manipulative.

I cannot address these issues in detail here, but I want to comment briefly on trust and effective altruism.

Using completely opportunistic reasoning to get our preferred outcomes on a short-term basis can indeed be self-defeating over time because it reduces trust. We have reasons to avoid misrepresenting our values or empirical beliefs, both to preserve our own reputations, and to contribute to trust as a social good.

It is important, however, to put this problem in perspective. Unconstrained misrepresentation of values and beliefs by utilitarians can undermine trust. However, there is little reason to think that the persuasive strategies I have described in this essay will have this effect. There is no reason utilitarians cannot appeal to the values of (say) social conservatives when there is room for compromise; it is plausible to believe that these types of appeals promote trust. And insisting on policies that seem desirable on obscure, technocratic utilitarian grounds can undermine the credibility of utilitarians. This is a perennial problem for economists. Efforts to educate voters and politicians are fine, but at the end of the day fighting public opinion can undermine trust.

I should emphasize, finally, that lack of trust in democratic institutions is a serious problem, and I believe this is an issue that effective altruists will need to grapple with. The problems effective altruists care about are important and many can only be addressed through government action. Figuring out how to promote a modest degree of trust in government and how to create conditions in which that trust is justified is a critical challenge.
