Why EA needs to be more conservative

(Note: this is not about recent events. At least, not directly.)

Introduction and summary

I recently read two interesting articles that identify problems that afflict Effective Altruism (EA) when it works at the kind of large scales that it is commonly intended (and probably obliged) to consider. Those problems are (a) a potentially paralysing “cluelessness” resulting from uncertainty about outcomes and (b) a “crazy train” to absurd beliefs from which it seems impossible to escape. I argue below that mental models or habits developed by conservatives provide workable answers to these two problems. I would also suggest that this is indicative of a wider point, namely that EA can benefit from the accumulated wisdom of conservatism, which has experience of operating over the kinds of scale and time periods with which EA is now concerned, not least if EA-ers want to achieve or maintain widespread acceptability for their beliefs and practices.

Tyler Cowen has recently recommended that EA-ers should exercise more social conservatism in their private lives. No doubt Cowen is right, but that is not my concern here. I am concerned in this article only with the professional lives of EA-ers.

While I think, as many people do, that EA should spend more time just getting on with making the world better and less time considering science fiction scenarios and philosophical thought experiments, I make no apology for adding to the ‘theology’ rather than the practice of EA in this article. (If you want my suggestion for a practical project then consider the domestication of the zebra here.) One way or another, EA is getting publicity and with that comes scrutiny: my argument is intended to help EA survive that scrutiny (and reduce the potential for ridicule) by giving it some tried and tested techniques for converting good intentions into action.

The problems

The two articles I am concerned with are Peter McLaughlin’s article “Getting on a different train: can Effective Altruism avoid collapsing into absurdity?” and Hilary Greaves’ paper “Cluelessness”. I suggest that you read both of them if you are at all interested in these topics: they make their arguments in good and interesting ways, and both touch on areas outside the scope of this essay. For present purposes, a brief summary of the concerns raised by each will suffice.

“Cluelessness” identifies a particular worry that people engaged in EA face when wondering whether a particular project under consideration will result in net good or net evil. Greaves suggests that the EA practitioner will be “clueless”, in an important sense, when facing (to take one of the author’s examples) “the arguments for and against the claim that the direct health benefits of effective altruists’ interventions in the end outweigh any disadvantages that accrue via diminished political activity on the part of citizens in recipient countries”. There are various consequences – perfectly foreseeable in kind – that could follow from the various alternatives before us, and we just don’t know which of them will in fact result: there are good and plausible arguments for all outcomes. But the kinds of big decisions that EA seems to entail (and tends to involve) are ones that typically face uncertainty of this kind. Are we “clueless” as to what to do?

“Getting on a different train” looks at the kinds of problems that EA faces when dealing with problems of very large scale. One aspect of EA that is appealing to many people, whatever their moral beliefs, is that it opens up the possibility of doing a lot of good. You can save thousands of lives! Whatever your particular religious or moral beliefs, you are likely to think that’s a good thing to do. But if saving thousands of lives is better than saving hundreds of lives (which it is) then surely saving millions or billions of lives will be even better? Which means that, if we are being serious about doing good, we need to think about very large numbers and the very long term. But once you start thinking in those terms then you quickly find yourself on the ‘train to crazy town’, i.e. endorsing bizarre or repugnant moral conclusions. (You know the sort of thing: you are morally compelled to torture someone to death if there is a small but non-zero chance that this would do some enormous good for enormous numbers of people as yet unborn.) But if you try to escape the train to crazy town then what principles do you have? Don’t you have to abandon the project of doing good for large numbers of people? Aren’t you forced to discount some people, to reintroduce myopias or biases you tried so hard to shed?

Why these are problems of scale

The two articles, which I will call “Cluelessness” and “Crazy Train”, both take the approach that we do not need to pin down our theory of morality very precisely: we can simply agree that there are better and worse states of affairs and that it is generally morally preferable to bring about the better states of affairs. Greaves says that our decisions “should be guided at least in part by considerations of the consequences that would result from the various available actions”, while McLaughlin refers to “broadly consequentialist-style” reasoning. We can probably simply think in terms of utilitarianism.

At a small scale, pure “all consequences matter” utilitarianism is simply not an intuitive moral approach. The reason for that might be summarised as follows: we have an intuitive sense of the consequences of our actions for which we are responsible and those for which we are not responsible. Doctors don’t have to consider whether it would be better, all things considered, if their patients recovered or if their organs were instead used for donation: in fact, they should not even ask themselves that question. Their job – their responsibility – is simply to make their patients better. Lawyers should not try to work out whether it would be in the best interests of everyone if they ‘threw the case’ and let their clients get convicted. That’s how common-sense morality works, and that’s how common-sense morality is (generally) codified in law.

Or let’s take an even more homely example. If you have just bought an ice cream on a hot summer’s day then you don’t have to look around at all the people within “melting distance” of the ice cream vendor and work out which would get the most pleasure from the ice cream: just give it to your child, who is your responsibility, and let everyone else buy their own ice creams. Only weirdos are utilitarian in their private lives. So much the better for weirdos, you might say. But if EA wants to gain wider approval then I suggest that it steers away from encouraging people to give ice creams to strangers’ children.

None of these examples is a knock-down argument against utilitarian or consequentialist, maximising-type thinking. My point is simply to say that common sense morality finds such thinking repugnant at this scale. (And this is true even if you try to ‘reverse-engineer’ common sense morality via utilitarian-type chains of reasoning: Bernard Williams famously suggested that the man who is faced with the choice of saving his drowning wife or a stranger is not only justified in saving his wife, but should do so with no thought more sophisticated than “that’s my wife!” because thinking, for example, “that’s my wife and in such situations it’s permissible to save one’s wife” would be one thought too many. The man who is presented with his drowning wife and starts thinking “it is optimal for society overall if individuals are able to make binding, life-long commitments to each other and such commitments can be credible only if …” will end up having about thirty-seven thoughts too many.)

But when we change the scale then our intuitions also change. If the average person were forced to govern a small country then they would almost inevitably approach the questions that arise by applying broadly utilitarian thinking. Should the new hospital go here or there? Should taxes on X be raised or taxes on Y lowered? People automatically reach for some kind of consequentialist, utilitarian calculus to answer these questions: they look to familiar proxies for utility (GDP, quality-adjusted life years, life expectancy, satisfaction surveys and so on). Common sense morality might suggest a couple of non-utilitarian considerations (would the glory of the nation be best served by a new museum? What if my predecessor has promised that the hospital should go here even though it would save more lives if it went there?) but, even so, no one would be shocked to subject even these kinds of consideration to utilitarian scrutiny, certainly not in the way that they are shocked when considering (for example) whether doctors should slaughter innocent hospital visitors in order to provide healthy organs for others.

I would suggest that the difference derives from the same underlying sense of responsibility. The doctor, parent or lawyer can, quite properly, say “I am only responsible for outcomes of a certain kind, or for the welfare of certain people. The fact that my actions might cause worse outcomes on other metrics or harm to other people is simply not my responsibility.” But the government of a country cannot say that. If our notional ruler decides to place a hospital here, in a vibrant city, because it would be easier to recruit and retain medical staff than if the hospital were there, they cannot reply, when pressed with the problem that the roads here are congested and too many people will die in ambulances before they reach the hospital, “not my problem – I was only looking at the recruitment side of things”. The ruler is responsible for everything – all the consequences – and can’t wash their hands of some subset of them. (I leave aside the case of foreigners: the ruler is responsible for their own country, not its neighbours.)

In Crazy Train, McLaughlin quotes a suggestion from Tyler Cowen that utilitarianism is only good as a “mid-scale” theory, i.e. the small country scale I have described, the scale between, on the one hand, the small, personal level of doctors, lawyers and ice cream vans and, on the other, the mega-scale beloved of these kinds of theoretical discussions, consisting of trillions of future humans spread across the galaxy. Samuel Hammond makes a similar point here. Perhaps that’s right, perhaps not. But the fact is that EA is interested in operating at this mid-level; we are talking about the kinds of things that states do: public health projects, for example, or criminal justice reform, or just plain crime reduction. That means that EA needs to deal with cluelessness and craziness at this kind of scale.

Solutions

As I said, I think taking a leaf out of conservatives’ book can help here. What do I mean? Let’s think again about running a small country.

In broad terms, the left-wing view of the politics of running a country is that it consists of two parts: (1) raising money through taxation and (2) spending money on government programmes. The taxation part is a mixture of good and bad: there are some taxes that are bad (ones paid by poor people or ones that cause resentment) but there are some that are good (ones that reduce inequality, say, or ones that penalise anti-social activities like polluting or smoking). Overall, the taxing side of things might be about neutral. But the spending side is positive: each time the government spends money on X then it helps X.

That’s a simple and pleasant model to believe in. It leaves room for some tricky decisions (should high earners be taxed more or should smokers? Should more money go on welfare or education?), but the universe of those decisions is constrained and familiar.

The conservative model is more complicated. First, and most familiar to left-wingers, the taxing side is more fraught; taxation might best be summarised as a necessary evil that should be minimised. But more significantly for present purposes, the spending side is also fraught: it is by no means obvious to right-wingers that when the government spends money on X then it helps X. It might instead be making X worse off, if not now then in the long run, or – and even more confusingly – it might be making Y worse off.

I think that left-wingers often see these kinds of right-wing objections to government programmes as being objections made in bad faith: for example, “you don’t want to see welfare payments to single mothers because, really, you just don’t like single mothers”. But the “cluelessness” that Greaves refers to might, I hope, explain why this is not so. Government spending on a social evil in some sense ‘rewards’ that evil and might encourage more of it; or it might (to take Greaves’ own example) diminish the ability of people to solve the problem themselves.

That means that conservatives are perpetually “clueless” in just the way Greaves describes. They can see both benefits and dangers from spending government-sized amounts of money on government-style programmes. And much of EA, at least to the conservative, looks like government-style programmes—specifically the kind of foreign aid or international development projects on which Western governments have lavished billions over many years and about which conservatives are typically very sceptical. Why, the conservative asks, should we believe that EA practitioners will achieve outcomes so much better than those of the myriad of well-meaning aid workers who have come before? Are the ‘Bright Young Things’ of EA really so much cleverer, or better-intentioned, or knowledgeable than, say, the Scandinavian or Canadian international development establishments?

Yet conservatives do support some government spending programmes. Indeed, a notable feature of the intellectual development of the Right in English-speaking countries since c.2008 is its increased willingness to support government spending programmes. How do they do it?

In short: by exercising the old and well-established virtues of prudence, good judgment and statesmanship.

I’m afraid that sounds vague and unhelpful, and a far cry from the kind of quantitative, data-driven, rapidly scalable maximising decision-making processes that EA practitioners would like. But it’s true. These virtues are the best tools that humans have yet found for navigating the cluelessness inherent in making big decisions that affect the future. If you are not cultivating and endorsing these virtues then you are not thinking seriously about how to run something resembling a government-sized operation.

Now let’s turn to Crazy Train. What contribution can people as notoriously averse to strict rules and principles as conservatives make to the problem of drawing theoretical distinctions? Quite a useful one, I would suggest: prioritising the avoidance of negative outcomes.

The virtue of statesmanship is perhaps best exemplified in practice by the likes of Abraham Lincoln or Winston Churchill. Each of them left office with his country in a bad way. Churchill lost the 1945 election, at which time the country he led was still at war, impoverished and bombed. Lincoln was assassinated at a time when the country he led was devastated by civil war. Neither of them has the obvious record of a utility-maximiser. Yet they are renowned because their actions contributed to the avoidance of far greater evils.

There are other examples too. Nelson Mandela’s great achievement was not bringing majority rule to South Africa: that was F. W. de Klerk’s achievement. Mandela’s achievement was to ensure that the process was peaceful – it was his magnanimity in victory, his avoidance of vindictiveness and violence, that earned him his plaudits. Pitt the Younger was lauded as the ‘Pilot that weathered the storm’, the man who, although he died with Napoleonic France not yet defeated, had ensured that England would not fall to the threat it presented.

I would suggest that this asymmetry between the value we ascribe to achieving positive outcomes and the value we ascribe to avoiding negative ones is a feature, not a bug. It is how we have found the greatest heroes of the largest-scale social projects we have yet seen, and it is our best chance of finding more such heroes in the future.

Or let me put it another way. Perhaps, as Toby Ord has suggested, we are walking along the edge of a precipice which, if badly traversed, will lead to disaster for humankind. What kind of approach is the right one to take to carrying out such an endeavour? Surely there is only one answer: a conservative approach. One that prioritises good judgment, caution and prudence; one that values avoiding negative outcomes well above achieving positive ones. Moreover, not only would such an approach be sensible in its own terms, but it would also help EA to acquire the kind of popular support that would help it achieve its outcomes.

We have to be realistic here. EA likes to talk about helping all of the sentient beings that will ever exist, but it’s a human institution, likely to fall far short of its aims and fail in the ways that other human institutions have done. But that is no reason to be downhearted: with statesmanship, cautious good judgment and a keen aversion to negative outcomes, a lot of good can yet be done. If EA were to vanish from the face of the Earth having done no more than prevent humankind from being eliminated in the next 100 years then it would earn the gratitude of billions and rank among our greatest achievements. A deeply conservative achievement of that kind would be truly admirable. Achieving it would be effective altruism of the best kind.