Getting on a different train: can Effective Altruism avoid collapsing into absurdity?

The train to crazy town

Introduction

Sam Atis, following on from some arguments by Scott Alexander, writes of ‘the train to Crazy Town’.[1] As Sam presents it, there are a series of escalating challenges to utilitarian-style (and more broadly consequentialist-style) reasoning, leading further and further into absurdity. Sam himself bites the bullet on some classic cases, like the transplant problem and the repugnant conclusion, but is put off by some more difficult examples:

Thomas Hurka’s St Petersburg Paradox: Suppose you are offered a deal—you can press a button that has a 51% chance of creating a new world and doubling the total amount of utility, but a 49% chance of destroying the world and all utility in existence. If you want to maximise total expected utility, you ought to press the button—pressing the button has positive expected value. But the problem comes when you are asked whether you want to press the button again and again and again—at each point, the person trying to maximise expected utility ought to agree to press the button, but of course, eventually they will destroy everything.[2]
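To see the arithmetic behind the paradox, here is a minimal sketch with illustrative numbers of my own (none of this is from Hurka or from Sam’s post): each press multiplies expected utility by 1.02, while the chance that anything at all survives shrinks geometrically.

```python
# Toy model of the St Petersburg-style button (illustrative numbers only).
# Each press: 51% chance of doubling total utility, 49% chance of losing everything.

P_WIN = 0.51

def expected_utility(n_presses: int, start: float = 1.0) -> float:
    """Expected utility after n presses: multiplied by 0.51 * 2 = 1.02 each time."""
    return start * (P_WIN * 2) ** n_presses

def survival_probability(n_presses: int) -> float:
    """Probability that any utility is left at all after n presses."""
    return P_WIN ** n_presses

for n in (1, 10, 100, 1000):
    print(n, expected_utility(n), survival_probability(n))
# Expected utility grows without bound (1.02^n), but the chance of avoiding
# destruction falls towards zero (0.51^n) -- the expected-utility maximiser
# presses forever and almost surely ends up with nothing.
```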

The Very Repugnant Conclusion: Once [utilitarians] assign some positive value, however small, to the creation of each person who has a weak preference for leading her life rather than no life, then how can they stop short of saying that some large number of such lives can compensate for the creation of lots of dreadful lives, lives in pain and torture that nobody would want to live?[3]

Most people, like Sam, try to get off the train before they reach the end of the line, preserving some but not all of utilitarianism. And so the question is: how far are you willing to go?

As a list of challenges to utilitarianism, I think Sam’s post is lacking: he is very focussed on specific thought experiments, ignoring more theoretical objections that in my view are much more insightful.[4] But as a provocation—how far will you go in your utilitarianism?—I think it’s an extraordinarily useful post. Sam takes it upon himself to pose the following question: ‘what are the principles by which we should decide when to get off the train?’

But something worries me about Sam’s presentation. Who said you could actually get off the train to crazy town? Each additional challenge to utilitarian logic—each stop on the route—does not seem to assume any new premises: every problem is generated by the same basic starting point of impartially weighing up all people’s experiences and preferences against each other. As such, there just might not be any principles you can use to justify biting this bullet but not that bullet—doing so might even be logically incoherent. Sam says that he does want to get off the train eventually, but it’s not clear how he would do that.

Alexander’s original post suggests something closer to this view. He explains why he dislikes the Repugnant Conclusion and various other situations in which ‘longtermist’-style consequentialism goes awry, and then lays out the position he would take to avoid these conclusions ‘[i]f I had to play the philosophy game’. But, he writes, ‘I’m not sure I want to play the philosophy game.’ He’s not confident that his own partial theoretical compromise actually will avoid the absurdity (and rightly so), and he cares more about avoiding absurdity than he does about getting on the train.

Mainstream commentators from outside Effective Altruism have made this point too. Stephen Bush, a Financial Times journalist whose political analysis I respect a lot, reviewed William MacAskill’s book What We Owe the Future a few weeks ago; his primary criticism was that the whole book seemed to be trying to ‘sell’ readers ‘on a thought experiment that even its author doesn’t wholly endorse’. As an example, Bush notes that arguments in the book straightforwardly entail that ‘my choice to remain childless so that my partner and I can fritter away our disposable income’ is immoral. Officially, MacAskill swears off this implication; but this just looks like he ‘explicitly sets out the case for children and then goes “now, of course I’m not saying that”’. MacAskill tries to avoid these inferences, but it is entirely reasonable for someone like Bush to look at where MacAskill’s logic is heading, decide they don’t like it, and reject the whole approach.

In other words, it’s not clear how anyone actually gets off the train to crazy town. Once you allow even a little bit of utilitarianism in, the unpalatable consequences follow immediately. The train might be an express service: once the doors close behind you, you can’t get off until the end of the line.

I want to get off Mr Bones’ Wild Ride

This seems like a pretty significant problem for Effective Altruists. Effective Altruists seemingly want to use a certain kind of utilitarian (or more generally consequentialist) logic for decision-making in a lot of situations; but at the same time, the Effective Altruism movement aims to be broader than pure consequentialism, encompassing a wider range of people and allowing for uncertainty about ethics. As Richard Y. Chappell has put it, they want ‘utilitarianism minus the controversial bits’. Yet it’s not immediately clear how the models and decision-procedures used by Effective Altruists can consistently avoid any of the problems for utilitarianism: as examples above illustrate, it’s entirely possible that even the simplest utilitarian premises can lead to seriously difficult conclusions.

Maybe a few committed ‘EAs’ will bite the relevant bullets. But not everyone will, and this could potentially create bad epistemic foundations for Effective Altruism (if people end up being told to accept premises without worrying about their conclusions) as well as poor social dynamics (as with Alexander coming to regard his own position as ‘anti-intellectual’). And beyond the community itself, the general public isn’t stupid: if they can see that this is where Effective Altruist logic leads, they might simply avoid it. This could significantly hamper the effectiveness of Effective Altruist advocacy. In this post, I want to ask if this impression—that Effective Altruism can’t consistently get off the train to crazy town—is correct, and what it might mean if it is.

An impossibility result

Above, I introduced my main idea in an intuitive way, showing how a bunch of different writers have come to similar conclusions. I now want to try to be a bit more formal about it. This section draws heavily on a 1996 article by Tyler Cowen, which is called ‘What Do We Learn from the Repugnant Conclusion?’ but which is much broader in scope than the title would suggest. Cowen is a well-known thinker within Effective Altruism, and Effective Altruists are often interested in population ethics and the repugnant conclusion, but this article and Cowen’s other writings on population ethics (with the exception of the piece he co-authored with Derek Parfit on discounting) seem relatively unknown in these spaces.

Cowen’s argument is similar on the surface to ‘impossibility theorems’ in the population ethics literature, which prove that we cannot coherently combine all the things we intuitively want from a theory of population ethics. But on a deeper level, it’s quite different: it’s about problems for moral theories in general, not just population ethics. In particular, Cowen is discussing moral theories with ‘universal domain’, which just means systematic theories that are able to compare any two possible options. This is as opposed to moral particularism, which opposes the use of general principles in moral thought and favours individual case-by-case judgments, and especially to value pluralism, which is committed to there being multiple incommensurable values and thus insists that sometimes different situations are morally incomparable.[5] Theories with universal domain include almost all consequentialist and deontological theories, as well as some forms of virtue ethics: these are all committed to comparing and weighing up different values (whether by reducing them all to a single overarching value, or by treating them as separate but commensurable ends-in-themselves), and can systematically evaluate all possible options.

Cowen restricts his attention to theories where one of the values that matters for ranking outcomes is utility (whether in its preference-satisfaction, hedonist, or welfarist guises).[6] It’s not clear that this is strictly necessary—at least some theories that ignore utility, though not necessarily all of them, face similar problems anyway—but Cowen focusses on utility for simplicity and clarity. Importantly, this doesn’t overly limit our focus: Cowen’s condition includes all moral views that might support Effective Altruism, including non-consequentialist theories that include an account of impartial, aggregative do-gooding as well as pure consequentialism.[7]

So, the problem is this. Effective Altruism wants to be able to say that things other than utility matter—not just in the sense that they have some moral weight, but in the sense that they can actually be relevant to deciding what to do, not just swamped by utility calculations. Cowen makes the condition more precise, identifying it as the denial of the following claim: given two options, no matter how other morally-relevant factors are distributed between the options, you can always find a distribution of utility such that the option with the larger amount of utility is better. The hope that you can have ‘utilitarianism minus the controversial bits’ relies on denying precisely this claim.
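It may help to state the claim being denied a little more formally. In rough notation of my own (not Cowen’s), writing $\succ$ for ‘is better overall than’, the claim is:

$$\forall A, B,\ \forall \text{ (assignments of the other morally relevant factors)},\ \exists\, u_A, u_B \text{ such that } u_B > u_A \text{ and } B \succ A.$$

The hope of ‘utilitarianism minus the controversial bits’ is the negation of this: for at least one pair of options and one assignment of the other factors, no amount of utility assigned to B is enough to make B better than A.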

This condition doesn’t aim to make utility irrelevant, such that utilitarian considerations should never change your mind or shift your perspective: it just requires that they can be restrained, with utility co-existing with other valuable ends. It guarantees that utility won’t automatically swamp other factors, like partiality towards family and friends, or personal values, or self-interest, or respect for rights, or even suffering (as in the Very Repugnant Conclusion). This would allow us to respect our intuitions when they conflict with utility, which is just what it means to be able to get off the train to crazy town.

Now, at the same time, Effective Altruists also want to emphasise the relevance of scale to moral decision-making. The central insight of early Effective Altruists was to resist scope insensitivity and to begin systematically examining the numbers involved in various issues. ‘Longtermist’ Effective Altruists are deeply motivated by the idea that ‘the future is vast’: the huge numbers of future people that could potentially exist gives us a lot of reason to try to make the future better. The fact that some interventions produce so much more utility—do so much more good—than others is one of the main grounds for prioritising them. So while it would technically be a solution to our problem to declare (e.g.) that considerations of utility become effectively irrelevant once the numbers get too big, that would be unacceptable to Effective Altruists. Scale matters in Effective Altruism (rightly so, I would say!), and it doesn’t just stop mattering after some point.

So, what other options are there? Well, this is where Cowen’s paper comes in: it turns out, there are none. For any moral theory with universal domain where utility matters at all, either the marginal value of utility diminishes rapidly (asymptotically) towards zero, or considerations of utility come to swamp all other values. The formal reasoning behind this impossibility result can be found in the appendix to Cowen’s paper; it relies on certain order properties of the real numbers. But the same general argument can be made without any mathematical technicality.

Consider two options, A and B,[8] where A is arbitrarily preferable to B along all dimensions—except, possibly, utility. Now imagine we can continually increase the amount of utility in option B, while keeping everything else fixed. At some point in this process, one of two things must occur:

  • These increases in utility eventually become so large in aggregate that their value swamps the value of everything else, and B becomes preferable to A on utility grounds alone.

  • Each additional unit of utility begins to ‘matter’ less and less, with the marginal value of utility diminishing rapidly to zero, such that A remains preferable to B no matter how much utility is added.

This is where the paradoxes that Sam discusses come in: they are concrete examples of exactly this sort of case. Take the Very Repugnant Conclusion as an example. You start with world A, containing a small population who live in bliss with all the good things in life, and world B, containing solely a huge quantity of suffering with nothing else of value. You then begin adding additional utility into world B, in the form of additional lives with imperceptibly positive value—one brief minute of ‘muzak and potatoes’ each. And then the inevitable problem: either the value of all these additional lives, added together, eventually swamps the negative value of suffering; or, eventually, the marginal value of an additional life becomes infinitesimally small, such that there is no number of lives you could add to make B better than A.
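A toy model may make the dilemma concrete. The numbers below are entirely made up for illustration (they are not Cowen’s, and nothing turns on them); the point is only to exhibit the two horns, an unbounded aggregation rule and a bounded one.

```python
import math

# Toy illustration of the dilemma above (all numbers invented for the example).
# World A: value 100 from its non-utility goods. World B: a fixed disvalue of
# -1000 (the suffering), plus n barely-worth-living lives at +0.001 each.

A_VALUE = 100.0
B_SUFFERING = -1000.0
EPSILON = 0.001  # value of one brief life of 'muzak and potatoes'

def value_B_unbounded(n_lives: int) -> float:
    # Horn 1: utilities simply add up, so enough lives always swamp the suffering.
    return B_SUFFERING + EPSILON * n_lives

def value_B_bounded(n_lives: int) -> float:
    # Horn 2: the marginal value of extra utility decays towards zero (capped
    # here at +50), so no number of lives ever closes the gap with A.
    return B_SUFFERING + 50.0 * (1 - math.exp(-EPSILON * n_lives / 50.0))

print(value_B_unbounded(2_000_000) > A_VALUE)  # True: B eventually beats A
print(value_B_bounded(10**12) > A_VALUE)       # False: B never beats A
```

Either rule is coherent on its own terms; the trouble is that the first yields the Very Repugnant Conclusion, while the second declares additional utility effectively irrelevant past a certain point, which is exactly what Effective Altruists cannot accept.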

I hope the reasoning is clear enough from this sketch. If you are committed to the scope of utility mattering, such that you cannot just declare additional utility de facto irrelevant past a certain point, then there is no way for you to formulate a moral theory that can avoid being swamped by utility comparisons. Once the utility stakes get large enough—and, when considering the scale of human or animal suffering or the size of the future, the utility stakes really are quite large—all other factors become essentially irrelevant, supplying no relevant information for our evaluation of actions or outcomes.

Further discussion and some more cases

This structure does not just apply to population ethics problems, like the repugnant conclusion and the Very Repugnant Conclusion. As Cowen showed in a later paper, the same applies to both Pascal’s mugging and Hurka’s version of the St Petersburg paradox, both of which seem quite different due to their emphasis on probability and risk but which have the same fundamental structure: start with an obvious choice between two options (should I or shouldn’t I give my wallet to this random person who is promising me literally nothing?), then keep adding tiny amounts of utility to the bad option (until the person is promising you huge amounts of utility). Many of the problems of infinite ethics have this structure as well. While this structure doesn’t fit all of the standard challenges to utilitarianism (e.g., the experience machine), it fits many of them, and—relevantly—it fits many of the landmarks on the way to crazy town.

Indeed, in section five Cowen comes close to suggesting a quasi-algorithmic procedure for generating challenges to utilitarianism.[9] You just need a sum over a large number of individually-imperceptible epsilons somewhere in your example, and everything else falls into place. The epsilons can represent tiny amounts of pleasure, or pain, or probability, or something else; the large number can be extended in time, or space, or state-space, or across possible worlds; it can be a one-shot or repeated game. It doesn’t matter. You just need some Σ ε and you can generate a new absurdity: you start with an obvious choice between two options, then keep adding additional epsilons to the worse option until either utility vanishes in importance or utility dominates everything else.
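As a rough sketch, and purely my own rendering of the pattern (nothing like this appears in Cowen’s paper), the recipe looks something like this:

```python
# Sketch of the 'sum over epsilons' recipe: begin with an obvious choice where
# A clearly beats B, then pile imperceptible increments of value onto B.

def epsilons_needed(value_A: float, value_B: float, epsilon: float = 1e-6) -> int:
    """Count how many imperceptible increments it takes for B to overtake A."""
    n = 0
    while value_B + n * epsilon <= value_A:
        n += 1
    return n

# The epsilons can stand for tiny pleasures, pains, or probabilities. Some finite
# (if enormous) number of them overturns the initial verdict, and the theorist
# must then either accept the reversal (swamping) or deny that the increments
# kept counting (marginal value vanishing).
print(epsilons_needed(1.0, 0.0))  # on the order of a million increments
```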

In other words, Cowen can just keep generating more and more absurd examples, and there is no principled way for you to say ‘this far but no further’. As Cowen puts it:

Once values are treated as commensurable, one value may swamp all others in importance and trump their effects… The possibility of value dictatorship, when we must weigh conflicting ends, stands as a fundamental difficulty.

A popular response in the Effective Altruist community to problems that seem to involve something like dogmatism or ‘value dictatorship’—indeed, the response William MacAskill gave when Cowen himself made some of these points in an interview—is to invoke moral uncertainty. If your moral view faces challenges like these, you should downweigh your confidence in it; and then, if you place some weight on multiple moral views, you should somehow aggregate their recommendations, to reach an acceptable compromise between ethical outlooks.

Various theories of moral uncertainty exist, outlining how this aggregation works; but none of them actually escape the issue. The theories of moral uncertainty that Effective Altruists rely on are themselves frameworks for commensurating values and systematically ranking options, and (as such) they are also vulnerable to ‘value dictatorship’, where after some point the choices recommended by utilitarianism come to swamp the recommendations of other theories. In the literature, this phenomenon is well-known as ‘fanaticism’.[10]
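To see the problem in miniature, consider maximising expected choiceworthiness, one standard approach to moral uncertainty. The two-theory setup and the numbers below are my own invention, and the assumption that choiceworthiness can be placed on a common cardinal scale is itself doing a lot of work:

```python
# Toy aggregation under moral uncertainty by expected choiceworthiness.
# A small credence in a theory with astronomical stakes dictates the verdict.

credences = {"total_utilitarianism": 0.05, "common_sense_morality": 0.95}

# How choiceworthy each theory rates some speculative high-stakes action,
# on an (assumed) common cardinal scale.
choiceworthiness = {
    "total_utilitarianism": 1e9,    # vast expected future utility
    "common_sense_morality": -100,  # a clear wrong by ordinary lights
}

expected = sum(credences[t] * choiceworthiness[t] for t in credences)
print(expected)  # ~5e7: the 5%-credence theory swamps the 95%-credence one
```

This is ‘fanaticism’ in miniature: the aggregation framework is itself a universal-domain ranking of options, so Cowen’s argument simply recurs one level up.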

Once you let utilitarian calculations into your moral theory at all, there is no principled way to prevent them from swallowing everything else. And, in turn, there’s no way to have these calculations swallow everything without them leading to pretty absurd results. While some of you might bite the bullet on the repugnant conclusion or the experience machine, it is very likely that you will eventually find a bullet that you don’t want to bite, and you will want to get off the train to crazy town; but you cannot consistently do this without giving up the idea that scale matters, and that it doesn’t just stop mattering after some point.

Getting on a different train

Pluralism and universal domain

Is this post a criticism of Effective Altruism? I’m not actually sure. For some personal context: I’ve signed the Giving What We Can pledge, and I’m proud of that; I find large parts of Effective Altruism appealing, although I’m turned off by many other aspects; I think the movement has demonstrably done a lot of good. It’s too easy, and completely uncharitable, to simply write the movement off as inconsistent. And yet, when it comes to the theory of Effective Altruism, I’m not sure how it gets off the ground at all without leading to absurdities. How do I square this?

In his recent interview with MacAskill, Cowen said the following (edited for clarity):

At the big macro level (like the whole world of nature versus humans, ethics of the infinite, and so on) it seems to me utilitarianism doesn’t perform that well. Isn’t the utilitarian part of our calculations only a mid-scale theory? You can ask: does rent control work? Are tariffs good? Utilitarianism is fine there. But otherwise, it just doesn’t make sense…

[W]hy not get off the train a bit earlier? Just say: ‘Well, the utilitarian part of our calculations is embedded within a particular social context, like, how do we arrange certain affairs of society. But if you try to shrink it down to too small (how should you live your life?) or to too large (how do we deal with infinite ethics on all of nature?) it just doesn’t work. It has to stay embedded in this context.’ Universal domain as an assumption doesn’t really work anywhere, so why should it work for the utilitarian part of our ethics?

… It’s not that there’s some other theory that’s going to tie up all the conundrums in a nice bundle, but simply that there are limits to moral reasoning, and we cannot fully transcend the notion of being partial because moral reasoning is embedded in that context of being partial about some things.

There are two ways to read this suggestion. The first is that Cowen just wants us to accept that, past a certain point, the value of additional utility vanishes quickly to zero: when we zoom out too far, utility becomes de facto meaningless. (This reading is supported especially by the first paragraph I quoted.) But there’s a different way to read his suggestion (which is supported more by the third paragraph I quoted), which is that rather than taking the logic of his own argument at face value, Cowen is urging MacAskill to take a step back and reject one of its presuppositions: universal domain.[11]

If we accept a certain amount of incommensurability between our values, and thus a certain amount of non-systematicity in our ethics, we can avoid the absurdities directly. Different values are just valuable in different ways, and they are not systematically comparable: while sometimes the choices between different values are obvious, often we just have to respond to trade-offs between values with context-specific judgment. On these views, as we add more and more utility to option B, eventually we reach a point where the different goods in A and B are incommensurable and the trade-off is systematically undecidable; as such, we can avoid the problem of utility swallowing all other considerations without arbitrarily declaring it unimportant past a certain point.

MacAskill responds to Cowen by arguing that ‘we should be more ambitious than that with our moral reasoning’. He seems to think that we will eventually find a theory that ‘ties up all the conundrums’—perhaps if we hand it over to ‘specially-trained AIs’ for whom ethics is ‘child’s play’ (as he writes in What We Owe the Future). But it’s not clear that ‘ambition’ has anything to do with it. Even the smartest AI could not eliminate the logical contradictions we face in (say) population ethics; at most, it could give us a recommendation about which bullet to bite. Likewise, Alexander seems to think that (something like) this position is ‘anti-intellectual’. It is unsystematic, to be sure, but it’s no more anti-intellectual than Hume was: it’s not an unprincipled rejection of all thinking, but an attempt to figure out where we run up against the limits of moral theorising.

Such a position would rule out utilitarianism as a general-purpose theory of morality, and even rule out its more limited role as a theory of the (supposed) part of morality philosophers call ‘beneficence’. But it wouldn’t stop us from using utilitarianism as a model for moral thinking, a framework representing certain ways we think about difficult questions. It might be especially relevant to thinking about trade-offs where we have to weigh up costs and benefits—especially if, as Barbara Fried has argued, it is the only rigorous ethical framework that is able to face up to uncertainty and scarcity. But, like all models, it would only be valid within a certain context. Utilitarianism can remain a really important aspect of moral reasoning, just not in the way that we are familiar with from universal moral theories.

To state my own personal view, I think I am probably >60% confident in something like this position being right. Some kind of consequentialist thinking seems pretty applicable in a lot of situations, and to often be very helpful. We can reject universal domain, and thus value commensurability, while retaining this insight. Cowen would not be the first to make this claim: Isaiah Berlin, the most famous 20th century defender of value incommensurability, was convinced that ‘Utilitarian solutions are sometimes wrong, but, I suspect, more often beneficent.’ Utilitarianism is not a general-purpose theory of the good; but it is an important framework that can generate important insights.

And this seems to be all Effective Altruism needs. Holden Karnofsky recently made a call for pluralism within Effective Altruism: the community needs to temper its central ‘ideas/themes/memes’ with pluralism and moderation. But Karnofsky argues, further, that the community already does this: ‘My sense is that many EAs’ writings and statements are much more one-dimensional … than their actions.’ In practice, Effective Altruists are not willing to purchase theoretical coherence at the price of absurdity; they place utilitarian reasoning in a pluralist context. They may do this unreflectively, and I think they do it imperfectly; but it is an existence proof of a version of Effective Altruism that accepts that utility considerations are embedded in a wider context, and tempers them with judgment.

The core of Effective Altruism

Effective Altruists can’t be entirely happy with Cowen’s position, of course. They think utilitarian reasoning should be applicable to examples beyond those drawn from economics textbooks—at the very least, they think it should be relevant to decisions around donations and career choices.

We can be more precise about what Effective Altruism asks of utilitarian reasoning. Effective Altruism places far more weight on the optimising or maximising aspects of utilitarianism than even previous utilitarians did. This goes back to my previous comments on scale, and opposition to scope-insensitivity: the ‘hard core’ of Effective Altruism is the idea that, at least for most of us, the ethically relevant differences between the options we face are huge, despite the fact that we often tend to act as if they were negligible or unknowable.[12] Given this premise, the appeal of utilitarianism is immediate. Utilitarianism is an optimising framework: its focus is on achieving the best possibilities, rather than on merely identifying acceptable options. This is an immensely (even overwhelmingly) useful feature of the utilitarian framework for Effective Altruists; it gives them reason to use it in their moral reasoning even at large scales. MacAskill should not have challenged Cowen on the grounds of ambition; rather, he should have challenged Cowen’s naïve position that the limits of utilitarianism should be defined with reference to scale.

But, even as Effective Altruists are excited about the optimisation implicit in utilitarianism, they have to be wary about its flip-side: the potential for fanaticism and value dictatorship. Utilitarian reasoning needs to be bounded or restrained in some way. But Cowen’s argument shows that there can be no principled, systematic account of these bounds and restraints. If we hope to represent questions of scope using utilitarian reasoning, without having utility swallow all other values, there will have to be ambiguities, incommensurabilities, and arbitrariness; as I worried when reading Sam’s post, there are no principles we can use to decide when to get off the train to crazy town.

I do not think this is a huge problem. To borrow a point from Bernard Williams, there is no particular principled place to draw a line, but it is nonetheless entirely principled to say we need a line somewhere. And Cowen suggests, rightly I think, that the line is drawn based on the (possibly arbitrary) social contexts within which our moral reasoning is embedded. But the problem is just that Effective Altruism doesn’t have a good account of the context of moral reasoning, and thus no understanding of its own limits.

To be sure, a story is sometimes told (largely unreflectively) about why the ‘hard core’ of Effective Altruism is true even as most people act as if it isn’t; this story could tell us something about the context for scale-sensitive reasoning. It is derived from Derek Parfit’s work,[13] and goes something like this. In the small, pre-modern societies where many of our moral ideas were developed, we could affect only small numbers of people in our communities; in such societies, an ethic that focussed our attention locally and largely ignored scale was reasonable. In the globalised and interconnected modern world, however, human action could (potentially, or in expectation) affect many millions; we might even be at the ‘hinge of history’. In such a situation, the ‘spread’ of possible actions is much larger: there are far more options available to us, and at the same time far more morally relevant differences between those actions. There is thus a mismatch between our values and our reality, and it is incumbent upon us to be more explicit and rigorous in thinking about the scope of our actions, using frameworks oriented towards maximisation.

There are variations on this story, of course, but I hope that some version of it is recognisable to at least some readers. Fleshed out with more details, I think there is every chance it could be a plausible historical narrative. But in Parfit’s work, it was no more than a conjectural just-so story; and it has only become more skeletal since then, leading to a lot of bad ‘explanations’ for why people’s moral judgments about large numbers are (supposedly) unreliable. So while Effective Altruists are committed to their ‘hard core’, they have no good explanation for why it is true—and thus no account of the context, and limits, of their own reasoning.

As it happens, I think the ‘hard core’ of Effective Altruism probably is true. It’s definitely true in the limited realm of charitable donations, where large and identifiable differences in cost-effectiveness between different charities have been empirically validated. It becomes murkier as we move outwards from there—issues of judgment, risk/reward trade-offs, and unknown unknowns make it less obvious that we can identify and act on interventions that are hugely better than others—but it’s certainly plausible. Yet, while Effective Altruism has made a prominent and potentially convincing case for the importance of maximisation-style reasoning, this style of reasoning is simultaneously dangerous and liable to fanaticism. The only real solution to this problem is a proper understanding of the context and limits of maximisation. And it is here that Effective Altruism has come up short.

Conclusion and takeaways

I believe that Effective Altruism’s use of rigorous, explicit, maximisation-oriented reasoning is both very novel and (often) good. But if Effective Altruists don’t want to end up in crazy town, they need to start getting on a different train. They need a different understanding of their own enterprise, one grounded less in grand systematic theories of morality and more in an account of the modern world and recent history. Precisely because they lack that, Effective Altruists are simultaneously drawn towards and repulsed by the most absurd outer limits of utilitarianism. I think this marks a failure of seriousness; it certainly marks a failure of self-understanding. It marks something the Effective Altruist community needs to rectify.

None of this means abandoning the weirder sides of the movement. Many of the parts of Effective Altruism that are considered ‘weird’ relative to the wider culture are unrelated to the dynamics discussed in this post: for example, Alexander has repeatedly emphasised that concerns about risks from AI are logically independent from the philosophy of ‘longtermism’ or utility calculations, and should be treated separately.

But it does mean getting clearer about what exactly Effective Altruism is, and the contexts in which it makes sense: being more rigorous and explicit about why it is important to use systematic maximisation frameworks, what such frameworks are intended to do, and what countervailing considerations are most important to pay attention to. And this will likely require facing up to the limits of consequentialism, and thinking about situations in which consequentialist reasoning harms moral thinking more than it helps.

I don’t know what the consequences of this might be for Effective Altruism. Maybe it would ‘leave everything as it is’, and have no practical ramifications for the movement; I doubt it. Maybe, as suggested by a recent post by Lukas Gloor which discusses similar themes, it would create space for alternatives to ‘longtermism’, via a rejection of some of the arguments-from-systematicity that underpin it. Maybe it would lead to a rethinking of Effective Altruist approaches to politics, policy, and other contexts where game-theoretic considerations are paramount and well-honed judgment is necessary, and where explicit consequentialism can thus potentially cause serious problems. I don’t know; one can’t typically predict the outcomes of reflection in advance.

Perhaps you, the reader, don’t feel any of this is necessary. Perhaps you follow Alexander: you think that too much sweeping criticism of Effective Altruism has been produced, and that the movement should just get on with the object-level business of doing good while keeping an eye on specific ‘anomalies’ that don’t fit its assumptions and that could suggest deeper problems. This is a reasonable position to take. Spending too much time in the armchair dreaming up criticisms is rarely the best way to identify the real problems in a movement or set of ideas. But the flip-side of this reasoning is that, when an anomaly does arise, Effective Altruism should be able to focus in on it; and it must be open to explanations of the anomaly that unite it with other questions, even if those explanations are critical.

I think that the ‘train to crazy town’ phenomenon, the lack of clarity about whether and when utilitarian reasoning runs out, is just such an anomaly—one that hurts Effective Altruism’s ability to achieve its stated goals (both within and without the movement). I’ve tried to give an explanation that connects this anomaly to other problems in moral philosophy, and potentially suggests a way forward.[14] You may disagree with my explanation; I am by no means certain of my own core claims. But some diagnosis of the problem is necessary. Absent such a diagnosis, Effective Altruists will keep getting on the train to crazy town, and non–Effective Altruists will continue to recognise this (implicitly or explicitly) and be put off by it. It’s a problem worth engaging with.

  1. ^

    I believe the metaphor is Ajeya Cotra’s, from her appearance on the 80,000 Hours Podcast. But its recent proliferation seems to be down to William MacAskill, who used it in response to one of Tyler Cowen’s arguments on the latter’s podcast.

  2. ^

    Sam Atis, ‘The train to Crazy Town’. Sam attributes this example to Tyler Cowen, since Cowen has referred to it a number of times, but it originates in Thomas Hurka, ‘Value and Population Size’, p.499. Thank you to Cowen for pointing this out to me.

  3. ^

    Christoph Fehige, ‘A Pareto Principle for Possible People’, pp.534–535.

  4. ^

    For instance: John Rawls’ separateness of persons argument (discussion); Bernard Williams’ integrity objection (discussion); Thomas Nagel’s analysis of impartiality (discussion); T. M. Scanlon on aggregation (discussion).

  5. ^

    Nota bene: in his paper, Cowen uses the term ‘pluralism’ to mean something different; nothing turns on this.

  6. ^

Cowen, working in a population ethics context, writes about total utility specifically, but the same logic applies to average utility or person-affecting utility, which face similar problems (e.g., the Absurd Conclusion). I will equate utility with total utility for the rest of this post, with this footnote hopefully marking that this is without loss of generality.

  7. ^

    It doesn’t include non-consequentialist theories with no room for purely impartial, aggregative beneficence. But this doesn’t matter much for my purposes, because such theories would not be compatible with Effective Altruism—since Effective Altruism is an attempt to pursue the good most effectively, it doesn’t make complete sense without an independent moral reason to pursue ‘the good’ in aggregate. Two papers on this theme which focus specifically on longtermism are Karri Heikkinen, ‘Strong Longtermism and the Challenge from Anti-Aggregative Moral Views’, and Emma J. Curran, ‘Longtermism, Aggregation, and Catastrophic Risk’ (public draft).

    (As an aside, it is quite common for Effective Altruists to argue that, independent of all of these issues, any acceptable moral theory must include impartial beneficence in order to retain ‘basic moral decency’; but this seems to me to be simply false, as someone with no theory of beneficence can still end up recommending many of the same actions and dispositions, just so long as they recommend them for different reasons. This has recently been discussed in an Effective Altruist context by Lukas Gloor, ‘Population Ethics Without Axiology: A Framework’; for more detailed philosophical discussion, see Philippa Foot, ‘Utilitarianism and the Virtues’.)

  8. ^

    These can be possible actions, or states of affairs, or intentions—whatever you want to evaluate.

  9. ^

    Philosophers might appreciate an analogy with Linda Zagzebski’s great paper on ‘The Inescapability of Gettier Problems’; in a nearby possible world, Cowen might conceivably have written ‘The Inescapability of Repugnant Conclusions’.

  10. ^

    While it might not be immediately obvious that the ‘moral parliament’ framework in particular falls victim to the problem of fanaticism, the issue here is that this framework as introduced is little more than a ‘metaphor’, and even fleshed out versions of the framework fail to meet the universal domain condition (cf. Newberry and Ord, ‘The Parliamentary Approach to Moral Uncertainty’, p.9). Thus, for those who seek to use moral uncertainty as a way to avoid problems for universal-domain moral theories, invoking the moral parliament is equivalent to an admission of defeat. I discuss the option of denying the universal domain assumption in the next section.

  11. ^

    I think the latter reading is closer to the truth: in Cowen’s other paper on the repugnant conclusion, ‘Resolving the Repugnant Conclusion’, he quite explicitly rejects universal domain and only allows moral comparisons to be valid over bounded sets of options. But nothing I say in this section should be taken as representative of Cowen’s actual views.

  12. ^

Something like this, albeit stated in more explicitly numerical and consequentialist language, is captured in the three premises of Benjamin Todd’s ‘rigorous argument for Effective Altruism’ (about 29 minutes in).

  13. ^

    See (e.g.) Parfit, Reasons and Persons, §31 ‘Rational Altruism’.

  14. ^

    I don’t think I’m suggesting a new ‘paradigm’, whatever overblown meaning that word might have outside of the context of science; I am hoping to make a more modest suggestion than that, just a few new questions and arguments.