As EA embraces more avenues for change, we must change our message

Introduction

A decade ago, when effective altruism (EA) was focused primarily on global poverty and preventable diseases, resource allocation decisions were extremely difficult. Methods like randomized controlled trials (RCTs) were popularized to compare interventions, but they were far from perfect. Even with a singular standard—save the most lives, or quality-adjusted life years (QALYs), per dollar—EAs hotly debated which charities were most deserving of cash.
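
To make that shared yardstick concrete, here is a minimal sketch of the kind of single-metric comparison early EA relied on. The programs, costs, and QALY figures below are entirely made up for illustration; nothing here reflects real charity data.

```python
# A toy cost-effectiveness comparison on a single shared metric (dollars per QALY).
# All numbers are hypothetical and chosen purely for illustration.

def cost_per_qaly(total_cost_usd: float, qalys_gained: float) -> float:
    """Cost-effectiveness expressed as dollars spent per quality-adjusted life year gained."""
    return total_cost_usd / qalys_gained

# Two hypothetical interventions measured on the same yardstick.
bednet_program = cost_per_qaly(total_cost_usd=1_000_000, qalys_gained=20_000)    # $50 per QALY
deworming_program = cost_per_qaly(total_cost_usd=1_000_000, qalys_gained=8_000)  # $125 per QALY

print(f"Hypothetical bednet program:    ${bednet_program:.0f} per QALY")
print(f"Hypothetical deworming program: ${deworming_program:.0f} per QALY")
```

Even this tidy arithmetic hides the hard part: the QALY estimates feeding into it were themselves the subject of fierce debate.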

Today, many causes beyond global poverty are considered viable EA interventions, and these interventions are extremely different from each other. Some are near-term; some are long-term; some are still measured in QALYs; some are based on the probability of human extinction. This means that already difficult resource allocation decisions have gotten much, much harder. In fact, I think that in today’s EA landscape, making these resource allocation decisions requires abandoning some of the philosophical underpinnings of EA.

Revising the ethical backbone of a movement so deeply rooted in philosophy is scary and challenging. Most of you know all too well that Sam Bankman-Fried’s recent actions have prompted the beginnings of a reckoning within the EA community. Although the news about FTX is timely, this post has little to do with Bankman-Fried—in fact, I wrote it before FTX crashed. Instead of specifically reacting to that news, I seek to make the broader claim that long before crypto fortunes were bankrolling billion-dollar EA foundations, there was reason to change EA’s dogma.

In this post, I claim that the maximalist “do the most good” credo can no longer sustain EA as a socio-political movement. Instead, I argue that “do a lot of good” is a more logical and more effective rallying cry for modern EAs.

Part 1: “Do a lot of good” is a logical rallying cry: The new quantification problem

The first part of this post is dedicated to showing that “do a lot of good” is logical and in line with EA’s overall mission. This line of reasoning is not new. Holden Karnofsky recently argued that the EA community is better off jettisoning the maximizing principle and embracing more moderation and pluralism.

Let’s dive in with a very old argument against maximization:

  1. Ought implies can.[1] Or, in other words, if an agent is morally required to perform some action, that action must be possible for the agent to perform.

  2. So, if we are morally required to “do the most good”, it must be possible for us to know which action does the most good.

  3. It is impossible to know which action does the most good.

  4. Therefore, effective altruists should not always try to perform actions that do the most good, but should instead perform actions that they can be reasonably sure do a lot of good.

Premises 1-3 of this argument form the commonly articulated epistemic problem with consequentialism. The conclusion (4) is what I seek to establish in this section. However, before I get there, I’d like to rehash and expand on some of the evidence for premise 3.

Early critiques of EA’s global poverty strategy prompted skepticism that EA methodology could accurately determine which causes generate the most welfare per dollar. These critiques included problems with RCTs; the fact that EA ignored smaller, grassroots organizations and longitudinal data; and the argument that human well-being cannot be meaningfully quantified. EAs, largely at GiveWell, debated the legitimacy of these concerns. I do not need to rehash these debates to prove my point: deciding which cause(s) to focus on (tropical diseases or cash transfers or deworming or funding PSAs encouraging people to see a doctor) was extremely difficult, even with one, near-term, shared goal and decently trustworthy data.

Today, based on the content of this forum alone, it’s clear that much of the EA community has expanded its goals beyond ending extreme poverty and preventable death from tropical diseases. Increasingly, AI alignment, existential and/or extinction risk, U.S. foreign policy, poverty relief in the U.S., political campaigns, and many other domains have garnered serious attention from EA.

Discussion around these topics is markedly different from early EA debate chiefly because it is less focused on measuring aggregate welfare increases per dollar. Faced with the decision to support, for example, extinction risk reduction or animal welfare, you cannot perform a simple comparison of QALYs saved. There is no easy way to compare the aggregate welfare increase brought about by a marginal improvement in our asteroid deflection technology with that of saving 100,000,000 male chicks from being ground up at a factory farm. I came up with three main reasons why quantification is more difficult in today’s EA cause landscape:

  1. Disparate causes do not necessarily share common metrics.

  2. Longtermist outcomes are inherently difficult to measure.

  3. Institutional change involves ripple effects that elude both prediction and causal determination.

These reasons are not mutually exclusive. Comparing disparate causes (reason 1) is often difficult because it involves comparing a long-term possibility to a near-term certainty (reason 2) or comparing direct relief to institutional change (reason 3). Regardless, I’d like to expand on each reason in turn.

Lack of common metrics:

Perhaps the best example of the lack of common metrics is one I have already mentioned: comparing animal welfare to human welfare. Our current understanding of psychology hardly allows us to measure what makes human lives go well. Comparing animal lives to human lives requires a level of understanding of psychology far beyond the limits of modern science. To get around this, EAs have proposed new frameworks to compare causes. For example, Holden Karnofsky suggests that we use three metrics—importance, tractability, and neglectedness—to prioritize issues. This framework is undeniably insightful and should be consulted as we make resource allocation decisions. However, the importance and tractability inputs are often impossible to compare across today’s EA priorities. Is pandemic preparedness more important than AI alignment? Similarly, is it more tractable to prepare ourselves for a lab-grown superbug or to ensure AI doesn’t turn against us? I’ve yet to learn of a framework that allows for comparison of efficacy across new EA causes.
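
To see why those inputs matter, here is a minimal sketch of how an importance/tractability/neglectedness-style comparison might be operationalized. The 0-10 scales, every score, and the choice to multiply the factors are my own illustrative assumptions, not Karnofsky’s method.

```python
# A toy prioritization score in the spirit of importance/tractability/neglectedness.
# The 0-10 scales and every number below are hypothetical; the point is that the
# framework still demands numeric inputs we often have no principled way to set.

from dataclasses import dataclass

@dataclass
class Cause:
    name: str
    importance: float     # how big is the problem? (0-10, hypothetical scale)
    tractability: float   # how much progress would more resources buy? (0-10)
    neglectedness: float  # how few resources already go to it? (0-10)

    def priority_score(self) -> float:
        # One common way such frameworks are operationalized: multiply the factors.
        return self.importance * self.tractability * self.neglectedness

causes = [
    Cause("Pandemic preparedness", importance=8, tractability=5, neglectedness=6),
    Cause("AI alignment", importance=9, tractability=3, neglectedness=7),
]

for cause in sorted(causes, key=Cause.priority_score, reverse=True):
    print(f"{cause.name}: {cause.priority_score():.0f}")
```

Nudge the tractability guesses by a point or two and the ranking flips, which is precisely the problem: for today’s causes we have little basis for setting those numbers in the first place.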

Longtermism:

All of this gets even more complicated when one considers “longtermist” interventions. Consider again Karnofsky’s importance metric. Is 8°C of global warming more important than inequitable values being “locked in” for centuries? I don’t know. In What We Owe the Future, William MacAskill proposes significance, persistence, and contingency as the three main factors for evaluating longtermist issues.[2] Although many EA thought leaders exude some epistemic confidence in determining the significance and contingency of longtermist issues, I’m less sure. Toby Ord sets the probability of an extinction-level engineered pathogen this century at 3%.[3] Even if this prediction were made with full omniscience of the present and past, I hesitate to give it much credence. Today, we’ve identified 98% of extinction-threatening asteroids. If you had asked the world’s best astrophysicists in 1920 the chances humanity would accomplish this feat within a century, I’d guess many of them would have said 0%.

Institutional reform:

The global poverty debate generally steered clear of institutional reforms (i.e. reforms that address the so-called “root causes” of poverty). This facet of EA has changed dramatically in the last few years. Today, many of the interventions EAs favor are deeply institutional (e.g. changing the United States’ international aid priorities). This is a sensible shift in strategy, especially because much early criticism of EA was that it ignored the power of institutions.[4] However, embracing institutional change comes at a cost. Distributing bed nets, deworming children, and initiating cash transfers are fundamentally simple interventions, and their success can be quantified in basic terms (# of nets, # of children treated, $ transferred). Success at lobbying the US government to change foreign aid policies cannot be quantified in the same way. Sure, changes in the amount and allocation of aid are plain to see, but it’s next to impossible to determine the causal structure of those changes. Did my advocacy change the minds of the C-suite of Tyson Foods, or was it that NYT article last week? This inability to establish causation means that institutional interventions often elude quantification. In other words, in today’s EA landscape, many actors don’t know how much good they are doing or what alternative work could do more good.

There are two ways we might go about solving this newly complicated quantification problem:

The weak (or thin) conclusion:

EAs ought not compare the effectiveness of interventions that do not share common goals (e.g. saving farm animals versus preventing human extinction) and should instead allocate resources to the proven “best” intervention within a set of goals determined to be extremely important.

The strong (or thick) conclusion:

EAs should focus much less on impact measurement (even to compare strategies that share a common goal) and instead allocate resources more widely to many interventions they believe (given their epistemic position and personal circumstances) do a lot of good.

I will not endorse either of these conclusions in this piece, but both merit serious consideration. Having already given what I think is a provocative case for each, I’ll turn to some counterarguments.

Possible counterarguments:

#1: EA is, at its core, about impact measurement. Even if it’s extremely difficult in the new landscape, we still must quantify the efficacy of various possible interventions to allocate resources.

I agree that impact quantification is indeed the primary differentiator between EA and other philanthropic movements. But I also think that asking people to “do a lot of good” aligns with this guiding principle. Peter Singer’s early arguments for reform in the philanthropic space were effective in showing many folks that donations to large university endowments or well-established arts programs did not, in fact, do very much good at all. Although I argue that the good done by researching AI alignment strategies cannot be compared to the good done by lobbying against inhumane farming practices, I am quite sure they both have the potential to do a lot of good. Moreover, if, during the course of one’s efforts in either of these spaces, she determines that her work is not doing much good, she ought to stop and try something else. In essence, I think the “do a lot of good” approach allows for evidence-centered work with proven impact without the need to constantly justify why one is working in her chosen space and not for some other cause.


#2: If we accept the weak conclusion, what tools can we use to make resource allocations across disparate causes?

I will leave this as an open question. While it’s true that billionaire EA philanthropists like Cari Tuna and Dustin Moskovitz have to grapple with this objection to the weak conclusion, most of us don’t. We each have a certain set of opportunities to do good (based on our material and epistemic circumstances), and given those, we can choose our own altruistic journey. Most of us do not have copious amounts of extra money or time to give to EA causes, so we can use our narrow slice of the pie to chip away at an important, neglected problem and let the large research teams at organizations like Open Philanthropy decide how the rest of the pie is divided.

#3: If we accept the strong conclusion, resource allocation will be impossible without comparing efficacy. We would be returning to the dark ages of philanthropy when money was given seemingly indiscriminately and unworthy organizations were granted billions.

I do not think that this follows from the strong conclusion. A wider and wider range of causes and intervention methods is accepted under the EA umbrella every year. (Most EAs used to give only to GiveWell’s nine-or-so recommended charities—that’s just not true anymore.)[5] If you are cynical about impact quantification, you probably consider this intellectual and moral progress. I believe that we, as a community, can allocate resources to a large number of impact-centered organizations without doling out billions to unproven, inefficient organizations with reckless abandon. In short, we no longer need the extremely demanding “always do the most good” criterion to prevent unworthy interventions from creeping into the EA space.

Part 2: “Do a lot of good” is an effective rallying cry: How to grow a social movement for good

In my opinion, “do the most good” is less effective rhetoric than “do a lot of good”. This claim is based more on my intuitions than on science, but here is my basic reasoning.

One interesting feature of consequentialism is that, in an effort to actualize the best outcome (fulfill the most preferences, generate the most utils, etc.), actors ought not always preach what they practice. Imagine your friend is choosing between options A, B, and C. In your consequentialist analysis, A is the most moral action, C is the least, and B falls somewhere in between. If you were in your friend’s shoes, you would choose A. However, your friend assures you that she will not choose A. In this case, all else being equal, you should try to convince her to choose B. This is a strange conclusion. Under a deontological moral theory, you would most likely try to convince your friend to perform action A regardless of whether she’ll actually do it. A is, after all, the right thing to do, and most deontological theories advocate for preaching moral rightness no matter what. Consequentialists, though, in an effort to actualize the best possible outcome, base their preaching on the probabilities that certain outcomes will occur given their actions.
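
To make the structure of that reasoning explicit, here is a minimal sketch in expected-value terms. The “moral value” numbers and the compliance probabilities are invented purely for illustration.

```python
# A toy expected-value version of the A/B/C example above.
# Values and probabilities are hypothetical, chosen only to show the structure.

def expected_value(p_complies: float, value_if_complies: float, value_otherwise: float) -> float:
    """Expected moral value of advocating an option, given the chance the friend
    actually takes it (and what she is assumed to do otherwise)."""
    return p_complies * value_if_complies + (1 - p_complies) * value_otherwise

VALUE = {"A": 10, "B": 6, "C": 1}  # hypothetical moral value of each option

# Your friend has assured you she will not choose A, so preaching A rarely works;
# preaching B, a less-good option, is far more likely to stick.
ev_preach_A = expected_value(p_complies=0.05, value_if_complies=VALUE["A"], value_otherwise=VALUE["C"])
ev_preach_B = expected_value(p_complies=0.70, value_if_complies=VALUE["B"], value_otherwise=VALUE["C"])

print(f"Expected value of preaching A: {ev_preach_A:.2f}")  # 1.45
print(f"Expected value of preaching B: {ev_preach_B:.2f}")  # 4.50
```

On these made-up numbers, the consequentialist preaches B even though A is the better action, which is exactly the strange conclusion described above.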

The conclusion that consequentialists sometimes ought to preach less than perfectly moral actions is important in EA. Consider two other movements: Giving What We Can and Meatless Monday. William MacAskill, following Peter Singer et al., surely thinks that the extremely rich should give much more than 10% of their income to be perfectly moral. Under his moral calculus, they should probably give almost everything away. However, telling people to give away 99% of their income is not likely to catch on. Instead, MacAskill and others at Giving What We Can chose to advocate for giving 10% of one’s income. This position both sounds reasonable to a large audience and would assuredly change the world if broadly adopted.

Similarly, many vegetarians and vegans tell their friends and coworkers to try “Meatless Mondays”. Most of these animal rights advocates wish that everybody would avoid meat altogether, but asking their friends and coworkers to go cold turkey (literally) would have less impact than first asking for a more modest reduction in meat consumption.

The EA movement was founded on radical ideas that came out of consequentialist reasoning. Although many current members of the movement do not consider themselves hardcore consequentialists, outcome-oriented analysis is still a key tenet of EA. That outcome-oriented thinking should not be limited to deciding what actions we take ourselves. We must also consider the consequences of our private and public rhetoric.

Effective altruists often say things like, “giving to community theater is less effective than giving to global poverty relief, so giving to community theater is wrong”. I suspect that this rhetoric generates little change in the general populace compared with “here are 10 reasons why you should give more to global poverty relief”. Most people, especially people who don’t study philosophy, don’t like being told that their actions are wrong. Telling somebody that an act of kindness (like making frequent donations to a community theater) is actually immoral is likely to end a conversation before any change can be made.

In my analysis, our community has generally adopted this vein of thinking. Most EAs realize that if we want to gain wider reach, we cannot tell people they should donate so much to the Against Malaria Foundation that they end up living in poverty themselves. However, as new causes enter the EA space, we’ve been slow to apply this reasoning to our new resource allocation debates. It is not very useful to tell somebody who is passionate about AI alignment that working on animal welfare is more effective and that researching AI alignment is therefore immoral. I know that most EAs aren’t actually saying this aloud, but it is frequently implied.

We should continue to debate how resources should be allocated. We should continue to persuade people to donate their time and money to effective interventions. But we should also continue to seek to grow the EA movement. Doing so requires that we accept varying levels of commitment to EA and a wider range of activities that we consider candidates for doing good effectively. Asking folks to “do a lot of good” accomplishes these goals without losing sight of EA’s core mission.


Conclusion

My argument does not imply that everybody in the EA community is an “ineffective altruist” because nobody can prove her chosen intervention is best. In fact, I believe it implies the opposite. If we accept that we can’t know for certain how to do the most good, an effective altruist is somebody who—given her epistemic position and individual circumstances—frequently chooses actions (big and small) she reasonably believes do a lot of good in aggregate.

I suspect that some of you are frustrated by this conclusion. If you read this forum, you likely have an EA cause that you think ought to be prioritized. And I do agree that EA is still extremely funding-constrained. You might therefore be tempted to argue: “well, even if I can’t know that my cause does the most good, it does more good than cause X”. This is an unfortunate response for two reasons: (1) it involves a level of epistemic certainty about the future that I think is unfounded[6]; and (2) it implies that other EAs are dedicating their lives to unworthy causes.

I’ll close by saying that I think the EA community is unique and fantastic in many ways. However, just like all other organizations, we ought to work towards a culture of inclusivity. The increasingly broad focus of the movement, coupled with scarce resources, has the potential to pit us against ourselves. But a broader focus also gives us the opportunity to grow our community and advance our shared mission of improving the lives of people and animals. EAs now appreciate that the world faces a wide array of big problems. We can only solve them with a diverse coalition of change-makers, each bringing her own unique passions and perspectives.


I’m a recent philosophy B.A. grad interested in pursuing a career in EA, and this is my first post on the forum. Please reach out if you have any comments/critiques/questions; I would love to meet as many folks as possible and engage in any conversations around the future of EA!

  1. ^

    Kant, Immanuel. Critique of Pure Reason. A548/B576. p. 473.

  2. ^

    MacAskill, William. What We Owe the Future. 2022. pp. 254-5.

  3. ^

    Ord, Toby. The Precipice. 2020. p. 71.

  4. ^

    See Clough 2015: “Effective Altruism’s Political Blindspot”; or Herzog 2015: “(One of) Effective Altruism’s blind spot(s), or: why moral theory needs institutional theory”; or Srinivasan 2015: “Stop the Robot Apocalypse”.

  5. ^

    I’m well aware that the vast majority of EA funding still goes to global health and development, but it’s patently clear that a diverse set of interventions is getting more and more attention.

  6. ^

    If you are advocating for a “neartermist” cause, you are treating the long-term survival of the human species as too probable, and if you are advocating for a longtermist cause, you may be too confident in your chosen intervention’s effectiveness.