“Doing Good Best” isn’t the EA ideal

Holden recently claimed that EA is about maximizing, but that EA doesn’t suffer very much from this because we’re often not actually maximizing. I think both parts are incorrect[1]. I don’t think EA requires maximizing, and it certainly isn’t about maximizing in the naïve sense in which maximizing often occurs in practice.

In my view, Effective Altruism as a community has in many or most places gone too far towards this type of maximizing view, and it is causing substantial damage. Holden thinks we’ve mostly avoided the issues, and while he’s right that many possible extreme problems have been avoided, I think we have, in fact, done poorly because of a maximizing viewpoint.

Is EA about Maximizing?

I will appeal to Will MacAskill’s definition, first.

Effective altruism is:

(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and

(ii) the use of the findings from (i) to try to improve the world.

Part (i) is obviously at least partially about maximizing, in Will’s view. But it is also tentative and cautious, rather than a binary—so even if there is a single maximum, doing part (i) well means being very cautious about thinking we’ve identified that single peak. I also think it’s easy to incorrectly assume this appeals to utilitarian notions, rather than beneficentric ones. Utilitarianism is maximizing over everything; EA is about maximizing with the resources dedicated to that goal. It does not need to be totalizing, and interpreting it as “just utilitarianism” is wrong. Further, I think many community members are unaware of this distinction, which I see as a critical one.

But more importantly, part (ii), the actual practice of effective altruism, is not defined as maximizing. Very clearly, it is instead pragmatic. And pragmatism isn’t compatible with much of what I see in practice when EAs take a maximizing viewpoint. That is, even according to views where we should embrace fully utilitarian maximizing—again, views that are compatible with but not actually embraced by effective altruism as defined—optimizing before you know your goal works poorly.

Before you know your goal exactly, moderate optimization pressure towards even incompletely specified goals that are imperfectly understood usually improves things greatly. That is, you can absolutely do good better even without finishing part (i), and that is what effective altruism has been and should continue to do. But at some point continuing optimization pressure has rapidly declining returns. In fact, over-optimizing can make things worse, so when looking at EA practice, we should be very clear that it’s not about maximizing, and should not be.

Does the Current Degree of Maximizing Work?

It is possible in theory for us to be benefitting from a degree of maximizing, but in practice I think the community has often gone too far. I want to point to some of the concrete downsides, and explore how maximizing has been, and continues to be, damaging to EA. To show this, I will start with exclusivity and elitism, then go on to lack of growth, to narrow vision, and finally to premature focus on current interventions. Given all of that, I will conclude that the “effective” part of EA is pragmatic, and fundamentally should not lead to maximizing, even if you were to embrace a (non-EA) totalizing view.

Maximizing and Elitism

The maximizing paradigm of Effective Altruism often pushes individuals towards narrow goals, ranging from earning to give, to AI safety, to academia, to US or international policy. This is one way for individuals to maximize impact, but it leads to elitism, because very few people are ideal for any given job, and most of the areas in question select heavily for elite credentials and specific skills. Problems with this have been pointed out before.

It’s also the case that individual maximization is rarely optimal for groups. Capitalism harnesses maximization to provide benefits for everyone, but when it works, that leads to diversity in specializations, not crowding into the single best thing. To the extent that people ask “how can I be maximally impactful,” I think they are asking the wrong question—they are part of a larger group, and part of the world as a whole, and they can’t view their impact as independent from that reality.

I think this is even more unsustainable given the amount of attention EA receives, and it also validates an increasingly common, and I think correct, criticism: that EA sometimes pushes against actually important things, like fighting climate change, in search of so-called optimal things to focus on. When 1% of college students had heard of EA, directing them towards AI safety instead of climate change might have made sense. When 25% of them have heard of it, we really need to diversify—because we’re far less effective with 1% of the population supporting effective goals than we could be with 25%.

Self-Limiting EA, and Lack of Growth

Maximizing effective altruism is therefore also, unfortunately, self-limiting, since most people can’t, and shouldn’t, actually work on the single highest-leverage thing. To continue the above example, it is absolutely great for 25% of college students to aim to give 1% of their income to effective charities, and wonderful if some choose to give more, or to focus their careers on maximizing impact. But it is absolutely disastrous for the community if people think of EA as a binary: either you’re in, working on AI safety or biorisk or at an EA org, or you’re out, doing something the community seems to disapprove of, like fighting climate change or only donating. Because if that happens, we don’t grow. And we need to.

And before people say that we don’t need money: no. Effective altruism is incredibly funding constrained. For example, EA has a nearly unlimited funding opportunity in the form of GiveDirectly, and right now, in 2022, GiveWell is still short on donations to fund things that are 8x more effective than that. So the idea that, while the movement is growing rapidly and our funding base is potentially expanding further, we need to save money for later, just in case we might want another billion dollars for AI risk in five years, seems myopic.

A maximizing viewpoint can say that we need to be cautious lest we do something wonderful but not maximally so. But from a pragmatic viewpoint, saving money while searching for the maximum seems bad in practice. And Dustin Moskovitz can’t, in general, seem to give his money away fast enough to stop his net worth from increasing. So instead of maximizing, it seems, we could do more things that are less than maximally good, and get more good done.

Narrow Visions

A narrow vision of what is effective is also bad for the community’s ability to work on priorities—even the narrow ones. Even if all we wanted to focus on was AI risk, we have too few graphic designers, writers, mental health professionals, and historians to work on all of the things we think would improve our ability to work on AI safety. There are presumably people who would have pursued PhDs in computer science, and would be EA-aligned tenure-track professors now, but who instead decided to earn to give back in 2014. Whoops!

And today, there are definitely people who are deciding not to apply to PhD programs in economics or bioengineering to work on AI risk. Maybe that’s the right call in an individual case, but I’m skeptical that it’s right in general. Maximizing narrowly is already leading to bad outcomes in the narrow domains.

Premature Optimization

Finally, a focus on legible and predictable solutions is great when you’re close to the finish line, but in most areas we are not. This means that any optimization is going to be premature, and will therefore perform suboptimally.

For example, we fundamentally don’t know what approaches will work for AI safety, we aren’t anywhere near eliminating diseases, we haven’t figured out how to stop cruelty in factory farming, much less replace meat, and the world isn’t yet rich enough that it’s possible to meet everyone’s basic needs without political debates.[2] We need to explore, not just exploit—and things like the Open Philanthropy cause exploration contest are great for finding more immediately exploitable opportunities—but I’d be even happier if it were an ongoing prize for suggestions that lead to funding something. We can and should be working on picking all the low-hanging fruit we find, and looking for more. That’s not maximizing, but it’s improving[3].

What could success of non-maximizing EA look like?

Success, in my view, would make EA look like exercise, or retirement savings. No one seriously questions these as important parts of life for people in rich western countries—and there are clear efforts to encourage more people to do both. But very few people try to maximize either, and that’s mostly good.

What would Effective Altruism as exercise look like? We would still need the equivalent of gyms and parks to encourage people to donate 10% of their income, or to think about their career impact. But people could choose whether to just work on their cardio and give 1%, or to be really into exercising and have debates with friends about the relative importance of animal suffering, mental health, and disease elimination. We would still have sports for those who are doing really impressive things, and people could care, but not feel like they were wasting their lives because they only played pickup basketball twice a week instead of getting into the NBA. The NBA is for maximalists; sports are for everyone.

What would Effective Altruism as retirement saving look like? Serious people would expect everyone to donate to effective causes as the obvious thing to do, there would be something like default donation schemes so people could give more easily, and those donations would be directed to any of several organizations, like GiveWell, that allocate charitable giving. Those organizations would pursue maximizing impact, just as investors pursue maximizing (risk-adjusted) returns—but the community would not.

“Doing Good Better” makes the argument that people often aren’t doing good well at all, with examples like giving money for PlayPumps or to the Make-A-Wish Foundation. We should do better, it says, and pay attention to improving the world. That’s not maximizing; it’s simply moving away from an obvious-in-retrospect failure mode. But this has been transformed into “Doing Good Best,” and as I’ve argued, that is unjustified in theory and, worse, bad in practice.

  1. ^

    I am being somewhat unfair to him here, especially given his disclaimer that “there’s probably some nuance I’m failing to capture,” and I’m mostly arguing against some of that nuance. But given that I am, in fact, trying to argue against parts of his view, and against much of its implicit conclusion, I’m disagreeing with him, despite agreeing with most of what he says.

  2. ^

    And progress studies doesn’t seem to have found interventions that are money-constrained, rather than politically constrained, though hopefully that community identifies some, and/or makes progress on removing political constraints to growth in ways that benefit the world.

  3. ^

    There is a valid theoretical argument that we’d end up in a local maximum by doing this, or would run out of resources to do even better things. I just doubt it’s true in practice at the current moment.