The Folly of “EAs Should”

I’ve seen and heard many discussions about what EAs should do. William MacAskill has ventured a definition of Effective Altruism, and I think it is instructive. Will notes that “Effective altruism consists of two projects, rather than a set of normative claims.” One consequence is that if EA itself makes no normative claims, then any supposition about what ought to happen, derived purely from EA ideas, is invalid. This is a technical point, and one which might seem irrelevant to practical concerns, but I think some of the normative claims that do get made have pernicious consequences.

So I want to discuss why the implication that “Effective Altruism” prescribes specific, clearly preferable options for “Effective Altruists” is often harmful. Will’s careful definition avoids that harm, and I think it should be taken seriously in that regard.

Mistaken Assertions

Claiming something normative under moral uncertainty, i.e. while acknowledging that we may be incorrect, is hard to justify. There are approaches to moral uncertainty that allow a resolution, but to the extent that EAs want to cooperate, I argue that it is useful, regardless of one’s normative goals, to avoid normative statements that exclude some viewpoints. This is not because such statements cannot be justified, but because they can be strategic mistakes. Specifically, we should be wary of making the project exclusive rather than inclusive.

EA is Young, Small, and Weird

EA is very young. Some find this unsurprising (aren’t most radical movements young? aren’t the people most willing to embrace new ideas young?), but I disagree. Many of the most popular movements sweep across age groups. Environmentalism, gay rights, and animal welfare all skewed young at first, but were increasingly adopted by people of all ages. In part, that is because those movements let people embrace them on their own terms. There is no widespread belief among environmentalists that doctors have wasted their careers by saving lives at the retail level rather than saving the world. There is little reason anyone would hesitate to raise the pride flag because they are not doing enough for the movement. But effective altruism is often perceived differently.

To the extent that EAs embrace a single vision (a very limited extent, to be clear), they often exclude those who differ on details, intentionally or not. “Failing” to embrace longtermism, or (“worse”?) disagreeing about impartiality, is enough to start arguments. Is it any wonder that we have so few people with well-established lives and worldviews willing to consider our project, “the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources”? Nothing about the project itself is exclusive; it is the community that creates exclusion. And it would be a shame for people to feel useless and excluded.

Of course, allowing more diversity will help the ideas of effective altruism spread, but it will also reduce the tension that seems to exist around disagreeing with the orthodoxy. People debate whether EA should be large and welcoming or small and weird. But as John Maxwell suggests, large and weird might be a fine compromise. We have seen this before: the LGBT movement, now widely embraced, famously urges people to “let your freak flag fly,” a phrase that dates back to the 60s counterculture. Neither stayed small and weird, and despite each sparking a culture war, each seems, at least in retrospect, to have been very widely embraced. Neither needed to develop a single coherent worldview to get there; no one would claim that LGBT groups agree on most issues. And despite the fragmentation and arguments, the key messages came through to the broader public just fine.

EA is Already Fragmented

It may come as a surprise to readers of the forum, but many of those pushing forward the EA project are involved only for their pet causes. Animal welfare activists gladly take money, logistical help, strategic guidance, and moral support to do things they have long wanted to do. AI safety researchers may or may not embrace the values of EA, but agree it’s a good idea to ensure the world doesn’t end in fire. Longtermists, life-extension activists, and biosecurity researchers also have groups and projects which predate EA, and they are happy to have found fellow travelers. None of this is a bad thing.

Even within “central” EA organizations, there are debates about the relative priority of different goals. Feel free to disagree with Open Philanthropy’s work prioritizing US policy, but it is one of their main cause areas. (A fact that has shocked more than one person I’ve mentioned it to.) Perhaps you think that longtermism is obviously correct, but GiveWell focuses mainly on the short term. We are uncertain, as a community, and conclusions that rest on suppositions about key facts are usually unwarranted, at least without clear caveats about the positions needed to reach them.

Human Variety

Different people have different values, and they also have different skills and abilities. When optimizing almost any goal in a complex system, that variety means the optimal path involves some diversity of approaches. That is, most goals are better served by having a diversity of skills available. (Note that this is a positive claim about reality, not a normative one.)
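To make that intuition concrete, here is a toy sketch (my own illustration, with an assumed diminishing-returns model and made-up numbers, not anything from the EA literature): if each kind of work has diminishing returns, a team that spreads its people across several skills produces more total output than one that stacks everyone on a single skill.

```python
import math

# Toy model: a project needs three kinds of work. Output from each kind has
# diminishing returns in headcount (modeled here as a square root), and total
# impact is the sum across kinds. The tasks and numbers are illustrative only.

TASKS = ["research", "operations", "outreach"]
TEAM_SIZE = 9

def total_output(allocation):
    """Total impact given a dict mapping task -> number of people."""
    return sum(math.sqrt(n) for n in allocation.values())

monoculture = {"research": TEAM_SIZE, "operations": 0, "outreach": 0}
mixed = {task: TEAM_SIZE // len(TASKS) for task in TASKS}

print(f"all researchers: {total_output(monoculture):.2f}")  # sqrt(9)      = 3.00
print(f"mixed team:      {total_output(mixed):.2f}")        # 3 * sqrt(3) ~= 5.20
```

The square root is arbitrary; any model with diminishing returns to each skill gives the same qualitative result, which is the positive claim made above.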

In fact, we find that a diversity of skills is useful for Effective Altruism. Web and graphic designers contribute differently than philosophers and researchers, who contribute differently than operations people, logistics experts for international shipping, financial analysts, and so on. Yes, all of these skills can be purchased on the open market (though some are more expensive than others), but value alignment cannot be, and the movement benefits greatly from having value-aligned organizations, especially as it grows.

Hearing that being a doctor “isn’t EA” is not just unfortunately dismissive, it’s dead wrong. Among EA priorities, doctors have important roles to play in biosecurity, in longevity research, and in working out the logistics of vaccine programs. In a different vein, if I had been involved earlier and followed EA advice, I might have gone for a PhD in economics, which I already knew I would enjoy less than the degree I actually pursued, in public policy. Of course, just as I was graduating, it turned out that EA organizations were getting more interested in policy. That was luck for me, but unsurprising at the group level; of course disparate skills are needed. And a movement that pushes people to acquire a narrow set of skills will, unsurprisingly, end up with a narrow set of skills.

Conclusion

I’m obviously not opposed to every use of the word “should,” and there really are many generally applicable recommendations. I’m not sure how many of them are specific to EAs: all humans should get enough sleep, and it’s usually a good idea for younger people to maximize their career capital and preserve options for the future. 80,000 Hours seems to strike this balance well, but many readers see “recommended career paths” and take it as a far stronger statement than is likely intended.

The narrow vision I often encounter when talking to EAs, and to non-EAs who have interacted with EAs, is that we have the correct answers for others. This is unhelpful. Instead, think of EA mentorship and advice as suggestions for those who want to follow a “priority” career path. At the same time, we should focus more on continuing to build a vision for improving the world, and paths toward it. Alongside that, we have a mutable and evolving program for doing so, one that should (and will) be informed and advanced by anyone interested in being involved.

Acknowledgements: Thank you to Edo Arad for useful feedback.