EA Should Rename Itself

tl;dr

EA is misnamed. It should rename itself to something else. Aid Opportunism (AO) is one possibility. This could be discussed publicly on the EA Forum. The challenge of renaming EA would raise a host of issues pertaining to ambiguities surrounding EA. The process of trying to find an accurate name for EA would help resolve those ambiguities.

EA is misnamed

“Effective Altruism” is misnamed. The way in which it is misnamed reveals a deep flaw. EAs should care about this. Whether they will or not depends on that deep flaw. So the flaw refers to itself. That is different from the name “EA,” which does not accurately refer to the thing it names.

“Effective Altruism” is composed of two words, “Effective” and “Altruism.” The first word is aspirational. That is fine. EA may or may not be effective. But we understand that it wants to be.

The second word is the problem. “Altruism” refers to “the belief in or practice of disinterested and selfless concern for the well-being of others.” That is what Google says. Or it refers to “the principle and moral practice of concern for happiness of other human beings or other animals, resulting in a quality of life both material and spiritual.” That is what Wikipedia says. We can ignore the “spiritual” part. That is a distraction. The point is that altruism is about the nature of the motivation.

EAs are sometimes utilitarians. Utilitarians do not care about the nature of the motivation. A utilitarian may be altruistic. Or a utilitarian may be a sociopath, or otherwise not care about people. But sociopaths are not altruists. That is part of what makes someone a sociopath.

So there is a conflict. EAs can be utilitarians. Utilitarians can be sociopaths. Altruists cannot be sociopaths. This conflict reveals that EA is misnamed. But that may not be clear.

The conflict is not about sociopathy and altruism. EAs can be sociopaths and still be called “Altruist,” as in “Effective Altruist,” because “Altruist” in the name could be aspirational. Then they would be sociopaths but aspirationally not sociopaths. This would be like an ineffective person being called “Effective,” as in “Effective Altruism.” There is no problem there. In fact, sociopaths probably should be aspirationally not sociopaths.

The conflict is also not between utilitarianism and altruism. You can be a utilitarian and also an altruist. The first is a moral philosophy. The second is a motivation. A utilitarian may prefer to be an altruist because they believe it will make them maximize utility better. There is no conflict here.

The conflict is also not between utilitarianism and aspirational altruism. You can be utilitarian and do a calculation and conclude that you should be an altruist. You might then engage in loving-kindness meditation and try to become an altruist but not yet have succeeded. In fact, it might be a good idea for utilitarians to engage in loving-kindness meditation.

The conflict, which we have now narrowed down, is between the way EA permits utilitarians who are unrepentant sociopaths and the aspirational altruism encoded in the name “Effective Altruism.” By putting “Altruism” in the name, EA indicates that aspirational altruism is part of EA. This is the same as how aspirational effectiveness is part of EA. If you do not aspire to be effective, you are not an EA. By the same logic, if you do not aspire to be an altruist, you should not be considered an EA. However, aspiring to be an altruist is not part of EA. In EA it is fine to be an unrepentant sociopath so long as you maximize positive impact. This is revealed by the way being an EA is consistent with being a utilitarian who is also a sociopath and who does not aspire to be altruistic.

One might say that being good at causing positive impact requires altruism and not being a sociopath, so actually there is a link. But this view is not popular in EA. At least, utilitarianism is popular in EA and loving-kindness meditation is not. And altruism is not nearly as popular as being effective. Being effective is essential in EA. Why is aspirational effectiveness essential but aspirational altruism not? Because EA is not about altruism. Altruism is about motivation; for that, see the online definitions quoted above. EA is about impact, and for some people, utilitarianism.

So we see that EA is misnamed.

EA can rename itself

It is easy to fix the problem of EA being misnamed. You could just rename it. Yes, it would be annoying. But that is okay; even Facebook renamed itself.

It is also pretty easy to figure out what to rename EA to. If utilitarianism is essential, then EA can be renamed EU, or “Effective Utilitarianism.” It is true “EU” is already taken, by the European Union. That may cause some confusion. But “EA” was also already taken, by Electronic Arts. One could measure the relative size of early EA against Electronic Arts and the current size of EA against the European Union and make a spreadsheet. That might be informative.

EA may not consider utilitarianism to be essential. In that case EA could be renamed EC, or “Effective Consequentialism.” This mostly avoids the problem of name collisions. (The first Google hit for “EC” is “European Commission.” That is probably fine, though one could also make a spreadsheet to check.)

I would propose a different name: Aid Opportunism. That is because EAs are opportunists. This is true and illuminating. And they do want to aid others. “Aid” is not tied to motivation; sociopaths can aid others. That might be worrying. But people may think that EA is worrying, and in precisely this way. So perhaps this is not a problem.

Renaming a movement is a big deal. Probably there should be a discussion. This could happen on the EA Forum. Participants could follow the EA Forum norms and discuss the future of the movement. Choosing the name for the movement would be both emblematic of a joint effort and also impactful.

One may think that names do not matter, and that as a result renaming EA would not be an impactful activity. However, names are very impactful. That is why people and companies change them. Meta was already mentioned. But there are other nearby examples.

One may think that the name “EA” is entrenched and that it is therefore not possible to change it. However, many EAs care about things like “path dependence” and how choices now can affect the very long run. Thus it should be possible for EA at least to change its name. If it can, that is a data point and good. If it cannot, that is a different sort of data point.

One may think that now is a particularly bad time, because EA just became popular. In fact that makes it a particularly good time. Waiting risks entrenchment. Changing now displays EA’s commitment to accuracy in communication, especially while people are watching. It would be a good advertisement. “We are not locked into the past or controlled by external perceptions. We care about accuracy in messaging, especially about the movement.” What a great move. At least, it should be considered.

The actual problem with renaming is that many people will not want to rename the movement. That is because “EA” is good branding. It combines caring and helping, the heart and the head. Any discussion about whether to rename EA will need to grapple with the question of whether EA should be named in a way that is accurate or a way that gives a positive initial impression. This is a hard question, whether one is a utilitarian or not.

An attempt by EA to rename itself will raise many central issues

I said that EA is misnamed and that this would reveal a deep flaw. We are not there yet. But we will see the deep flaw soon.

Just imagine that EA tried to rename itself. This would force a reckoning on many points.

  • Who would make the choice? EA is a movement. But it is also leader-heavy. Would the people choose? Or would it be the leaders? Could Will MacAskill, Holden Karnofsky, and SBF just rename it? What if they like one name and “regular EAs” like another? Movements gain legitimacy via popular support. They are “bottom-up.” Is that what EA is doing? One could imagine the leaders suggesting the possibility of renaming, the people discussing and voting, and the leaders then choosing the preferred option. Or something similar. That could be ideal. Is that what will happen?

  • How much does EA care about accuracy? EA is utilitarian. Utilitarians are not known for requiring accuracy. Of course they may do so when useful. But we all know that is not the point. Yet EA began as charity assessment. Charity assessment requires accuracy. This is a tension. Many EAs prefer the current name because it is good branding. But that comes at the cost of accuracy. How these values should be weighed is unclear.

  • How much should be discussed in public? Discussing how much one should value accuracy is a tough thing to do. It makes you look like you are tricky. But maybe EA is tricky. That would make sense, if it is utilitarian. Maybe utilitarian calculations reveal that one should not be tricky. But that is not universally agreed upon. Discussion thus would involve discussing whether to be tricky. That is a bad look. That means the problem self-applies. Can you discuss in public how much to discuss in public? But if you discuss in private then who is involved? Is it Will, Holden, and SBF? Or should Nick Beckstead and Owen Cotton-Barratt also be there?

  • What is EA anyway? These issues would be no problem if it were clear what EA is. But it is not clear. It is not clear whether EA is bottom-up or top-down or in what ways. It is not clear how much EA cares about accuracy. It is not clear how much EA is committed to open discussion or public transparency. This is because it is not clear whether EA is a bottom-up charity assessment movement that cares about accuracy and public accountability for charities or a top-down utility maximizing movement that wants to build AI.

  • Is EA one thing? It is possible EA is not one thing. It may be a bottom-up charity assessment movement. It may be a top-down utility maximizing movement. It may be neither of these. It may be both. It may be one thing as a funnel for another. But if that is true can that be discussed? Some people may propose that EA be renamed CA2AIF, or “Charity Assessment To AI Funnel.” This would make it clear how EA is one thing on the outside and another on the inside. But is that true? And can that be discussed? Perhaps it could be discussed on the EA Forum after being suggested by the movement leaders. And then if people are upset that EA is misleading or is possibly misleading or is thinking about being misleading then that can be discussed too. On the EA Forum.

People will think that renaming EA would be too much of a headache, or that it is not the sort of thing we could reach agreement on. But then “EA” will get locked in as a name. Later generations may think that it is a headache to undo the choices of the past or that they will not be able to reach agreement. But then path dependency wins. We don’t want that. We want a Long Reflection. Maybe we could put off renaming EA until later, when it is more convenient. But that sounds like rationalization. Perhaps there could be a meta discussion. Should we discuss whether we should discuss renaming EA? Of course, that will raise all of the issues above. So it is not easier. Or at least not much.

The challenges of renaming point to a deep problem

The above issues are similar. They all involve an ambiguity. Is EA top-down or bottom-up? It is ambiguous. Is EA truth-focused or effect-focused? Also ambiguous. Is EA a public phenomenon with public accountability or a private initiative with private decision-making? Ambiguous again.

These are ambiguities. But they are not as important as the ambiguity around “Altruism.” We started talking about the name. The name reveals the flaw. Is EA full of caring people who want to do the most good? Or is it full of rational utility maximizers who just care about impact? Maybe it is both. But the way it is both is not an average or a pie chart.

People have gotten at this with the idea of the motte and bailey. But there is a simpler term, and it is “ambiguity.” The ambiguities allow different interpretations. These interpretations can then be exploited. An ambiguously top-down movement can present itself as bottom-up. An ambiguously effect-focused group can present itself as truth-focused. An ambiguously private initiative can present itself as a public phenomenon. And most importantly, an ambiguously impact-focused idea can present itself as arising from moral motivation (“altruism”).

The problem is not the ambiguity per se. Ambiguity can be resolved. For instance, even “ambiguously top-down” is ambiguous. In this context, however, I can resolve the ambiguity by saying that by “ambiguously X” I mean “X but interpretable by others as not-X.”

The problem is also not that the ambiguity is plausibly being exploited. Yes, top-down elements may benefit from a bottom-up presentation and impact-focused individuals with empty hearts may benefit from the brand of “altruism.” And that may be a problem. But this exploitation may serve greater impact. And it is unclear whether that is a good thing.

The problem is rather the fact that the utility of the ambiguity to some makes it hard for anyone to resolve the ambiguity or talk about resolving the ambiguity. So the problem self-applies, which is why it is no help to go meta.

The foregoing is abstract. But we can be concrete. Consider the idea of renaming the movement. The current name is ambiguous. That ambiguity can be exploited. If someone likes having a warm-fuzzy charity-focused exterior and a cold-hard-math X-risk-focused interior, they may prefer that the ambiguity remain, so that it may continue to be exploited. Of course, there is a question about whether they are right. But even the question of whether this should be discussed publicly depends on the answer to important questions that would be part of the discussion. They may prefer that whether they are right not be discussed publicly, because they believe they are right and that public discussion will make it harder to act on that basis and exploit the ambiguity.

To understand this, just think about what will happen if EAs discuss publicly how much it is acceptable to mislead others as part of the cause. The answer is that it will make EA look bad. But how bad should EA be willing to look, in service of truth and transparency? How should such tradeoffs be made?

More simply, there is a deep problem with EA: the choices it makes may not be able to be subjected to reflection in accordance with the standards it promulgates, and whether that is itself a problem depends on reflection that EA may not be able to engage in, in accordance with those same standards. There is thus a gulf between those who believe that EA should generally make tradeoffs in favor of accuracy and public discussion and those who believe EA should engage in brand and perception management, even at the cost of misleading people, to maximize its potential for impact.

The problem does not admit of an easy solution

The problem described above is important. Because it is important, it may require a name. Names should be fitting. In this case, we might want to call it the Self-Applying, Hard-to-Discuss, Hard-to-Resolve Issues problem, or SHHI. This is a good name because it is similar to “Shh!” and similar to what you get when someone cuts the “T” off the end of a particular expletive. Both are pertinent.

However, the name SHHI is unwieldy. We might then want a shorter, snappier name, one that is more memorable but also not misleading. I propose then that we call it the “Meta” problem. This name will serve for the brief time in which the problem will be discussed.

Some will maintain that there is no Meta problem, because if there is a problem, we can simply specify to people what EA is without changing the name. On this proposal we would say, “EA means ‘Effective Altruism,’ and by ‘Altruism’ we mean utility maximization, not anything having to do with ‘being a good person’ or ‘caring about others’ in a normal sense.” This does not work, however. In private contexts, or with technical audiences, counterintuitive definitions can work. This is how I resolved the ambiguity around the use of “ambiguity” above. That works for this essay. It would not work for the general public. The general public will not pay attention to clever definitions. One may not like that. But it is true.

Some will maintain that there is no Meta problem because the ambiguity described is not being exploited. Whether this is the case may be difficult to establish in an essay like this. But it is consonant with my experience. In any event it can be discussed. Maybe the problem will turn out to not be that bad. But I do not think so.

One may maintain that there is no Meta problem because the problem is very abstract, and because even the issue with the name of the movement is phrased in an abstract and abstruse way and depends on careful analysis of uncommon words like “altruism.” This raises the question of whether EA is philosophical. Philosophers like to look at the specific meanings of words and make subtle distinctions. We might add to the list of relevant ambiguities whether EA is philosophical in reality or whether that is just part of its brand.

Once it is acknowledged that there are key ambiguities, that these ambiguities are being exploited, that the question of whether that exploitation is good is challenging and depends on answers to questions that are difficult to discuss, etc. etc., it will be recognized that the Meta problem is real. For most people, this will come from their experience in EA. Many problems are downstream of the Meta problem. One might imagine what it would look like if it were solved, and if the ambiguities described above were cleared up.

Of course, the Meta problem is not easy to solve. But this should not be surprising. Many people have been trying to solve problems in EA and with EA for years. So the problems that remain are likely to be difficult. But one may take heart. Great things do not falter on easy problems. EA may or may not be great. But people believe that it is, or that it may be.

A proposal and alternative

One way to solve a “meta” problem is to solve a very concrete problem whose solution requires solving the problems that constitute the “meta” problem. That applies in this case. If one solves a concrete problem whose solution requires solving the problems that constitute the Meta problem, one will have solved the Meta problem.

The concrete problem I have proposed solving is that EA is misnamed. To solve this, EA needs to pick a more fitting name. I have begun the discussion above by describing alternatives. I have mentioned Effective Utilitarianism (EU), Effective Consequentialism (EC), and Aid Opportunism (AO).

A full treatment would likely require input from many people. In fact, from the people who constitute the movement. That is because the name should reflect who the people are. People’s identities are tied up in EA. A new name should reflect the reality of those identities.

In fact, there are many considerations that factor into a name. One would like it to be short and catchy. But how short or catchy a name should be depends on how thoroughly it is supposed to penetrate public consciousness. Shorter and catchier names suit things that are meant to become widely known; clunky or technical names suit things that are not. (Compare: “the SHHI problem” vs. “the Meta problem.”)

One would also like a name to be accurate. Here there are many questions to be answered about EA. Is EA consequentialist? Is EA utilitarian? I have argued that EA is not “altruist.” The name should capture the essence of the thing. At least it should in this case. That is the point of accurate communication.

There are other considerations as well. One would like to avoid name collisions (vs. EU). One would like a name that is not too hard to say, with syllable count mattering (vs. EU and EC). One would also like a name that does not rhyme with or evoke untoward subjects. Of course, it is difficult to balance all of these features, and in some cases tradeoffs must be made.

Now is a good opportunity for me to plug my own preferred option, namely Aid Opportunism.

“Aid Opportunism” as a name is short (two words, six syllables, vs. “Effective Altruism” at two words, seven syllables). It is a bit esoteric or weird, which I believe matches the reality of present EA. It handles the “altruism” issue by replacing it with “opportunism”; as noted earlier, I believe this better matches the actual mentality of EAs: people looking for opportunities to help. It also does not encode either utilitarianism or consequentialism into the DNA of the movement, which I believe would be an improvement, though it does seem to indicate a preference for aiding existing entities rather than for populating the universe with new entities. So it may be a non-starter on that ground alone.

Most importantly, AO sounds unnatural. More natural to my ears is “Help Opportunism” (HO) or “Benefit Opportunism” (BO). But these both have untoward acronyms. Yesterday when I was writing this I thought to advance “Help Opportunism” rather than “Aid Opportunism,” proposing that perhaps people calling EAs “ho’s” might not be too demoralizing. Terms are sometimes reclaimed, and so one could imagine EAs proudly calling themselves “ho’s,” and perhaps that would be okay. But this seems less plausible than simply going with the less natural-sounding AO.

The obvious alternative to any specific proposal is that there be a broad-based public discussion of the topic of what the EA movement’s name should be. This could be done on the EA Forum. It would in some sense answer the Meta question by tilting in favor of accuracy and transparency. But that is a reflection of my values. Alternatively, EA movement leaders could convene a Leaders Meeting and discuss the issue there.

EA was named once. It can be named again.

Concluding summary

EA has a major problem. I have called this the “Meta” problem. It is hard to solve. But there is a way to solve it, by solving a particular concrete problem whose solution solves the Meta problem.

There is a second problem. This is that EA is misnamed. This can be solved in various ways. But solving it, I propose, will solve the Meta problem by forcing a reckoning with all of the problematic ambiguities discussed above.

Notes

[1] EA Criticism and Red Teaming contest

This essay was written for the EA Criticism and Red Teaming contest. Its purpose is to cause readers to change their mind about something important, namely whether EA should be renamed. It is my estimation that very few EAs have been actively thinking about or considering whether EA should be renamed. Also, whether EA should be renamed is important.

The piece is critical in that it takes a critical or questioning stance towards some aspect of EA practice, in particular, what EAs call their movement. It also by extension pertains to practices of exploiting ambiguities, which may or may not be engaged in by some EAs.

The piece discusses important issues, including key unclarities about leadership (“Who would make the choice?”), values (“How much does EA care about accuracy?”), relation to the public (“How much should be discussed in public?”), nature (“What is EA anyway?”), and composition (“Is EA one thing?”). Also, whether the movement should change its name.

The piece is constructive and action-relevant. There is a specific course of action recommended, which is to change the name of the movement. I have suggested AO, but there are other possibilities, such as HO or EC or EU. A change of name may be preceded by a discussion, which could take place on the EA Forum. This course of action is also realistic, because EA has the ability to change its name, and if it does not, then there are more serious concerns about path dependence which must be raised.

The piece is somewhat transparent or legible with respect to epistemic process. I have added notes on that under [2] below. I could add more information if that was valuable.

The piece is, I believe, aware of context in that it does not neglect to respond to any previous work on the topic. Or at least if there is previous work I am not aware of it. But in that case the piece would not be aware. So maybe it is aware or not. I think it is aware. [ADDED, 11th Sep 2022: Though see note [3].]

The piece is novel in that it proposes an original course of action (renaming the movement), an original argument for that course of action being useful in an object-level way (that EA is misnamed and names are important), an original argument for that course of action being useful in a meta-level way (that EA has a Meta problem and that this course of action solves the Meta problem), and original considerations and proposals (e.g., the relation of sociopathy to EA, the suggestion of EU, EC, AO, or HO as new names for the movement).

The piece is focused in that it deals with a small number of objects. Some of the piece focuses on the name of the movement. That part is very focused. Some part deals with what I called the Meta problem. That part is also focused. Several questions are listed, but these are all part of the same problem and serve as illustrations. The same point could have been made more concisely, but it seemed important to make the point clear.

The piece is clear in writing, though the prose is regrettably choppy. It does not punch down in any discernible sense. It is aware of context, I think. It has a scout mindset, at least in that I am happy to update in response to evidence. For more on that see [2] below. It does not have personal attacks, unless listing plausible important people in the movement counts as an attack, which it should not. It is not a diatribe, which is “a forceful and bitter verbal attack against someone or something.” I am also a subject-matter expert who does not typically associate with EA and this essay hopefully shares insights that the essay contest judges have not heard.

I did not read the guides to red-teaming or criticism. I just wrote this essay. It is also not in any of the suggested forms. It is just an explanation of some things wrong with EA and a constructive set of suggestions for how to fix them, with the main suggestion being renaming the movement.

[2] Reasoning transparency and legibility

I am 90% sure that EA is misnamed. I am 50% sure that EA should change its name. I am 70% sure that if EA should be renamed, it should be through a public process. I feel like I am 15% sure that if EA should be renamed, it should be renamed AO, though in reality I am much less sure since there are so many possible names. I could list probabilities for other relevant beliefs, if that would be helpful.

The primary epistemic basis for my claims is the set of arguments given above. In some cases, the arguments are abstract, such as the Unrepentant Sociopathic Utilitarian argument in the first section. In other cases, the arguments rely on my own observations of dynamics within the EA movement. However these may be incorrect or out of date. In most cases, I am relying on EAs to consider either the arguments given in the text or their own experiences with the movement to see whether the claims I am making are correct.

[3] Note on “awareness”

I later noticed that there was another essay that proposed changing EA’s name, also because of issues with “altruism.” I did not notice this before. In that sense this essay was not “aware.” See note [1]. I think that is okay, since the other essay was posted just recently. But perhaps not. The essay contest judges will decide.

[4] Upvotes

I have noticed that at the time of writing this note, the linked essay has 164 points (from 96 votes) and 35 comments while this essay has 0 points (from 6 votes) and 0 comments.

[5] Edit history

EDIT, 3rd Sep 2022: Typo fixed.

EDIT, 11th Sep 2022: Note in [1] added. Notes [3], [4] and [5] added.