I personally would feel excited about rebranding “effective altruism” to a less ideological and more ideas-oriented brand (e.g., “global priorities community”, or simply “priorities community”), but I realize that others probably wouldn’t agree with me on this, it would be a costly change, and it may not even be feasible anymore to make the change at this point. OTOH, given that the community might grow much bigger than it currently is, it’s perhaps worth making the change now? I’d love to be proven wrong, of course.
This sounds very right to me.
Another way of putting this argument is that “global priorities (GP)” community is both more likable and more appropriate than “effective altruism (EA)” community. More likable because it’s less self-congratulatory, arrogant, identity-oriented, and ideologically intense.
More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action, and ideas rather than individual people or their virtues. I’d also say, more controversially, that when introducing EA ideas, I would be more likely to ask the question: “how ought one to decide what to work on?”, or “what are the big problems of our time?” rather than “how much ought one to give?” or “what is the best way to solve problem X?” Moreover, I’d more likely bring up Parfit’s catastrophic risks thought experiment, than Singer’s shallow pond. A more appropriate name could help reduce bait-and-switch dynamics, and help with recruiting people more suited to the jobs that we need done.
If you have a name that’s much more likable and somewhat more appropriate, then you’re in a much stronger position when introducing the ideas to new people, whether they are highly susceptible to them or less so. So I imagine introducing these ideas as “GP” to a parent, an acquaintance, a donor, or an adjacent student group would be less of an uphill battle than “EA” in almost all cases.
Apart from likability and appropriateness, the other five of Neumeier’s naming criteria are:
Distinctiveness. EA wins.
Brevity. GP wins. It’s 16 letters rather than 17, and 6 syllables rather than 7.
Easy spelling and pronunciation. GP wins. In a word frequency corpus, “Global” and “Priorities” feature 93M and 11M times, compared to “Effective” (75M) and “Altruism” (0.4M). Relatedly, “effective altruism” is annoying enough to say that people tend to abbreviate it to “EA”, which is somewhat opaque and exclusionary.
Extendability. GP wins. It’s more natural to use GP than EA to describe non-agents e.g. GP research vs EA research, and “policy prioritisation” is a better extension than “effective policy”, because we’re more about doing the important thing than just doing something well.
Protectability. EA wins, I guess, although note that “global priorities” already leads me exclusively to organisations in the EA community, so probably GP is protectable enough.
Overall, GP looks like a big upgrade. Another thing to keep in mind is that it may be more of an upgrade than it seems based on discussions within the existing community, because it consists of only those who were not repelled by the current “EA” name.
Concretely, what would this mean? Well… instead of EA Global, EA Forum, EA Handbook, EA Funds, EA Wiki, you would probably have GP Summit, GP Forum, (G)P Handbook, (G)P Funds, GP Wiki etc. Obviously, there are some switching costs in regard to the effort of renaming, and of name recognition, but as an originator of two of these things, the names themselves seem like improvements to me—it seems much more useful to go to a summit, or read resources about global priorities, rather than one focused on altruism in abstract. Orgs like OpenPhil/LongView/80k wouldn’t have to change their names at all.
Moreover, while changing the name to GP would break the names of some orgs, it wouldn’t always do that. In fact, the Global Priorities Institute was initially going to be the EA Institute, but the name had to be switched to sound more academically respectable. If the community were renamed the Global Priorities Community, then GPI would get to be named after the community it originated from and be academically respectable at the same time, which would be super-awesome. The fact that prioritisation arises more frequently in EA org names than any phrase except “EA” itself might be telling us something important. Consider: “Rethink Priorities”, “Global Priorities Project”, “Legal Priorities Project”, “Global Priorities Institute”, “Priority Wiki”, “Cause Prioritisation Wiki”.
Another possible disadvantage would be if it made it harder for us to attract our core audience. But to be honest, I think that the people who are super-excited about utilitarianism and rationality are pretty likely to find us anyway, and that having a slightly larger and more respectable-looking community would help with that in some ways anyway.
Finally, renaming can be an opportunity for re-centering the brand and strategy overall. How exactly we might refocus could be controversial, but it would be a valuable opportunity.
So overall, I’d be really excited about a name change!
The current discussion in the comments seems quite centered on “effective altruism vs. global priorities”. I just wanted to highlight that I spent, like, 3 minutes in total thinking about alternative naming options, and feel pretty confident that there are probably quite a few options that work better than “global priorities”. In fact, when renaming CLR, we only came up with the new name after brainstorming many options. So I would really like us to generate a list of >10 great alternatives (i.e. actually viable alternatives) before starting to compare them.
Off the top of my head, I think how we[1] should proceed is something like:
Generate a long list of possible labels
Generate a set of goals we have / criteria for evaluating the labels
Generate a set of broader approaches we could take, such as having different labels that we use for different audiences, or different labels for different segments of the community
Then evaluate the labels and approaches (or combinations thereof) against the goals / criteria we came up with
I think the first three actions can/should be done roughly in parallel, and that the fourth should mostly wait till we’ve done the first three. Or we might iterate through “first three actions, then fourth action, then first three actions again …” a few times.
And I’d say this is best done through one or more well-run surveys, as you suggest. Maybe there could first be surveys that ask EAs to generate ideas for labels, goals/criteria, and broader approaches, then ask them to rate given ideas and approaches against given goals/criteria (or maybe that should be split into a followup survey). And then there could be surveys of non-EAs that just skip to that last step (since I imagine it’d be hard for them to come up with useful ideas without context first).
I think a name change might be good, but am not very excited about the “Global Priorities” name. I expect it would attract mostly people interested in seeking power and “having lots of influence” and I would generally expect a community with that name to be very focused on achieving political aims, which I think would be quite catastrophic for the community.
I actually considered this specific name in 2015 while I was working at CEA US as a potential alternative name for the community, but we decided against it at the time for reasons in this space (and because changing names seems hard).
While I’m not sure we’re using terms like “political” and “power” in the same way, this worry makes a lot of sense to me.
However, I think there is an opposite failure mode: mistakenly believing that because of one’s noble goals and attitudes one is immune to the vices of power, and can safely ignore the art of how to navigate a world that contains conflicting interests.
A key assumption from my perspective is that political and power dynamics aren’t something one can just opt out of. There is a reason why thinkers from Plato through Machiavelli to Carl Schmitt have insisted that politics is a separate domain that merits special attention (and I say this as someone who is not particularly sympathetic to any of these three on the object level). [ETA: Actually I’m not sure if Plato says that, and I’m confused why I included him originally. In a sense he may suggest the opposite view, since he sometimes compares the state to the individual.]
Internally, community members with influence over more financial or social capital have power over those whose projects depend on such capital. There certainly are different views on how this capital is best allocated, and at least for practical purposes I don’t think these are purely empirical disagreements; they also involve ‘brute differences in interests’.
Externally, EAs have power over beneficiaries when they choose to help some but not others. And a lot of EA projects are relevant to the interests of EA-external actors that form a complex network of partly different and partly aligned interests and different amounts of power over each other. Perhaps most drastically, a lot of EA thought around AI risk is about how to best influence how essentially the whole world will be reshaped (if not an outright plan for how to essentially take over the world).
Therefore, I think we will need to deal with ‘politics’ anyway, and we will attract people who are motivated by seeking power anyway. Non-EA political structures and practice contain a lot of accumulated wisdom on how to navigate conflicting interests while limiting damage from negative-sum interactions, on how to keep the power of individual actors in check, and on how to shape incentives in such a way that power-seeking individuals make prosocial contributions in their pursuit of power. (E.g. my prior is that any head of government in a democracy is at least partly motivated by pursuing power.)
To be clear, I think there are significant problems with these non-EA practices. (Perhaps most notably negative x-risk externalities from international competition.) And if EA can contribute technological or other innovations that help with reducing these problems, I’m all for it.
Yet overall I feel like I more often see EAs make the mistake of naively thinking they can ignore their externally imposed entanglement in political and power dynamics, and that there is nothing to be learned from established ways of reining in and shaping these dynamics (perhaps because they view established practice and institutions largely as a morass of corruption and incompetence one better steers clear of). E.g. some significant problems I’ve seen at EA orgs could have been avoided by sticking more closely to standard advice, such as having a functional board that provides accountability to org leadership.
My best guess is that, on the margin, it would be good to attract more people with a more common-sense perspective on politics and power-seeking, as opposed to people who lack the ability or willingness to understand how power operates in the world and how best to navigate this. If rebranding to “Global Priorities” would have that effect (which I think I’m less confident in than you are), then I’d count that as a reason for rebranding (though I doubt it would be among the top 5 most important pro or con reasons).
I’m noticing I don’t fully understand the way in which you think “Global Priorities” would attract power-seekers, or what you mean by that. Like, I have a vague sense that you’re probably right, but I don’t see the direct connection yet. Would be very interested in more elaboration on this.
I mean, I just imagine what kind of person would be interested, and it would mostly be the kind of person who is ambitious, though not necessarily competent, and would seek out whatever opportunities or clubs there are that are associated with the biggest influence over the world, or sound the highest status, have the most prestige, or sound like would be filled with the most powerful people. I have met many of those people, and a large fraction of high-status opportunities that don’t also strongly select for merit seem filled with them.
Currently both EA and Rationality are weird in a way that is not immediately interesting to people who follow that algorithm, which strikes me as quite good. In universities, when I’ve gone to things that sounded like “Global Priorities” seminars, I mostly met lots of people with political science degrees or MBAs, really focusing on how they could acquire more power, with the whole conversation being very status-oriented.
The Defense Professor’s fingers idly spun the button, turning it over and over. “Then again, only a very few folk ever do anything interesting with their lives. What does it matter to you if they are mostly witches or mostly wizards, so long as you are not among them? And I suspect you will not be among them, Miss Davis; for although you are ambitious, you have no ambition.”
“That’s not true!” said Tracey indignantly. “And what’s it mean?”
Professor Quirrell straightened from where he had been leaning against the wall. “You were Sorted into Slytherin, Miss Davis, and I expect that you will grasp at any opportunity for advancement which falls into your hands. But there is no great ambition that you are driven to accomplish, and you will not make your opportunities. At best you will grasp your way upward into Minister of Magic, or some other high position of unimportance, never breaking the bounds of your existence.”
—HPMOR, Chapter 70, Self-Actualization (part 5)
Added: The following is DEFINITELY NOT a strong argument, but just kind of an associative point. I think that Voldemort (both the real one from J.K. Rowling and also the one in HPMOR) would be much more likely to decide that he and his Death Eaters should have “Global Priorities” meetings than “Effective Altruist” meetings. (“We’re too focused on taking over the British Ministry of Magic; we need to also focus on our Global Priorities.”) In that way I think the former phrase has a more general connotation of “taking power and changing the world” in a way the latter does not.
I think this is a good point. That said, I imagine it’s quite hard to really tell.
Empirical data could be really useful to get here. Online experimentation in simple cases, or maybe we could even have some University chapters try out different names and see if we can infer any substantial differences.
1) I’m convinced that a “GP” community would attract somewhat more power-seeking people. But they might be more likely to follow (good) social norms than the current consequentialist crowd. Moreover, we would be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people. And today’s community is older and more BS-resistant with some legibly-trustworthy leaders. But you seem to think there would be a big and harmful net effect—can you explain?
2) assuming that “GP” is too intrinsically political, can you think of any alternatives that have some of its advantages of “GP” without that disadvantage?
we would be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people
I don’t expect a brand change to “Global Priorities” to bring in more action-oriented people. I expect fewer people would donate money themselves, for instance, they would see it as cute but obviously not having any “global” impact, and therefore below them.
(I think it was my inner Quirrell / inner cynic that wrote some of this comment, but I stand by it as honestly describing a real effect that I anticipate.)
But we would also be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people
I don’t understand this. We would be trending towards seeking more power, which would further attract power-seekers. We have already substantially gone down this path. You might have different models of what attracts manipulative people. My model is doing visibly power-seeking and high-status work is one of the most common attractors.
Moreover, today’s community is older and more BS-resistant with some legibly-trustworthy leaders.
I think we have overall become substantially less BS-resistant as we have grown and have drastically increased the surface area of the community, though it depends a bit on the details.
But you seem to think there would be a big and harmful net effect—can you explain?
Yep, I would be up for doing that, but alas won’t have time for it this week. It seemed better to leave a comment voicing my concerns at all than to stay silent, even if I don’t have time to explain them in depth; I do apologize for that.
I don’t understand this. We would be trending towards seeking more power, which would further attract power-seekers. We have already substantially gone down this path. You might have different models of what attracts manipulative people. My model is doing visibly power-seeking and high-status work is one of the most common attractors.
I’m concerned about people seeking power in order to mistreat, mislead, or manipulate others (cult-like stuff), as seems more likely in a social community, and less likely in a group of people who share interests in actually doing things in the world. I’m in favour of people gaining influence, all things equal!
Alas, I think that isn’t actually what tends to attract the most competent manipulative people. Random social communities might attract incompetent or average-competence manipulative people, but those are much less of a risk than the competent ones. In general, professional communities, in particular ones aiming for relatively unconditional power, strike me as having a much higher density of manipulative people than random social communities.
I also think when I go into my models here, the term “manipulative” feels somewhat misleading, but it would take me a while longer to explain alternative phrasings.
TBC, this feels like a bit of a straw man of my actual view, which is that power and communality jointly contribute to risks of cultishness and manipulativeness.
I think the “global priorities” label fails to escape several of the problems that Jonas argued the EA brand has. In particular, it sounds arrogant for someone to say that they’re trying to figure out global priorities. If I heard of a global priorities forum or conference, I’d expect it to have pretty strong links with the people actually responsible for implementing global decisions; if it were actually just organised by a bunch of students, then they’d seem pretty self-aggrandizing.
The “priorities” part may also suggest to others that they’re not a priority. I expect “the global priorities movement has decided that X is not a priority” seems just as unpleasant to people pursuing X as “the effective altruism movement has decided that X is not effective”.
Lastly, “effective altruism” to me suggests both figuring out what to do, and then doing it. Whereas “global priorities” only has connotations of the former.
Well, my default opinion is that we should keep things as they are; I don’t find the arguments against “effective altruism” particularly persuasive, and name changes at this scale are pretty costly.
Insofar as people want to keep their identities small, there are already a bunch of other terms they can use—like longtermist, or environmentalist, or animal rights advocate. So it seems like the point of having a term like EA on top of that is to identify a community. And saying “I’m part of the effective altruism community” softens the term a bit.
around half of the participants (including key figures in EA) said that they don’t self-identify as “effective altruists”
This seems like the most important point to think about; relatedly, I remember being surprised when I interned at FHI and learned how many people there don’t identify as effective altruists. It seems indicative of some problem, which seems worth pursuing directly. As a first step, it’d be good to hear more from people who have reservations about identifying as an effective altruist. I’ve just made a top-level question about it, plus an anonymous version—if that describes you, I’d be interested to see your responses!
Great comment. To these points I would also add (or maybe just summarize some of the points you made) that “global priorities” seems to have more empirical/world-focused connotations to me, whereas “effective altruism” sounds a lot more philosophical/ideological to me.
E.g. I agree that “global priorities” suggests questions like “what are the big challenges of our time?”, which I like a lot more than e.g. “how altruistic should we be?”, “is there something like ‘true altruism’?” or whichever other thing “effective altruism” makes people first think of.
Of course, I agree that ultimately the project of doing as much good as we can involves both empirical and philosophical questions. But relative to today, I think we’d be better equipped to execute that project well with a stronger emphasis on empirical and practical questions and less emphasis on abstract philosophy. (Though to be fair to the EA label, the status quo is more due to founder effects rather than due to the name differentially attracting philosophers.)
It seems worth noting that all of those orgs/wikis are focused on producing or collecting research, not on more directly acting on the world. This is of course a key part of EA, but not the whole of it.
In line with that, I think that “global priorities”, “global priorities community”, or similar terms sound like they’re mostly about working out what the global priorities are and less about actually acting on those answers. EA is already often perceived as too research-focused (though I’m not saying I agree with those perceptions myself), so it might be good to avoid things that would exacerbate that.
I like this style of thinking, but I don’t think it pushes in the direction that you suggest. EA entities with “priorities” in the name disproportionately work on surveys and policy, whereas those with “EA” in the name tend to be communal or meta, e.g. EA Forum, EA Global, EA Handbook, and CEA. Groups that act in the world tend to have neither, like GWWC, AMF, OpenAI.
On balance, I think “global priorities” connotes more concreteness and action-orientation than “EA”, which is more virtue- and identity- oriented. If I was wrong on this, it would partly convince me.
I guess I intended my comment above to make three claims:
It is empirically true that those orgs/wikis you noted as having “priorities” in their names are focused on producing or collecting research, not on more directly acting on the world
Separately, to me, “global priorities” does seem to have connotations of working out what the global priorities are and less about actually acting on those answers.
Claim 1 seems to be in line with claim 2.
But I think claim 1 wasn’t the basis for claim 2; I already felt those connotations before you named those orgs, though of course I had already heard of the orgs.
But I don’t see these claims as super important, because:
We can just run a bunch of surveys and see what connotations other people perceive
Action-oriented vs research-oriented is just one of many relevant dimensions
“global priorities” is just one alternative name
I guess I see the small value of my comment as quickly highlighting small reasons to doubt your initial views and therefore additional reasons to gather more options, consider our goals/criteria/desiderata more (I like that your comment lists some general goals for names), and run a bunch of surveys.
I was just reflecting on the term ‘global priorities’. I think to me it sounds like it’s asking “what should the world do”, in contrast to “what should I do”. The former is far mode, the latter is near. I think that staying in near mode while thinking about improving the world is pretty tough. I think when people fail, they end up making recommendations that could only work in principle if everyone coordinated at the same time, and as a result they shape their speech to focus on signaling to achieve these ends, and often walk off a cliff of abstraction. I think when people stay in near mode, they focus not on plans requiring coordination, but on opportunities they can personally achieve. I think that EAs caring very much about whether they actually helped someone with their donation has been one of the healthier epistemic things for the community. Though I do not mean to argue it should be held as a sacred value.
For example, I think the question “what should the global priority be on helping developing countries” is naturally answered by talking broadly about the West helping Africa build a thriving economy, talk about political revolution to remove corruption in governments, talk about what sorts of multi-billion dollar efforts could take place like what the Gates Foundation should do. This is a valuable conversation that has been going on for decades/centuries.
I think the question “what can I personally do to help people in Africa” is more naturally answered by providing cost-effectiveness estimates for marginal thousands of dollars to charities like AMF. This is a valuable conversation that I think has had orders of magnitude less effort put into it outside the EA community. It’s a standard idea in economics that you can reliably get incredibly high returns on small marginal investments, and I think it is these kinds of investments that the EA community has been much more successful at finding, and has managed to exploit to great effect.
“global priorities (GP)” community is… more appropriate than “effective altruism (EA)” community… More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action
Anyway, I was surprised to read you say that, in direct contrast to what I was thinking, and I think how I have often thought of Effective Altruism.
Extendability. GP wins. It’s more natural to use GP than EA to describe non-agents e.g. GP research vs EA research, and “policy prioritisation” is a better extension than “effective policy”, because we’re more about doing the important thing than just doing something well.
But it seems like GP is harder to extend to agents specifically? Currently, I can say “I’m an [EA / effective altruist / aspiring EA]”. That sounds a bit arrogant, but probably less so than saying “I’m a global priority” :P
Obviously that’s not the label we’d use for individuals, but I’m not sure the alternative. Some ideas that seem bad:
Global prioritist
GP (obviously that acronym is already taken, and in any case it’d just expand out to things like “I’m a global priority” or “we’re global priorities”)
Member of the global priorities community (way too long)
(In any case, as Jonas notes, our focus for now should probably be on brainstorming ideas rather than pitting them against each other so far. So this comment may not be very important.)
I kinda think that “I’m an EA/he’s an EA/etc.” is mega-cringey (a bad combo of arrogant + opaque acronym + tribal), and that deprecating it is a feature, rather than a bug.
Though you can just say “I’m interested in / I work on global priorities / I’m in the prioritisation community”, or anything that you would say about the AI safety community, for example.
I kinda think that “I’m an EA/he’s an EA/etc.” is mega-cringey (a bad combo of arrogant + opaque acronym + tribal)
It sounds like you think it’s bad that people have identified their lives with trying to help people as much as they can? Like, people like Julia Wise and Toby Ord shouldn’t have made it part of their life identity to do the most good they can do. They shouldn’t have said “I’m that sort of person” but they should have said “This is one of my interests”.
I also find that a bit cringey. To me, the issue is saying “I have SUCCEEDED at being effective at altruism”, which feels like a high bar and somewhat arrogant to explicitly claim.
By a similar token, one could replace “I’m/He’s an EA” with “I’m/He’s interested in effective altruism”, which would at least somewhat reduce the problems you note.
People usually don’t do this, which I think is because we naturally gravitate towards shorter phrases. I guess this could be seen as a downside of the fact that the current phrase can be conveniently shortened.
But, of course, the ability to shorten also has an upside (saving time and space).
I often say/write and hear/read things like “EAs are often interested in …”, “One mistake some EAs make is...”, etc. This is more common than me referring to myself as an EA, and somewhat less at risk of seeming arrogant (though it still can). I think expanding all such uses of “EAs” to “people interested in global priorities” would be a hassle (though not necessarily net negative).
“I’m interested in global priorities” and “I work on global priorities” also seem kind-of arrogant, bland, and/or weirdly vague to me. Maybe like a parody of vacuous business speak.
Not sure how common this perception would be—we should run a survey.
(Though I feel I should emphasise that I just see these as small reasons to doubt your views, which therefore pushes in favour of gathering more options, considering our goals/criteria/desiderata more, and running a bunch of surveys. My intention isn’t really to definitively argue against “global priorities”.)
ETA: I just saw that Will Bradshaw already said things quite similar to what I said here, but a bit more concisely...
Yeah, I’m much more sympathetic to concerns with “effective altruist” than with “effective altruism”, and it doesn’t seem like GP does any better in that regard – all the solutions you could apply here (“I’m a member of the global priorities community”, “I’m interested in global priorities”) also apply to EA.
Maybe the fact that the short forms are so awkward for GP is part of the idea? Like, EA has this very attractive but somewhat problematic personalised form (“effective altruist”); GP’s personalised forms are all unattractive, so you avoid the problematic attractor?
But it still seems that, if personalised forms are a big part of the concern (which I think they are), this is a good argument in favour of keeping looking. Which was Jonas’s proposal anyway.
This sounds very right to me.
Another way of putting this argument is that “global priorities (GP)” community is both more likable and more appropriate than “effective altruism (EA)” community. More likable because it’s less self-congratulatory, arrogant, identity-oriented, and ideologically intense.
More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action, and ideas rather than individual people or their virtues. I’d also say, more controversially, that when introducing EA ideas, I would be more likely to ask the question: “how ought one to decide what to work on?”, or “what are the big problems of our time?” rather than “how much ought one to give?” or “what is the best way to solve problem X?” Moreover, I’d more likely bring up Parfit’s catastrophic risks thought experiment, than Singer’s shallow pond. A more appropriate name could help reduce bait-and-switch dynamics, and help with recruiting people more suited to the jobs that we need done.
If you have a name that’s much more likable and somewhat more appropriate, then you’re in a much stronger position introducing the ideas to new people, whether they are highly-susceptible to them, or less so. So I imagine introducing these ideas as “GP” to a parent, an acquaintance, a donor, or an adjacent student group, would be less of an uphill battle than “EA” in almost all cases.
Apart from likability and appropriateness, the other five of Neumeir’s naming criteria are:
Distinctiveness. EA wins.
Brevity. GP wins. It’s 16 letters rather than 17, and 6 syllables rather than 7.
Easy spelling and pronunciation. GP wins. In a word frequency corpus, “Global” and “Priorities” feature 93M and 11M times, compared to “Effective” (75M) and “Altruism” (0.4M). Relatedly, “effective altruism” is annoying enough to say that people tend to abbreviate it to “EA”, which is somewhat opaque and exclusionary.
Extendability. GP wins. It’s more natural to use GP than EA to describe non-agents e.g. GP research vs EA research, and “policy prioritisation” is a better extension than “effective policy”, because we’re more about doing the important thing than just doing something well.
Protectability. EA wins, I guess, although note that “global priorities” already leads me exclusively to organisations in the EA community, so probably GP is protectable enough.
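(As a quick sanity check of the letter counts in the brevity criterion above, here’s a throwaway Python snippet; the syllable counts are left as a manual check, since automated syllable counting is unreliable.)

```python
# Count the letters in each candidate name, ignoring the space.
for name in ("global priorities", "effective altruism"):
    letters = len(name.replace(" ", ""))
    print(f"{name}: {letters} letters")
# global priorities: 16 letters
# effective altruism: 17 letters
```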
Overall, GP looks like a big upgrade. Another thing to keep in mind is that it may be more of an upgrade than it seems based on discussions within the existing community, because it consists of only those who were not repelled by the current “EA” name.
Concretely, what would this mean? Well… instead of EA Global, EA Forum, EA Handbook, EA Funds, EA Wiki, you would probably have GP Summit, GP Forum, (G)P Handbook, (G)P Funds, GP Wiki etc. Obviously, there are some switching costs in regard to the effort of renaming, and of name recognition, but as an originator of two of these things, the names themselves seem like improvements to me—it seems much more useful to go to a summit, or read resources, about global priorities, rather than ones focused on altruism in the abstract. Orgs like OpenPhil/LongView/80k wouldn’t have to change their names at all.
Moreover, while changing the name to GP would break the names of some orgs, it wouldn’t always do that. In fact, the Global Priorities Institute was initially going to be the EA Institute, but the name had to be switched to sound more academically respectable. If the community were renamed the Global Priorities Community, then GPI would get to be named after the community it originated from and be academically respectable at the same time, which would be super-awesome. The fact that prioritisation arises more frequently in EA org names than any phrase except “EA” itself might be telling us something important. Consider: “Rethink Priorities”, “Global Priorities Project”, “Legal Priorities Project”, “Global Priorities Institute”, “Priority Wiki”, “Cause Prioritisation Wiki”.
Another possible disadvantage would be if it made it harder for us to attract our core audience. But to be honest, I think that the people who are super-excited about utilitarianism and rationality are pretty likely to find us anyway, and that having a slightly larger and more respectable-looking community would help with that in some ways.
Finally, renaming can be an opportunity for re-centering the brand and strategy overall. How exactly we might refocus could be controversial, but it would be a valuable opportunity.
So overall, I’d be really excited about a name change!
I really liked this comment, thanks!
The current discussion in the comments seems quite centered on “effective altruism vs. global priorities”. I just wanted to highlight that I spent, like, 3 minutes in total thinking about alternative naming options, and feel pretty confident that there are probably quite a few options that work better than “global priorities”. In fact, when renaming CLR, we only came up with the new name after brainstorming many options. So I would really like us to generate a list of >10 great alternatives (i.e. actually viable alternatives) before starting to compare them.
This seems like a really good point.
Off the top of my head, I think how we[1] should proceed is something like:
Generate a long list of possible labels
Generate a set of goals we have / criteria for evaluating the labels
Generate a set of broader approaches we could take, such as having different labels that we use for different audiences, or different labels for different segments of the community, or
Then evaluate the labels and approaches (or combinations thereof) against the goals / criteria we came up with
I think the first three actions can/should be done roughly in parallel, and that the fourth should mostly wait till we’ve done the first three. Or we might iterate through “first three actions, then fourth action, then first three actions again …” a few times.
And I’d say this is best done through one or more well-run surveys, as you suggest. Maybe there could first be surveys that ask EAs to generate ideas for labels, goals/criteria, and broader approaches, then ask them to rate given ideas and approaches against given goals/criteria (or maybe that should be split into a followup survey). And then there could be surveys of non-EAs that just skip to that last step (since I imagine it’d be hard for them to come up with useful ideas without context first).
[1] I’m not sure who the relevant “we” is.
I think a name change might be good, but am not very excited about the “Global Priorities” name. I expect it would attract mostly people interested in seeking power and “having lots of influence” and I would generally expect a community with that name to be very focused on achieving political aims, which I think would be quite catastrophic for the community.
I actually considered this specific name in 2015 while I was working at CEA US as a potential alternative name for the community, but we decided against it at the time for reasons in this space (and because changing names seems hard).
While I’m not sure we’re using terms like “political” and “power” in the same way, as far as I can tell this worry makes a lot of sense to me.
However, I think there is an opposite failure mode: mistakenly believing that because of one’s noble goals and attitudes one is immune to the vices of power, and can safely ignore the art of how to navigate a world that contains conflicting interests.
A key assumption from my perspective is that political and power dynamics aren’t something one can just opt out of. There is a reason why thinkers from Plato through Machiavelli to Carl Schmitt have insisted that politics is a separate domain that merits special attention (and I’m saying this as someone who is not particularly sympathetic to any of these three on the object level). [ETA: Actually I’m not sure if Plato says that, and I’m confused why I included him originally. In a sense he may suggest the opposite view since he sometimes compares the state to the individual.]
Internally, community members with influence over more financial or social capital have power over those whose projects depend on such capital. There certainly are different views with respect to how this capital is best allocated, and at least for practical purposes I don’t think these are purely empirical disagreements and instead involve ‘brute differences in interests’.
Externally, EAs have power over beneficiaries when they choose to help some but not others. And a lot of EA projects are relevant to the interests of EA-external actors that form a complex network of partly different and partly aligned interests and different amounts of power over each other. Perhaps most drastically, a lot of EA thought around AI risk is about how to best influence how essentially the whole world will be reshaped (if not an outright plan for how to essentially take over the world).
Therefore, I think we will need to deal with ‘politics’ anyway, and we will attract people who are motivated by seeking power anyway. Non-EA political structures and practice contain a lot of accumulated wisdom on how to navigate conflicting interests while limiting damage from negative-sum interactions, on how to keep the power of individual actors in check, and on how to shape incentives in such a way that power-seeking individuals make prosocial contributions in their pursuit of power. (E.g. my prior is that any head of government in a democracy is at least partly motivated by pursuing power.)
To be clear, I think there are significant problems with these non-EA practices. (Perhaps most notably negative x-risk externalities from international competition.) And if EA can contribute technological or other innovations that help with reducing these problems, I’m all for it.
Yet overall I feel like I more often see EAs make the mistake of naively thinking they can ignore their externally imposed entanglement in political and power dynamics, and that there is nothing to be learned from established ways of reining in and shaping these dynamics (perhaps because they view established practice and institutions largely as a morass of corruption and incompetence one better steers clear of). E.g. some significant problems I’ve seen at EA orgs could have been avoided by sticking more closely to standard advice, such as having a functional board that provides accountability to org leadership.
My best guess is that, on the margin, it would be good to attract more people with a more common-sense perspective on politics and power-seeking, as opposed to people who lack the ability or willingness to understand how power operates in the world and how best to navigate this. If rebranding to “Global Priorities” would have that effect (which I think I’m less confident in than you), then I’d count that as a reason for rebranding (though I doubt it would be among the top 5 most important pro or con reasons).
I agree that changing names is hard and costly (you can’t do it often), something that definitely should be taken into account.
I’m noticing I don’t fully understand the way in which you think “Global Priorities” would attract power-seekers, or what you mean by that. Like, I have a vague sense that you’re probably right, but I don’t see the direct connection yet. Would be very interested in more elaboration on this.
I mean, I just imagine what kind of person would be interested, and it would mostly be the kind of person who is ambitious, though not necessarily competent, and who would seek out whatever opportunities or clubs are associated with the biggest influence over the world, sound the highest-status, have the most prestige, or sound like they would be filled with the most powerful people. I have met many of those people, and a large fraction of high-status opportunities that don’t also strongly select for merit seem filled with them.
Currently both EA and Rationality are weird in a way that is not immediately interesting to people who follow that algorithm, which strikes me as quite good. In universities, when I’ve gone to things that sounded like “Global Priorities” seminars, I mostly met lots of people with political science degrees or MBAs, really focusing on how they can acquire more power, with the whole conversation being very status-oriented.
Thanks, I find that helpful, and agree that’s a dangerous dynamic, and could be exacerbated by such a name change.
—HPMOR, Chapter 70, Self-Actualization (part 5)
Added: The following is DEFINITELY NOT a strong argument, but just kind of an associative point. I think that Voldemort (both the real one from JK Rowling and also the one in HPMOR) would be much more likely to decide that he and his Death Eaters should have “Global Priorities” meetings than “Effective Altruist” meetings. (“We’re too focused on taking over the British Ministry for Magic; we also need to focus on our Global Priorities.”) In that way I think the former phrase has a more general connotation of “taking power and changing the world” in a way the latter does not.
I think this is a good point. That said, I imagine it’s quite hard to really tell.
Empirical data could be really useful to get here. Online experimentation in simple cases, or maybe we could even have some University chapters try out different names and see if we can infer any substantial differences.
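To sketch what analysing such an experiment might look like (all numbers here are invented, and a real analysis would need a pre-registered outcome measure), here is a simple two-proportion z-test comparing how two candidate names land with survey respondents:

```python
from math import sqrt, erf

def two_proportion_z_test(pos_a, n_a, pos_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    p_pool = (pos_a + pos_b) / (n_a + n_b)  # pooled proportion under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the normal approximation (CDF from erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical pilot: 120/200 respondents react positively to name A, 95/200 to name B.
z, p = two_proportion_z_test(120, 200, 95, 200)
```

With samples of this size, only fairly large differences in response rates would reach significance, which is worth keeping in mind before reading much into small pilots.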
Interesting.
1) I’m convinced that a “GP” community would attract somewhat more power-seeking people. But they might be more likely to follow (good) social norms than the current consequentialist crowd. Moreover, we would be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people. And today’s community is older and more BS-resistant with some legibly-trustworthy leaders. But you seem to think there would be a big and harmful net effect—can you explain?
2) Assuming that “GP” is too intrinsically political, can you think of any alternatives that have some of the advantages of “GP” without that disadvantage?
I don’t expect a brand change to “Global Priorities” to bring in more action-oriented people. I expect fewer people would donate money themselves, for instance, they would see it as cute but obviously not having any “global” impact, and therefore below them.
(I think it was my inner Quirrell / inner cynic that wrote some of this comment, but I stand by it as honestly describing a real effect that I anticipate.)
I don’t understand this. We would be trending towards seeking more power, which would further attract power-seekers. We have already substantially gone down this path. You might have different models of what attracts manipulative people. My model is doing visibly power-seeking and high-status work is one of the most common attractors.
I think we have overall become substantially less BS-resistant as we have grown and have drastically increased the surface area of the community, though it depends a bit on the details.
Yep, I would be up for doing that, but alas won’t have time for it this week. It seemed better to leave a comment voicing my concerns at all, even if I don’t have time to explain them in-depth, though I do apologize for not having the time to explain them in full.
I’m concerned about people seeking power in order to mistreat, mislead, or manipulate others (cult-like stuff), as seems more likely in a social community, and less likely in a group of people who share interests in actually doing things in the world. I’m in favour of people gaining influence, all things equal!
Alas, I think that isn’t actually what tends to attract the most competent manipulative people. Random social communities might attract incompetent or average-competence manipulative people, but those are much less of a risk than the competent ones. In general, professional communities, in particular ones aiming for relatively unconditional power, strike me as having a much higher density of manipulative people than random social communities.
I also think when I go into my models here, the term “manipulative” feels somewhat misleading, but it would take me a while longer to explain alternative phrasings.
TBC, this feels like a bit of a straw man of my actual view, which is that power and communality jointly contribute to risks of cultishness and manipulativeness.
nods My concerns have very little to do with cultishness, so my guess is we are talking about very different concerns here.
I think the “global priorities” label fails to escape several of the problems that Jonas argued the EA brand has. In particular, it sounds arrogant for someone to say that they’re trying to figure out global priorities. If I heard of a global priorities forum or conference, I’d expect it to have pretty strong links with the people actually responsible for implementing global decisions; if it were actually just organised by a bunch of students, then they’d seem pretty self-aggrandizing.
The “priorities” part may also suggest to others that they’re not a priority. I expect “the global priorities movement has decided that X is not a priority” seems just as unpleasant to people pursuing X as “the effective altruism movement has decided that X is not effective”.
Lastly, “effective altruism” to me suggests both figuring out what to do, and then doing it. Whereas “global priorities” only has connotations of the former.
What kinds of names do you think would convey the notion of prioritised action while being less self-aggrandising?
Well, my default opinion is that we should keep things as they are; I don’t find the arguments against “effective altruism” particularly persuasive, and name changes at this scale are pretty costly.
Insofar as people want to keep their identities small, there are already a bunch of other terms they can use—like longtermist, or environmentalist, or animal rights advocate. So it seems like the point of having a term like EA on top of that is to identify a community. And saying “I’m part of the effective altruism community” softens the term a bit.
This seems like the most important point to think about; relatedly, I remember being surprised when I interned at FHI and learned how many people there don’t identify as effective altruists. It seems indicative of some problem, which seems worth pursuing directly. As a first step, it’d be good to hear more from people who have reservations about identifying as an effective altruist. I’ve just made a top-level question about it, plus an anonymous version—if that describes you, I’d be interested to see your responses!
Great comment. To these points I would also add (or maybe just summarize some of the points you made) that “global priorities” seems to have more empirical/world-focused connotations to me, whereas “effective altruism” sounds a lot more philosophical/ideological to me.
E.g. I agree that “global priorities” suggests questions like “what are the big challenges of our time?”, which I like a lot more than e.g. “how altruistic should we be?”, “is there something like ‘true altruism’?” or whichever other thing “effective altruism” makes people first think of.
Of course, I agree that ultimately the project of doing as much good as we can involves both empirical and philosophical questions. But relative to today, I think we’d be better equipped to execute that project well with a stronger emphasis on empirical and practical questions and less emphasis on abstract philosophy. (Though to be fair to the EA label, the status quo is more due to founder effects rather than due to the name differentially attracting philosophers.)
It seems worth noting that all of those orgs/wikis are focused on producing or collecting research, not on more directly acting on the world. This is of course a key part of EA, but not the whole of it.
In line with that, I think that “global priorities”, “global priorities community”, or similar terms sound like they’re mostly about working out what the global priorities are and less about actually acting on those answers. EA is already often perceived as too research-focused (though I’m not saying I agree with those perceptions myself), so it might be good to avoid things that would exacerbate that.
I like this style of thinking, but I don’t think it pushes in the direction that you suggest. EA entities with “priorities” in the name disproportionately work on surveys and policy, whereas those with “EA” in the name tend to be communal or meta, e.g. EA Forum, EA Global, EA Handbook, and CEA. Groups that act in the world tend to have neither, like GWWC, AMF, OpenAI.
On balance, I think “global priorities” connotes more concreteness and action-orientation than “EA”, which is more virtue- and identity-oriented. If I’m wrong about this, that would partly convince me.
I guess I intended my comment above to make three claims:
It is empirically true that those orgs/wikis you noted as having “priorities” in their names are focused on producing or collecting research, not on more directly acting on the world
Separately, to me, “global priorities” does seem to have connotations of working out what the global priorities are and less about actually acting on those answers.
Claim 1 seems to be in line with claim 2.
But I think claim 1 wasn’t the basis for claim 2; I already felt those connotations before you named those orgs, though of course I had already heard of the orgs.
But I don’t see these claims as super important, because:
We can just run a bunch of surveys and see what connotations other people perceive
Action-oriented vs research-oriented is just one of many relevant dimensions
“global priorities” is just one alternative name
I guess I see the small value of my comment as quickly highlighting small reasons to doubt your initial views and therefore additional reasons to gather more options, consider our goals/criteria/desiderata more (I like that your comment lists some general goals for names), and run a bunch of surveys.
OK, what names would we expect to promote action-orientation if “GP” wouldn’t?
I do not know. Let me try generating names for a minute. Sorry. These will be bad.
“Marginal World Improvers”
”Civilizational Engineers”
”Black Swan Farmers”
“Ethical Optimizers”
”Heavy-Tail People”
Okay I will stop now.
A friend’s “names guy” once suggested calling the EA movement “Unfuck the world”...
We can begin here.
EA popsci would be fun!
§1. The past was totally fucked.
§2. Bioweapons are fucked.
§3. AI looks pretty fucked.
§4. Are we fucked?
§5. Unfuck the world!
I will resist the temptation to further expand that list.
“Hello, I’m an Effective Altruist.”
“Hello, I’m a world-unfucker.”
Honestly, I think the second one might be more action-oriented. And less likely to attract status-seekers. Alright, I’m convinced, let’s do it :)
I was just reflecting on the term ‘global priorities’. I think to me it sounds like it’s asking “what should the world do”, in contrast to “what should I do”. The former is far mode, the latter is near. I think that staying near mode while thinking about improving the world is pretty tough. I think when people fail, they end up making recommendations that could only work in principle if everyone coordinates at the same time, and also as a result shape their speech to focus on signaling to achieve these ends, and often walk off a cliff of abstraction. I think when people stay in near mode, they focus on opportunities that do not require coordination, but opportunities they can personally achieve. I think that EAs caring very much about whether they actually helped someone with their donation has been one of the healthier epistemic things for the community. Though I do not mean to argue it should be held as a sacred value.
For example, I think the question “what should the global priority be on helping developing countries” is naturally answered by talking broadly about the West helping Africa build a thriving economy, talk about political revolution to remove corruption in governments, talk about what sorts of multi-billion dollar efforts could take place like what the Gates Foundation should do. This is a valuable conversation that has been going on for decades/centuries.
I think the question “what can I personally do to help people in Africa” is more naturally answered by providing cost-effectiveness estimates for marginal thousands of dollars to charities like AMF. This is a valuable conversation that I think has had orders of magnitude less effort put into it outside the EA community. It’s a standard idea in economics that you can reliably get incredibly high returns on small marginal investments, and I think it is these kinds of investments that the EA community has been much more successful at finding, and has managed to exploit to great effect.
Anyway, I was surprised to read you say that, in direct contrast to what I was thinking, and I think how I have often thought of Effective Altruism.
But it seems like GP is harder to extend to agents specifically? Currently, I can say “I’m an [EA / effective altruist / aspiring EA]”. That sounds a bit arrogant, but probably less so than saying “I’m a global priority” :P
Obviously that’s not the label we’d use for individuals, but I’m not sure the alternative. Some ideas that seem bad:
Global prioritist
GP (obviously that acronym is already taken, and in any case it’d just expand out to things like “I’m a global priority” or “we’re global priorities”)
Member of the global priorities community (way too long)
(In any case, as Jonas notes, our focus for now should probably be on brainstorming ideas rather than pitting them against each other so far. So this comment may not be very important.)
I kinda think that “I’m an EA/he’s an EA/etc” is mega-cringey (a bad combo of arrogant + opaque acronym + tribal), and that deprecating it is a feature, rather than a bug.
Though you can just say “I’m interested in / I work on global priorities / I’m in the prioritisation community”, or anything that you would say about the AI safety community, for example.
It sounds like you think it’s bad that people have identified their lives with trying to help people as much as they can? Like, people like Julia Wise and Toby Ord shouldn’t have made it part of their life identity to do the most good they can do. They shouldn’t have said “I’m that sort of person” but they should have said “This is one of my interests”.
I also find that a bit cringy. To me, the issue is saying “I have SUCCEEDED at being effective at altruism”, which feels like a high bar and somewhat arrogant to explicitly admit to.
But:
By a similar token, one could replace “I’m/He’s an EA” with “I’m/He’s interested in effective altruism”, which would at least somewhat reduce the problems you note.
People usually don’t do this, which I think is because we naturally gravitate towards shorter phrases. I guess this could be seen as a downside of the fact that the current phrase can be conveniently shortened.
But, of course, the ability to shorten also has an upside (saving time and space).
I often say/write and hear/read things like “EAs are often interested in …”, “One mistake some EAs make is...”, etc. This is more common than me referring to myself as an EA, and somewhat less at risk of seeming arrogant (though it still can). I think expanding all such uses of “EAs” to “people interested in global priorities” would be a hassle (though not necessarily net negative).
“I’m interested in global priorities” and “I work on global priorities” also seem kind-of arrogant, bland, and/or weirdly vague to me. Maybe like a parody of vacuous business speak.
Not sure how common this perception would be—we should run a survey.
(Though I feel I should emphasise that I just see these as small reasons to doubt your views, which therefore pushes in favour of gathering more options, considering our goals/criteria/desiderata more, and running a bunch of surveys. My intention isn’t really to definitively argue against “global priorities”.)
ETA: I just saw that Will Bradshaw already said things quite similar to what I said here, but a bit more concisely...
Yeah, I’m much more sympathetic to concerns with “effective altruist” than with “effective altruism”, and it doesn’t seem like GP does any better in that regard – all the solutions you could apply here (“I’m a member of the global priorities community”, “I’m interested in global priorities”) also apply to EA.
Maybe the fact that the short forms are so awkward for GP is part of the idea? Like, EA has this very attractive but somewhat problematic personalised form (“effective altruist”); GP’s personalised forms are all unattractive, so you avoid the problematic attractor?
But it still seems that, if personalised forms are a big part of the concern (which I think they are), this is a good argument in favour of keeping looking. Which was Jonas’s proposal anyway.
(Or, of course, we could cut the arrogance down by just saying “I’m an early-career aspiring global priority.”)