GREAT post! Such a fantastic and thorough explanation of a truly troubling issue! Thank you for this.
We definitely need to distinguish what I call the various “flavors” of EA. And we have many options for how to organize this.
Personally, I’m torn because, on one hand, I want to bring everyone together still under an umbrella movement, with several “branches” within it. However, I agree that, as you note, this situation feels much more like the differences between the rationality community and the EA community: “the two have a lot of overlap but they are not the same thing and there are plenty of people who are in one but not the other. There’s also no good umbrella term that encompasses both.” The EA movement and the extinction risk prevention movement are absolutely different.
And anecdotally, I really want to note that the people who are emphatically in one camp but not the other are very different people. So while I often want to bring them together harmoniously in a centralized community, I’ve honestly noticed that the two groups don’t relate as well and sometimes even argue more than they collaborate. (Again, just anecdotal evidence here — nothing concrete, like data.) It’s kind of like the people who represent these movements don’t exactly speak the same language and don’t exactly always share the same perspectives, values, worldviews, or philosophies. And that’s OK! Haha, it’s actually really necessary I think (and quite beautiful, in its own way).
I love the parallel you’ve drawn between the rationality community and the EA community. It’s the perfect example: people have found their homes in two different forums, different global events, and different organizations. People in each community share views and can reasonably expect others within it to be familiar with the ideas, writings, and terms from within the group. (For example, someone in EA can reasonably expect someone else who claims any relation to the movement/community to know about 80,000 Hours and to have at least skimmed the “key ideas” article. Whereas people in the rationality community would reasonably expect everyone within it to have familiarity with HPMOR. But it’s not necessarily expected that these expectations would carry across communities and between the two movements.)
You’ve also pointed out amazing examples of how the motivations behind people’s involvement vary greatly. And that’s one of the strongest arguments for distinguishing communities; there are distinct subsets of core values. So, again, people don’t always relate to each other across the wide variety of diverse cause areas and philosophies. And that’s OK :)
Let’s give people a proper home — somewhere they feel like they truly belong and aren’t constantly questioning if they belong. Anecdotally I’ve seen so many of my friends struggle to witness the shifts in focus across EA to go towards X risks like AI. My friends can’t relate to it; they feel like it’s a depressing way to view this movement and dedicate their careers. So much so that we often stop identifying with the community/movement depending on how it’s framed and contextualized. (If I were in a social setting where one person [Anne] who knows about EA was explaining it to someone [Kyle] who had never heard of it and that person [Kyle] then asked me if I am “an EA”, then I might vary my response, depending on the presentation that the first person [Anne] provided. If the provided explanation was really heavy handed with an X-risk/AI focus, I’d probably have a hard time explaining that I work for an EA org that is actually completely unrelated… Or I might just say something like “I love the people and the ideas” haha)
I’m extra passionate about this because I have been preparing a forum post called either “flavors of EA” or “branches of EA” that would propose this same set of ideas! But you’ve done such a great job painting a picture of the root issues. I really hope this post and its ideas gain traction. (I’m not gonna stop talking about it until it does haha) Thanks ParthThaya
Thanks for the kind words! Your observation that “people who are emphatically in one camp but not the other are very different people” matches my beliefs here as well. It seems intuitively evident to me that most of the people who want to help the less fortunate aren’t going to be attracted to, and often will be repelled by, a movement that focuses heavily on longtermism. And that most of the people who want to solve big existential problems aren’t going to be interested in EA ideas or concepts (I’ll use Elon Musk and Dominic Cummings as my examples here again).