Yeah, fair question, though I think estimating both the numerator and the denominator is tricky. Your estimate that I know very roughly ~150-250 EAs is probably about right. But I’d be nervous about a conclusion of “this problem only affects 1 in 50, so it’s pretty rare/not a big deal,” both because the 3-5 number is more about specific people I’ve been interacting with a lot recently who directly inspired this post (so there could be plenty more I just know less about), and because there’s also a lot of room for interpretation in how strongly people resonate with different parts of this / how completely they’ve disengaged from the community / etc.
Love the analogy of “f**k you money” to “I respectfully disagree with your worldview social capital” or “I respectfully disagree with your worldview concrete achievements that you cannot ignore”!
Before writing the post, I was maybe thinking of 3-5 people who have experienced different versions of this? And since posting I have heard from at least 3 more (depending on how you count) who have long histories with EA but felt the post resonated with them.
So far the reactions I’ve got suggest that there are quite a lot of people who are more similar to me (still engage somewhat with EA, feel some distance but have a hard time articulating why). That might imply that this group is a larger proportion than the group that totally disengages… but the group that totally disengages wouldn’t see an EA forum post, so I’m not sure :)
A mix! Some things I feel or have felt myself; some paraphrases of things I’ve heard from others; some ~basically made up (based on vibes/memories from conversations); some ~verbatim from people who reviewed the post.
I’m delighted that you went ahead and shared that the tone felt off to you! Thank you. You’re right that I didn’t really run this by any newcomers, so that’s on me.
(By way of explanation, but not excuse: I mostly wrote the piece while thinking of the main audience as being people who were already partway through the disillusionment pipeline—but then towards the end edited in more stuff that was relevant to newcomers, and didn’t adjust who I ran it by to account for that.)
Leaning into EA Disillusionment
I like this! Thanks for sharing it.
Another analogy I’ve been playing around with* is “having an impact isn’t a sprint or a marathon—it’s an endurance hunt.” Things I like about this include:
You’re not competing against other people—you’re trying to succeed at something that (a) is much less structured and (b) may or may not actually be possible
Probably the best strategy isn’t to run a constant, steady pace—depending on how the hunt is going at any given moment, it may suddenly be really valuable to run flat out for a stretch, or it may be fine to maintain a steady jog, walk, or even stop
There isn’t a clear, pre-defined route you should be following—instead, you have to constantly be making the tradeoff between going hard in the direction you think is correct, vs. reorienting yourself and deciding to make a small or large course correction (e.g. based on stopping to read tracks or something)
It’s also not clear how far or how fast you’ll need to go over the course of the hunt, so you need to budget your energy with that uncertainty in mind (which could include consciously making the choice to risk going too hard for a chance at an amazing yield)
If you do go too hard and exhaust or injure yourself, you’re not only hurting your own ability to finish the hunt and participate in future hunts, you’re also likely causing teammates to have to abandon the hunt to help you
I think your analogy about breathing carries over too—just like on a hike, if you’re hunting in a group then no one is helped by you pretending you have more stamina than you do.
Two flaws with this analogy are 1) it’s not the friendliest for vegetarians, lol, and 2) there seems to be some controversy over whether persistence hunting is even a thing? Hiking is much better on both of those points!
*Read: started drafting a post, then let it languish for months
The phrase “hard-core EAs” does more harm than good
Oops, thanks—I’ll delete this.
I am in contact with a couple of other funding sources who would take recommendations from me seriously, but this fund is the place I have most direct control over.
Both Matts are long-time earn-to-givers, so they each make grants/donations from their own earnings as well as working with this fund.
Long-Term Future Fund AMA
This is a great comment. If I were to rewrite this post now, I would make sure to include these.
Also, going back to a conversation with you: if I were to rewrite, I would also try to make it clearer that I’m not trying to give a formal definition of Effective Altruism (which is what it sounds like in the post), just trying to change the feeling or connotations around it, and how we think about it.
This is awesome, Ryan! Well done on working so hard to pull it together, and on actually pulling it off.
I think it’s fair to say that “aspiring” doesn’t quite fit for you. The point of that word being there is to reduce the strength of the claim: you’re focused on being effective, you’re trying hard to be effective, but to say that you are effective is different.
Maybe the slightly poor epistemology matters less than the benefit of the much clearer name… I’m not sure.
You can easily say that Effective Altruism answers a question. The question is, “What should I do with my life?” and the answer is, “As much good as possible (or at least a decent step in that direction).”
I think this is the key part of our disagreement—I don’t think this is the case—and I’ve answered more fully in my comment in reply to Kerry. Would love to hear your thoughts there.
Great comment, thanks Kerry. To your first point:
...it seems to me that EA is answering a question. The question is “what should I do with my life” and the answer is “do the most good with the resources available to me.”
I’m really glad you stated this clearly (and it’s the same idea as in pappubahry’s comment). If this were the core idea of EA, then I agree that this whole post would be incorrect.
Is it the core idea though? None of the introductions I linked to above mention anything about what one “should” do. Certainly there are several EA organisations that are linked to spreading the idea of EA & motivating more people to donate, but that seems to me to be easily explained by:
The ease with which resources can be turned into life-improvements (“ease” referring to convenience, speed, low information barriers) compared to just about any other time in human history.
The stable instrumental goal of trying to spread one’s own values, to make it more likely they are fulfilled.
My impression is not that the organisations in question (which are made up of aspiring effective altruists, or people interested in Effective Altruism, or whatever) see some kind of terminal value in persuading others to dedicate their lives to helping others. Certainly I find the idea of this (persuading others to do good with their resources) being a core motivating philosophy of my life very off-putting.
One of the things I love about EA (or perhaps just my interpretation of EA) is that it’s driven by curiosity and compassion, not moralising.
--
For your second point:
I think what you’ve said actually splits into two things:
a) Should we promote having an EA identity? and b) Should people who have that identity call themselves “effective altruists”?
I think you’re right about a), and about the huge benefits of community, signalling, self-signalling, commitment, etc. that come with making Effective Altruism part of one’s identity.
But I don’t think it necessarily follows that the name “effective altruists” is the best way to refer to oneself, and one of the reasons I wrote this post was to point out the downsides of using that phrase.
I particularly care about the first impressions of people who have the potential to have a large impact on the world—who I expect will generally be more analytical, better informed and more sceptical than the typical person. In my experience organising EA Melbourne, this kind of person is often really put off by a group of people who just get together every few weeks to talk about stuff, and who call themselves both effective and altruistic. They are also put off if people in that group claim (as lots do, initially) that maximising your earnings and donating to global health charities is the best way to improve the world.
I think it’s really important that our memes don’t get stuck on one object-level strategy like that.
(I do wish I could think of another identifier that’s as pithy as “effective altruist” though.)
What do you think?
Effective Altruism is a Question (not an ideology)
Some current things that are trying to push on “differential progress”, if I understand you right:
Political lobbying to promote regulation of biotech & nanotech research, à la CEA’s/FHI’s Global Priorities Project
AI safety research
Animal rights activism (where the emphasis is on values shift/expanding circles)
Does that look right? What else would you add?
(Paul, I think I’ve heard you talk before about trying to improve institutional quality—do you know of anyone you think is doing this well?)
Do you have any thoughts about how to juggle timing when different opportunities will arise at different times? For example, if applying for jobs & university places at the same time, the response times will be very different.
The obvious strategy is to delay the decision as long as possible, but it’s hard to know how to trade off confirmed options that will expire against potential options you haven’t heard from yet.
One EA friend I talked to about this said he tried to do this, then found that when it came down to it he couldn’t bear to let an opportunity slide while waiting for others, so just took the first thing he got.
Hm… thinking in terms of 2 types of claim doesn’t seem like much of an improvement over thinking in terms of 1 type of claim, honestly. I was not at all trying to say “there are some things we’re really sure of and some things we’re not.” Rather, I was trying to point out that EA is associated with a bunch of different ideas; how solid the footing of each idea is varies a lot, but how those ideas are discussed often doesn’t account for that. And by “how solid” I don’t just mean on a 1-dimensional scale from less to more solid—more like, the relevant evidence and arguments and intuition and so on all vary a ton, so it’s not just a matter of dialing up or down the hedging.
A richer framing for describing this that I like a lot is Holden’s “avant-garde effective altruism” (source):
I don’t think it has to be that complicated to work this mindset into how we think and talk about EA in general. E.g. you can start with “There’s reason to believe that different approaches to doing good vary a ton in how much they actually help, so it’s worth spending time and thought on what you’re doing,” then move to “For instance, the massive income gap between countries means that if you’re focusing on reducing poverty, your dollar goes further overseas,” and then from there to “And when people think even more about this, like the EA community has done, there are some more unintuitive conclusions that seem pretty worthy of consideration, for instance...” and then depending on the interaction, there’s space to share ideas in a more contextualized/nuanced way.
That seems like a big improvement over the current default, which seems to be “Hi, we’re the movement of people who figure out how to do the most good, here are the 4 possibilities we’ve come up with, take your pick,” which I agree wouldn’t be improved by “here are the ones that are definitely right, here are the ones we’re not sure about.”