(even though they don’t seem to directly convince people to become EAs)
I want to flag that, in general, convincing people to become EAs, or more precisely, creating cool spaces for more people to get into EA, is something people in the community actually do a lot. I did this myself a few years ago by starting an EA group at my university. I'd guess there are several hundred EA community builders around the world; it's just that they don't generally focus on wealthy individuals specifically.[1]
Do you know why it tends to be difficult to convince people?
A general answer is that the core ideas of EA carry significant inferential distance for most people; that is, there's usually a lot of context you need to explain before someone gets why EAs care so much about anti-malaria bednets, animal suffering, or AI Safety. Some EA conclusions are also very counterintuitive and run against people's previously held beliefs, which adds to the difficulty.[2]
It can be much easier to pitch people on any of these individual cause areas, but this means you trade off generality: maybe you get someone to care about animal suffering, but they end up donating to organizations that are much less impactful than the EA standard.[3]
And going into more speculative territory[4], I think people with a lot of money might be more skeptical of people who want their money, which kinda makes sense. Philanthropists tend to be single-issue donors: they pick something that is meaningful to them (like education, homelessness, or dogs) and then focus their donations heavily on that issue. Persuading them otherwise means not only explaining EA ideas, but also getting them to realize that they should stop doing what they're doing, which is hard.
And would I be able to contact any of these 3 people you know? No worries if not!
Let me see what I can do; I've sent you a message through the forum!
P.S. I want to push back on something I said earlier: "My impression is that if you find a tractable way of doing this consistently, then you probably should."
I should add some nuance: you shouldn't pursue a job just because you think it's impactful a priori. Whether you like the job, and whether you feel you could do it sustainably, matters a lot. EA also shouldn't necessarily determine your entire life; there's a balance to be had. It's also obviously very important to check this possibility against others: you shouldn't just dig into the first impactful job you find.
[1] I do think a lot of people have done this on an ad-hoc basis, though.
[2] See The explanatory obstacle of EA for some concrete examples.
[3] This is not to say there isn't value in this. I think you can often convince people that they should donate to big funds (say, the Animal Welfare Fund), but this tends to be a tougher sell. In some cases (like the Lead Exposure Elimination Project), the EA context might be completely unnecessary for potential funders.
[4] I'm not an expert on this, so please don't take my guesses too seriously!
You're right, there are lots of EA community builders; they just might not be specifically focused on wealthy people. Sorry about that! I've edited my post, if that helps.
Also, thanks for all the links! Inferential Distance is an idea I'd considered before but couldn't put into words until now. One of the main gaps, I think, is that someone has to expand their moral circle before any conversation about altruism can be had.
And now that you mention it, creating cool spaces for people to join EA feels like it could be more effective than convincing specific wealthy individuals (the multiplier effect again?).
I’ll stop asking questions here because you’ve explained lots of great stuff, and I don’t want to wear you out :)
(I’m sure there’s lots of other stuff you’re involved with, being a community builder!)
But if any more ideas occur to you on the spur of the moment, feel free to DM me!