Note: I have been involved with EA for only 2–3 months, so my ideas may not be accurate.
One approach is to target people involved in social issues who already believe in some of the more popular EA concepts.
Climate Change

Out of all the EA priority areas, climate change is arguably the most popular one among non-EAs/the general population. Quite a few non-EAs work on climate change because they think it's the most pressing problem. They believe in some of the better-known EA concepts, such as:
1. Using the supply/demand concept to choose a social issue (i.e. neglectedness).
2. The idea that climate change needs to be fixed quickly, while other social issues can still be solved later (similar to the idea of existential risk).
"On the whole, more involved groups appear to prioritise Global Poverty and Climate Change less and longtermist causes more." ~ EA 2019 Survey
However, I must point out that Global Poverty is ranked the most popular EA cause area, followed by Climate Change. I suspect this is because many people in the EA movement joined recently and take some time to absorb EA's ideas on cause prioritization.
Similarly, it may be efficient to target people who are involved in nuclear security (they share EA's ideas about existential risk, and the view that sudden catastrophes are more important than catastrophes that build up over time).
Essentially, we are looking for people who are working on a particular social cause for logical reasons. This greatly increases their chance of being a fit with the ideas of EA, since this approach captures both the "effective" and "altruism" aspects of EA.
Hi Prabhat!

First things first: I'm also relatively new to EA (approximately 8 months), and I think it's of great value to take into consideration the ideas of new community members who still have a kind of 'outsider view' on things.
By and large, I agree, and I have actually started working on strategies to target people who are involved in relevant cause areas or who might be more open to EA's concept of expanding the moral circle.
There are a few assumptions on which we can base this strategy:
Communities that have a moral underpinning:
Might be more inclined to be interested in effective altruism in general.
Might be more open to long-term moral arguments, and possibly more easily convinced by them.
Might already have a relatively 'expanded moral circle' (e.g., animal welfare activists, climate change activists). This can make expanding their moral circle easier than with other people.
Attracting people who are already interested in one of EA's cause areas with content that relates to that cause area can help build credibility with them and make them feel more comfortable. This, in turn, can enhance their openness and willingness to read further about other EA causes.
Existing communities enable us to reach a great number of people semi-organically and at low cost.
Having said that, I think we should be careful with popular causes like climate change and animal welfare. The reason is that a considerable share of the people who support these causes do so for reasons incompatible with EA, don't really have reasoning behind their views, or are even aggressive towards people who think differently.
It's completely anecdotal, but yesterday, when I mapped relevant Facebook communities, I noticed that some groups explicitly state that they shame meat-eaters, or are conspiracy-based.
Hi Asaf,

I agree with the points you make. Additionally, I think it is easier to find EAs among altruism-related communities (e.g. climate change, factory farming) than among effectiveness/logic-related communities (e.g. philosophers, engineers, scientists). This is because people willing to devote their careers to altruistic causes are rare, while quite a lot of people think and reason logically.
Also, I'd love to know of any surveys or research that look for correlations between what EAs were doing before learning about EA and what they do afterwards, or that examine opinions on EA among people from different industries or fields of study.