The argument does not hold if those groups are only for people who are very familiar with EA thinking.
I think when creating most groups/sub-communities, it’s important that there is a filter to make sure people have an understanding of EA. Otherwise it can become an average group for that cause area, rather than a space for people who have an interest in EA and that specific cause and are looking for EA-related conversations.
But the likelihood that I would have changed my cause area, on finding that other causes were more important to work on, would have been smaller. This could be because I would have been less likely to come across good arguments for other causes, as not many people around me would have had an incentive to point me towards those resources.
I think most people who have an interest in EA are also uncertain about their moral values, the tractability of various interventions, and which causes are most important. It can be easy to pigeonhole people into particular causes depending on where they work or donate, but I don’t meet many people who only care about one cause, and the EA survey had similar results.
If people are able to come across well-reasoned arguments for interventions within a cause area they care about, I think it’s more likely that they’ll stick around. As most of the core EA material (newsletters, the Forum, Facebook) references multiple causes, it will be hard to avoid these ideas, especially if people are also in groups for their career, interests, or location.
I think the bigger risk is losing people who instantly bounce from EA when it doesn’t even attempt to answer their questions, rather than the risk of people not getting exposed to other ideas. If EA doesn’t have cause groups, there’s probably a higher chance of someone going to another movement that does allow conversation in that area.
This quote from an 80,000 Hours interview with Kelsey Piper phrases it much better.
“Maybe pretty early on, it just became obvious that there wasn’t a lot of value in preaching to people on a topic that they weren’t necessarily there for, and that I had a lot of thoughts on the conversations people were already having. Then I think one thing you can do to share any reasoning system, but it works particularly well for effective altruism, is just to apply it consistently, in a principled way, to problems that people care about. Then, they’ll see whether your tools look like useful tools. If they do, then they’ll be interested in learning more about that. I think my ideal effective altruist movement, and obviously this trades off against lots of other things and I don’t know that we can be doing more of it on the margin, my ideal effective altruist movement had insightful, nuanced, productive takes on lots and lots of other things, so that people could be like, “Oh, I see how effective altruists have tools for answering questions. I want the people who have tools for answering questions to teach me about those tools. I want to know what they think the most important questions are. I want to sort of learn about their approach.””