(Thank you for writing this; my comment is related to Denkenberger's.) A consideration against creating groups around cause areas if they are open to younger people who are also new to EA, not only senior professionals: (the argument does not hold if those groups are only for people who are very familiar with EA thinking; of course, among other benefits, such groups could also make work and coordination more effective)
Such groups might lead to a misallocation of people and resources in EA, because the cost of switching focus or careers increases with this kind of network.
If those cause groups had existed two years ago, I would have joined either the “Global Poverty group” or the “Climate Change group” (certainly not the “Animal Welfare group”, for instance), or with some probability a general EA group. Most of my EA friends and acquaintances would have focused on the same cause area (perhaps I would be more skilled and knowledgeable about it now, which is important). But the likelihood that I would have changed my cause area because other causes are more important to work on would have been smaller. This could be because it is less likely to come across good arguments for other causes, as not many people around me would have an incentive to point me towards those resources. Switching the focus of my work would also be costly in a selfish sense, as one would no longer see all the acquaintances and friends from the monthly meetups/Skype calls or career workshops of one's old cause area.
I think that many people in EA become convinced over time to focus on the long term. If we reasonably believe that these are rational decisions, then changing cause areas and ways of working on the most pressing problems (direct work, lobbying, community building, earning to give) several times during one's life is one of the most important things when trying to maximize impact; it should be as cheap as possible for individuals and hence encouraged. That means that both the cost of getting information about other cause areas and the private costs of switching should be reduced. I think this might be difficult with potential cause-area groups (especially in smaller cities with fewer EAs in general).
(Maybe this is similar to the way many university groups try not to give concrete career advice before students have engaged in cause prioritization discussions. Otherwise, people bind themselves too early to cause areas that seem intuitively attractive, fit their perceived identity, or match underlying beliefs they hold and have never questioned (“AI seems important; I have watched sci-fi movies”, “I am altruistic, so I will help reduce poverty”, “Capitalism causes poverty, hence I won't do earning to give”).)
the argument does not hold if those groups are only for people who are very familiar with EA thinking
I think when creating most groups/sub-communities it's important that there is a filter to make sure people have an understanding of EA; otherwise it can become just another group for that cause area rather than a space for people who have an interest in EA and that specific cause and are looking for EA-related conversations.
But the likelihood that I would have changed my cause area because other causes are more important to work on would have been smaller. This could be because it is less likely to come across good arguments for other causes, as not many people around me would have an incentive to point me towards those resources.
I think most people who have an interest in EA also hold uncertainty about their moral values, the tractability of various interventions, and which causes are most important. It can sometimes be easy to pigeonhole people into particular causes depending on where they work or donate, but I don't meet many people who only care about one cause, and the EA survey had similar results.
If people are able to come across well-reasoned arguments for interventions within a cause area they care about, I think it's more likely that they'll stick around. As most of the core EA material (newsletters, the Forum, Facebook) refers to multiple causes, it will be hard to avoid these ideas, especially if they are also in groups for their career/interests/location.
I think the bigger risk is losing people who instantly bounce from EA when it doesn't even attempt to answer their questions, rather than the risk of people not getting exposed to other ideas. If EA doesn't have cause groups, there's probably a higher chance of someone just going to another movement that does allow conversation in that area.
This quote from an 80,000 Hours interview with Kelsey Piper phrases it much better:

“Maybe pretty early on, it just became obvious that there wasn’t a lot of value in preaching to people on a topic that they weren’t necessarily there for, and that I had a lot of thoughts on the conversations people were already having. Then I think one thing you can do to share any reasoning system, but it works particularly well for effective altruism is just to apply it consistently, in a principled way, to problems that people care about. Then, they’ll see whether your tools look like useful tools. If they do, then they’ll be interested in learning more about that. I think my ideal effective altruist movement, and obviously this trade off against lots of other things and I don’t know that we can be doing more of it on the margin. My ideal effective altruist movement had insightful nuanced, productive, takes on lots and lots of other things so that people could be like, “Oh, I see how effective altruists have tools for answering questions. I want the people who have tools for answering questions to teach me about those tools. I want to know what they think the most important questions are. I want to sort of learn about their approach.”