Interesting observations. I only have one thought that I don’t see mentioned in the comments.
I see EA as something that is mostly useful when you are deciding how you want to do good. After you figure it out, there is little reason to continue engaging with it. [1] Under this model of EA, the fact that engagement with EA is not growing would only mean that the number of people deciding how to do good at any given time is not growing. But that is not what we want to maximize. We want to maximize the number of people actually working on doing good. I think that EA fields like AI safety and effective animal advocacy have been growing, though I don’t know for sure. That said, I think this model of EA is only partially correct.
For example, once someone figures out that they want to be an animal advocate, or an AI safety researcher, or whatever, there is little reason for them to keep engaging with EA. For instance, I am an animal advocacy researcher, and I would probably barely visit the EA Forum if there were an effective animal advocacy forum (I wish there were one). One possible exception is earning-to-give, because there is always new information that can help decide where to give most effectively, and the EA community is a good place to discuss that. But even that has diminishing returns: once you’ve figured out your general strategy or cause, you may need to engage with EA less.
I completely agree with this, thank you for writing it up! This is also an issue I have with some elements of the ‘drifting’ debate: I’m not too fussed about whether someone stays involved in the EA community (though I think it can be good to check whether there have been new insights); I care about people actually still doing good.
This sentiment came up a fair amount in the [2019 EA Survey data](https://forum.effectivealtruism.org/posts/F6PavBeqTah9xu8e4/ea-survey-2019-series-community-information#Changes_in_level_of_interest_in_EA__Qualitative_Data) about reasons why people had decreased levels of interest in EA over the last 12 months.
It didn’t appear in our coding scheme as a distinct category, but particularly within the “diminishing returns” category below, and also in responses to the question about barriers to further involvement in the EA community below that, there were a decent number of comments expressing the view that respondents were interested in having an impact but not in being involved in the EA community.
It’s probably unnecessary, but I tried to think of a metaphor to help visualize this, as that helps me understand things. Here is the best one I have. You want to maximize the number of people partying in your house. You observe that the number of people in the landing room is constant and conclude that the number of people partying is not growing. (The landing room in this metaphor is EA.) But that is only because people enter the landing room and then go off to party in different rooms (the different rooms are different cause areas). So the fact that the number of people in the landing room is constant might mean that the party is growing at a constant rate. Or perhaps the growth rate is even increasing, but we have also learnt how to get people out of the landing room and into the other rooms more quickly, which is good.
That’s one way to see it, but I thought that ideally you’re supposed to keep considering all the possible “interventions” you could personally pursue to help moral patients. That is, if the most effective cause that matches your skills (and is neglected, etc.) changes, you’re supposed to switch.
In practice that does not happen much, because skills and experience in one area are most useful in that same area, and because constantly re-thinking your career is tiring and even depressing; but it could be that way.
If it were that way, people who have decided on their cause area (for, say, the next 5 years) should still call themselves EAs.