Summary: I think the post mostly holds up. The post provided a number of significant, actionable findings, which have since been replicated in the most recent EA Survey and in OpenPhil’s report. We’ve also been able to extend the findings in a variety of ways since then. There was also one part of the post that I don’t think holds up, which I’ll discuss in more detail.
The post highlighted (among other things):
People first hear about EA from a fairly wide variety of sources, rather than a small number dominating. Even the largest source, personal contacts, only accounted for about 14% of EAs.
In the most recent years, a large number of sources seem to be recruiting fairly similar numbers of EAs (LessWrong, SlateStarCodex, books, podcasts, local EA groups all 5-8%).
Nevertheless, there were some large differences between sources (e.g. personal contacts, 80,000 Hours and LessWrong have historically recruited about 10x more than some other outreach sources).
Routes into EA have changed over time, with 80,000 Hours recruiting many more EAs in recent years. Local EA groups have also grown as a recruitment source over the same period.
We also examined qualitative comments about how people got into EA, which added detail to some of the categories. For example, within the books category, influence seemed to be spread fairly evenly across EA books. In contrast, the podcasts category was quite heavily dominated by Sam Harris's podcasts, and the TED Talk category almost exclusively contained references to Peter Singer's TED talk.
There are differences between the pattern of results for which factors were important for getting involved in EA and the pattern for where individuals first heard of EA.
There are significant (and sometimes large) differences in routes into EA and influences on involvement based on gender and race. In particular, personal contacts and local groups seemed particularly important for non-male respondents, perhaps suggesting that providing opportunities for personal connection is important in this regard.
There are only slight differences in where high vs low engagement EAs first heard about EA (for example, a larger number of highly engaged EAs first heard from a personal contact or local group). There are more and larger differences in what factors were important for high/low engagement EAs getting involved in EA (e.g. personal contacts and local groups were more commonly selected by the highly engaged).
Implications
I think these findings have a lot of implications for the EA community. Many of them seem fairly obvious/straightforward implications of the data (i.e. which factors have or have not been important historically). This is not to imply that there aren't important caveats and complications to the interpretation of this information, just that the direct implications of some of the results (e.g. a lot of people being recruited by 80,000 Hours compared to some other sources) are fairly straightforward. Important factors influencing the interpretation of this information include methodological ones (e.g. whether the survey recruits more people from 80,000 Hours and this influences the result) and substantive ones about the community (e.g. how many resources are spent on or by 80,000 Hours, and what the average impact of people recruited from different sources is, which is only indirectly addressed by the data itself). In the main EA Survey posts we consciously err on the side of not stating some of these implications, aiming instead to present the data neutrally and let it speak for itself. I think something important would be lost if the EA Survey series lost this neutrality (i.e. if it came to be associated with advocating for a particular policy, then people who disagreed with that policy might be less likely to take/trust the EA Survey), but this is definitely not without its costs (and it also relies on the assumption that other movement builders are working to draw out these implications).
One area where we discussed the possible implications of the results more than usual, while still holding back from making any specific policy proposals, is the finding that there seemed to be relatively little difference in the average engagement of EAs who first heard of EA from different sources. One might expect certain sources to recruit a much larger proportion of highly engaged EAs than others, such that, even though some sources recruit a larger total number of EAs, others recruit a larger number of highly engaged EAs. One might even speculate that the number of EAs recruited and the proportion of highly engaged EAs recruited would be negatively correlated, if one supposes either that broader outreach leads to less highly engaged recruits (on average) or that diminishing returns mean that later recruits from a source tend to be less engaged than the 'low-hanging fruit'.
I think the 2019 EA Survey results (and likewise the 2018 and 2020 results) present evidence suggesting that there are not particularly large differences between recruitment routes in this regard. This is suggested by the analysis showing a small number of significant, but slight, differences in the proportions of low/high engagement EAs recruited from different sources. To give a sense of the magnitude of the differences: there were 164 highly engaged EAs who first heard about EA from a personal contact, whereas we would expect only 151 if there were no difference in engagement across different routes into EA (a difference of 13 people).
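To spell out where the 'expected' figure comes from: it is just the number of people in a recruitment route multiplied by the overall share of highly engaged respondents, as in a standard contingency-table analysis. A minimal sketch, with assumed marginals chosen purely so the arithmetic reproduces the numbers quoted above (the post does not report the actual totals):

```python
# Assumed for illustration: respondents who first heard of EA via a personal
# contact, and the overall share of respondents who are highly engaged.
n_personal_contact = 420
overall_high_share = 0.36

# Expected highly engaged EAs in this route if engagement were unrelated
# to recruitment source, versus the observed count reported in the post.
expected_high = n_personal_contact * overall_high_share  # ~151
observed_high = 164

print(f"expected: {expected_high:.0f}, observed: {observed_high}, "
      f"difference: {observed_high - expected_high:.0f}")  # -> 151, 164, 13
```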
Although I think the substantive conclusion stands up, one aspect of the presentation which I would change is the graph below:

[Graph: number of highly engaged EAs plotted against the total number of EAs recruited from each source]
The graph was an indirect replication of one we included in 2018. It offered a neat, simple visual representation of the relationship between the total number of EAs and the number of highly engaged EAs recruited from each source, but it risks being misleading because the two variables (total EAs recruited and highly engaged EAs recruited) are not independent: the total includes the highly engaged EAs. As such, we'd expect some correlation between the two simply in virtue of this fact (how much depends, inter alia, on how many highly engaged EAs there are in the total population).
As it happens, when we repeated the analysis looking at the relationship between the number of low engagement EAs and the number of highly engaged EAs (disjoint groups, though I think this is a less intuitive comparison), we found a very similar pattern of results. Nevertheless, although the simplicity of the original presentation is nice, I think it's generally better not to include any variants of this graph (we dropped it from EAS 2020) and instead to present analyses of whether the proportion or average level of engagement varies across different recruitment sources. Unfortunately, I think these are less intuitive and less striking (see e.g. the models included here), so I do worry that they are less likely to inform decisions.
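A small simulation makes the overlap worry concrete: even when each source's low- and high-engagement recruit counts are drawn completely independently, the total-vs-highly-engaged comparison still shows a positive correlation, purely because the total contains the highly engaged; the disjoint low-vs-high comparison does not. This is an illustrative sketch with made-up ranges, not a reanalysis of the survey data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_sims = 20, 2000

artifact_corrs, disjoint_corrs = [], []
for _ in range(n_sims):
    # Low- and high-engagement recruit counts per source, drawn independently:
    # by construction there is NO real relationship between the two groups.
    low = rng.integers(10, 300, size=n_sources)
    high = rng.integers(5, 100, size=n_sources)
    total = low + high  # the total contains the highly engaged

    artifact_corrs.append(np.corrcoef(total, high)[0, 1])
    disjoint_corrs.append(np.corrcoef(low, high)[0, 1])

# total-vs-high is clearly positive despite the built-in independence;
# low-vs-high averages roughly zero, as it should.
print("mean corr(total, high):", round(float(np.mean(artifact_corrs)), 2))
print("mean corr(low, high):  ", round(float(np.mean(disjoint_corrs)), 2))
```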
New findings since 2019
I think most of these findings have only gained further support through being replicated in our 2020 post.
Since 2019 these results have also been supported by OpenPhil's 2020 survey of "a subset of people doing (or interested in) longtermist priority work". OpenPhil found very similar patterns to the EA Survey 2019's results despite using different categories and a different analysis (for one thing, OP's analysis allowed percentages to sum to more than 100%, whereas ours did not).
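For readers unfamiliar with that distinction, here is a toy sketch of why multi-select and single-select questions yield non-comparable percentages (the responses and the tie-breaking rule below are hypothetical, purely for illustration):

```python
from collections import Counter

# Hypothetical multi-select responses (OP-style): respondents may name
# several sources, so source percentages can sum to well over 100%.
responses = [
    {"80,000 Hours", "personal contact"},
    {"LessWrong"},
    {"personal contact", "LessWrong", "books"},
    {"80,000 Hours"},
]
n = len(responses)

multi = Counter(src for r in responses for src in r)
print({src: f"{100 * c / n:.0f}%" for src, c in multi.items()})
# -> sums to 175% here, since respondents count under every source they chose

# Single-select (EA-Survey-style): each respondent contributes to exactly one
# source (here, arbitrarily, their alphabetically first choice), so the
# percentages sum to 100%.
single = Counter(min(r) for r in responses)
print({src: f"{100 * c / n:.0f}%" for src, c in single.items()})
```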
In addition, part of the remaining difference between OP's results and ours is likely explained by the fact that OP's post compared their select, highly engaged sample to the full EA Survey sample. If we limit our analysis to only self-reported very highly engaged EAs, the gaps shrink further (i.e. highly engaged EAs were more likely to select personal contact and less likely to select SlateStarCodex).
Overall, I have been surprised by the extent to which OP's data on highly impactful longtermists aligns with the EA Survey data, once we adjust for relevant factors like engagement.