I kind of feel like the most important version of a survey like this would be one targeting certain subsets of people (e.g., tech, policy, animal welfare).
We agree these would be valuable surveys to conduct (and we’d be happy to conduct them if someone wants to fund us to do so). But they’d be very different kinds of surveys. Large representative surveys like this do allow us to generate estimates for relatively niche subsets of the population, but if you are interested in a very small subset of people (e.g. those working in animal welfare), it would be better to run a separate targeted survey.
Also, why didn’t you call out that the more people know what EA is, the less they seem to like it? Or was that difference not statistically significant?
“Sentiment towards EA among those who had heard of it was positive (51% positive vs. 38% negative among those stringently aware, and 70% positive vs. 22% negative among those permissively aware).”
This comparison wouldn’t strictly make sense for a few reasons:
The permissive vs. stringent classifications are not about how much people know about EA, but about our confidence, based on their response, that the person has encountered EA. So a very specific response that reveals clear awareness of EA, but is overtly factually mistaken, could count as stringent, whereas a less specific response that leaves it less clear that the person has encountered EA might only reach the bar for permissive.
The two categories are not independent. Every stringent response also passes the bar for the permissive categorisation.
A response which referred to a connection between FTX/SBF and EA would be sufficient to meet our stringent classification, because if the person knows about such a (putative) connection, then they have clearly encountered EA (even if their overall conception might be very limited or mistaken). This means the stringent category is particularly likely to contain people aware of FTX; indeed, more than half of the stringently classified respondents who expressed a negative sentiment about EA mentioned FTX.
Treating the two groups as mutually exclusive, there are only 34 exclusively permissive respondents and 39 stringent respondents, meaning small sample sizes for any comparison of the two groups (see the sketch below).
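To make the small-sample point concrete, here is a minimal sketch in Python. Only the exclusive group sizes (34 and 39) come from the survey; the positive/negative splits are hypothetical, invented purely to show the mechanics of the comparison.

```python
# Minimal sketch of the exclusive-group comparison. Only the group sizes
# (34 exclusively permissive, 39 stringent) are from the survey; the
# positive/negative splits below are HYPOTHETICAL illustration values.
from scipy.stats import fisher_exact

# Hypothetical sentiment counts within each exclusive group.
permissive_only = {"positive": 24, "negative": 10}  # n = 34 (actual group size)
stringent = {"positive": 20, "negative": 19}        # n = 39 (actual group size)

# 2x2 contingency table: rows = groups, columns = sentiment.
table = [
    [permissive_only["positive"], permissive_only["negative"]],
    [stringent["positive"], stringent["negative"]],
]

# Fisher's exact test is the usual choice at sample sizes this small.
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

Even the seemingly large gap in these made-up numbers (roughly 71% vs. 51% positive) would likely not reach conventional significance at these group sizes, which is the point above.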
I do think it is notable that sentiment is more positive among those who did not report awareness of EA and responded to a particular presentation of it, compared to sentiment among those who were classified as having encountered EA. However, this is also not a straightforward comparison: the composition of these groups differs, and the people who did not claim awareness were responding to only one particular presentation of EA. More research would be required to assess whether learning more about EA leads people to have more negative opinions of it.
Very detailed and thorough response, thank you!
Last question if you have time: what questions was this survey trying to answer?
Addressing only the results reported in this post, rather than the survey as a whole:
How many people in the US public are aware of effective altruism and of other key EA-related orgs, public figures, etc.
What people’s attitudes towards effective altruism are, among those who have encountered it
What people’s attitudes are towards effective altruism (when described) among those who have not encountered it
How these differ across different subgroups
And, in the future, we will also be assessing whether these are changing across time (we have reported the results of some surveys on these questions previously, but this is the first formal wave of the Pulse iteration)
Thanks again! I guess I’m just trying to understand why these metrics are important or how they are important. Why does it matter how many people in the US have heard of EA or how they feel about it? What is the underlying question the survey and its year-over-year follow-ups are trying to get at? E.g., is it trying to measure how well CEA is performing in terms of whether its programs are making a difference in the populace?
I think these questions are relevant in a variety of ways:
Whether overall public awareness is high or low seems relevant to outreach in various ways, in different scenarios.
For example, this came up just a few days ago here in a discussion of outreach. In addition to knowing overall sentiment, knowing the overall level of awareness of EA is important, since it informs us about the importance of, and potential scope for, changes in sentiment (e.g., in this case, it seems very few people are even aware of EA at all, so even if negative sentiment had increased, its scope would be limited).
In general, after major public events pertaining to EA (like FTX), we might want to know whether these have affected awareness of EA (for good or ill), so we can respond accordingly.
Knowing the overall level of awareness of EA in the population (the ‘top of the funnel’) also informs us about the shape of the funnel, and how many people drop out after the first exposure stage, which is relevant to assessing how many people are interested in EA (as it is currently presented); the toy calculation below illustrates this.
Still more generally, if we have any sense of what the ideal growth rate or size of EA should be (decision-makers’ views on this are explored in the forthcoming results from the Meta Coordination Forum Survey), then we presumably want to know where the actual growth rate or size falls relative to that.
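As a toy illustration of the funnel arithmetic mentioned above (every figure below is hypothetical, not a survey result): knowing the top of the funnel lets you back out the drop-off to later stages.

```python
# Toy funnel calculation with HYPOTHETICAL numbers (none of these figures
# are from the survey): a top-of-funnel awareness estimate, combined with
# a community-size estimate, implies an aware -> engaged conversion rate.
us_adults = 258_000_000          # approx. US adult population (assumption)
awareness_rate = 0.02            # hypothetical: 2% aware of EA
ea_community_size = 10_000       # hypothetical: engaged community members

aware = us_adults * awareness_rate
conversion = ea_community_size / aware

print(f"aware of EA:           {aware:,.0f}")
print(f"aware -> engaged rate: {conversion:.4%}")
```

If awareness doubled while community size stayed flat, the implied aware-to-engaged conversion would halve: the same community size can reflect very different funnel shapes, which is why the top-of-funnel estimate matters.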
Knowing about how awareness of EA varies across different groups is also relevant to our outreach.
For example, it could inform us about which groups we should be targeting more heavily to ensure we reach those groups.
It could also help identify which groups we are trying to reach but failing to make aware of EA (for whatever reason).
Moreover, if we know that some groups are more heavily represented in the EA community, then knowing how many people from those groups have heard of EA in the first place informs us about at what point in the funnel the problem lies (people not hearing about EA; hearing about it but not liking it; or hearing about it, joining the community, and then dropping out; etc.). Our data does suggest some such disparities at the level of first awareness for both race and gender; the sketch below shows one way such group-level comparisons can be made.
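For instance, one standard way to compare first-awareness rates across subgroups is to compute each group’s proportion with a Wilson score interval (which behaves well for small counts). The group names and counts below are hypothetical; this is a sketch of the method, not our actual analysis.

```python
# Sketch of comparing awareness rates across subgroups using Wilson score
# intervals. The group names and counts are HYPOTHETICAL, purely to
# illustrate the method.
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: (number aware of EA, subgroup sample size).
groups = {"men": (70, 900), "women": (40, 1100)}

for name, (aware, n) in groups.items():
    lo, hi = wilson_ci(aware, n)
    print(f"{name}: {aware / n:.1%} aware (95% CI {lo:.1%}-{hi:.1%})")
```

Non-overlapping intervals would suggest a real first-awareness gap, i.e. a problem at the very top of the funnel rather than at later stages.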
Knowing about public sentiment towards EA seems directly relevant for outreach.
For example, post-FTX there was much discussion about whether the EA brand had become so toxic that we should simply abandon it (which would have entailed huge costs, even if it had been the right thing to do on balance). I won’t elaborate too much on this since it seems relatively straightforward.
Knowing about differences in sentiment across groups is also relevant.
For example, if sentiment dramatically differed between men and women, or across other demographics, this would potentially suggest the need for change (whether in terms of our messaging or features of the community, etc.).
One move sometimes made to suggest that these things aren’t relevant is to say that we only need to be concerned about awareness and attitudes among certain specific groups (e.g. policymakers or elite students). But even if knowing about awareness of, and attitudes towards, EA among certain groups is highly important, it doesn’t follow that broader public attitudes are unimportant.
For example, even if EA were supported by elites (of whatever kind), action may be difficult in the face of broad public opposition.
The attitudes of elites (or whatever other specific, narrow group we think is of interest) and broader public opinion are not fully independent of each other, so broader awareness and attitudes are likely to filter through to whatever other group we’re interested in.
I think we actually are interested in the awareness, attitudes and involvement of a broader public, not just specific narrow groups, particularly in the long-term. At the least, some subsets of EA are interested in this, even if other subsets of EA actors might be focused more narrowly on particular groups.[1]
As a practical matter, it’s also worth bearing in mind that large representative surveys like this can generate estimates for some niche subgroups (just not extremely niche ones, like elite policymakers), particularly with larger sample sizes.
Just to chime in as someone doing professional community building—these surveys are very useful for all of the reasons David just gave.