Addressing only the results reported in this post, rather than the survey as a whole:
How many people in the US public are aware of effective altruism and other key EA-related orgs, public figures etc.
What people’s attitudes towards effective altruism are, among those who have encountered it
What people’s attitudes are towards effective altruism (when described) among those who have not encountered it
How these differ across different subgroups
And, in the future, we will also be assessing whether these are changing across time (we have reported the results of some surveys on these questions previously, but this is the first formal wave of the Pulse iteration)
Thanks again! I guess I’m just trying to understand why these metrics are important, or how they are important. Why does it matter how many people in the US have heard of EA or how they feel about it? What is the underlying question the survey and its year-over-year follow-ups are trying to get at? E.g., is it trying to measure how well CEA is performing in terms of whether its programs are making a difference in the populace?
I think these questions are relevant in a variety of ways:
Whether overall public awareness is high or low seems relevant to outreach in various ways, in different scenarios.
For example, this came up just a few days ago here in a discussion of outreach. In addition to knowing overall sentiment, knowing the overall level of awareness of EA is important, since it informs us about the importance and potential scope of any change in sentiment (e.g., in this case, very few people seem to be aware of EA at all, so even if negative sentiment had increased, its scope would be limited).
In general, after major public events pertaining to EA (like FTX), we might want to know whether these have affected awareness of EA (for good or ill), so we can respond accordingly.
Knowing the overall level of awareness of EA in the population (the ‘top of the funnel’) also informs us about the shape of the funnel, and how many people drop out after the first exposure stage, which is relevant to assessing how many people are interested in EA (as it is currently presented).
Still more generally, if we have any sense of what the ideal growth rate or size of EA should be (decision-makers’ views on this are explored in the forthcoming results of the Meta Coordination Forum Survey), then we presumably want to know where the actual growth rate or size falls relative to that.
Knowing about how awareness of EA varies across different groups is also relevant to our outreach.
For example, it could inform us about which groups we should be targeting more heavily to ensure we reach those groups.
It could also help identify which groups we are trying to reach but failing to make aware of EA (for whatever reason).
Moreover, if we know that some groups are more heavily represented in the EA community, then knowing how many people from those groups have heard of EA in the first place tells us at what point in the funnel the problem arises (people not hearing about EA; hearing about it but not liking it; hearing about it, joining the community, and then dropping out; etc.). Our data does suggest some such disparities at the level of first awareness for both race and gender.
Knowing about public sentiment towards EA seems directly relevant for outreach.
For example, post-FTX there was much discussion about whether the EA brand had become so toxic that we should simply abandon it (which would have entailed huge costs, even if it had been the right thing to do on balance). I won’t elaborate too much on this since it seems relatively straightforward.
Knowing about differences in sentiment across groups is also relevant.
For example, if sentiment dramatically differed between men and women, or across other demographics, this would potentially suggest the need for change (whether in our messaging or in features of the community etc.).
One move sometimes made to suggest that these things aren’t relevant is to say that we only need to be concerned about awareness and attitudes among certain specific groups (e.g. policymakers or elite students). But even if knowing about awareness and attitudes towards EA among certain groups is highly important, it doesn’t follow that broader public attitudes are unimportant.
For example, even if EA were supported by elites (of whatever kind), action may be difficult in the face of broad public opposition.
The attitudes of elites (or whatever other specific, narrow group we think is of interest) and broader public opinion are not completely independent, so broader awareness and attitudes are likely to filter through to whatever other group we’re interested in.
I think we actually are interested in the awareness, attitudes and involvement of a broader public, not just specific narrow groups, particularly in the long-term. At the least, some subsets of EA are interested in this, even if other subsets of EA actors might be focused more narrowly on particular groups.[1]
As a practical matter, it’s also worth bearing in mind that large representative surveys like this can generate estimates for some niche subgroups (just not extremely niche ones like elite policymakers), particularly with larger sample sizes.
Just to chime in as someone doing professional community building—these surveys are very useful for all of the reasons David just gave.