Thanks for sharing all of this new data; it is very interesting! (Note that in my earlier response I had nothing to go on besides the 2020 result you had already published, which indicated that the plots you included in your first comment were drawn from a far wider sample of EA-affiliated people than the one we were probing in our survey, which I still believe is true. Correct me if I'm wrong!)
Many of these new results you share here, while extremely interesting in their own right, are still not apples-to-apples comparisons for the same reasons we’ve already touched on.[1]
It is not particularly surprising to me that we are asking people meaningfully different questions and getting meaningfully different results, given how sensitive respondents in psychological research generally are to variations in item phrasing. (We can of course go back and forth about which phrasing is better, more actionable, etc., but this is orthogonal to the main question of whether these are reasonable apples-to-apples comparisons.)
The most recent data that you mention briefly at the end of your response seems far more relevant in my view. It seems that both of the key results you are taking issue with here (cause prioritization and lukewarm longtermism views) appear to some degree in your own results (which, it is also worth noting, were sampled at the same time as our data rather than 2 or 4 years ago):
Your result 1:
The responses within the Cause Prioritization category which did not explicitly refer to too much focus on AI, were focused on insufficient attention being paid to other causes, primarily animals and GHD.
Our result 1:
We specifically find that the exact same two cause areas, animals and GHD, are considered the most promising to pursue at present.
Your result 2 (listed as the first reason for dissatisfaction with the EA community):
Focus on AI risks/x-risks/longtermism: Mainly a subset of the cause prioritization category, consisting of specific references to an overemphasis on AI risk and existential risks as a cause area, as well as longtermist thinking in the EA community.
Our result 2:
We specifically find that our sample's responses to whether EAs' recent shift towards longtermism is positive are roughly normally distributed with a slight negative skew (~35% disagree, ~30% agree).
I suppose that, having now read your newest report (which I was not aware of before conducting this project), I find myself less clear on why you are as surprised as you seem to be by these results, given that they essentially replicate numerous object-level findings you reported only ~2 months ago.
(I want to flag that, for guiding specific action, I would lend more credence to your prioritization results than to our 'how promising...' results, given your significantly larger sample size and more precise resource-related questions. But this does not detract from the valid and action-guiding inferences that can also be drawn from both of the results I include in this comment; we think there are many such inferences, as we describe in the body of this post. I don't think there is any strong reason to ignore or otherwise dismiss out of hand what we've found here: we simply sourced a large and diverse sample of EAs, asked them fairly basic questions about their views on EA-related topics, and reported the results for the community to digest and discuss.)
[1] One further question/hunch I have in this regard is that the way we are quantifying high vs. low engagement is almost certainly different (does your sample self-report this, and do you give them any quantitative criteria for doing so?), which adds an additional layer of distance between these results.
Agreed.
It seems like you find the descriptor 'lukewarm' specifically problematic; given this exchange, I am considering changing the wording of the 'headline result' accordingly. (I originally chose the word 'lukewarm' to reflect the normal-but-slightly-negative skew of the results I've highlighted previously. I probably would have used 'divided' if our results had looked bimodal, but they do not.)
What seems clear from this is that the hundreds of actively involved EAs we sampled are not collectively aligned (or 'divided' or 'collectively lukewarm' or however you want to describe it) on whether increased attention to longtermist causes represents a positive change in the community, despite repeatedly and systematically mispredicting that the sample would respond more positively. I will again refer to the relevant result so that readers can appreciate how straightforwardly this interpretation follows from it:
(~35% don't think the shift is positive, ~30% do; ~45% don't think it should be a primary focus, ~25% do. ~250 actively involved EAs sampled from across 10+ cause areas.)
This division/lukewarmness/misalignment represents a foundational philosophical disagreement about how to go about doing the most good and seemed pretty important for us to highlight in the write-up. It is also worth emphasizing that we personally care very much about causes like AI risk and would have hoped to see stronger support for longtermism in general, but we did not find this, much to our surprise (and to the surprise of the hundreds of participants who predicted the distributions would look significantly different, as can be seen above).
As noted in the post, we definitely think follow-up research is very important for fleshing out all of these findings, and we are very supportive of all of the great work Rethink Priorities has done in this space. Perhaps it would be worthwhile at some point in the future to attempt to collaboratively investigate this specific question to see if we can’t better determine what is driving this pattern of results.
(Also, to be clear, I was not insinuating that the engagement scale is invalid; it looks completely reasonable to me. I am simply pointing out that we are quantifying engagement differently, which may further help explain why our related but distinct analyses yielded different results.)
Thanks again for your engagement with the post and for providing readers with really interesting context throughout this discussion :)