Thanks for sharing all of this new data—it is very interesting! (Note that in my earlier response, I had nothing to go on besides the 2020 result you have already published, which indicated that the plots you included in your first comment were drawn from a far wider sample of EA-affiliated people than what we were probing in our survey, which I still believe is true. Correct me if I’m wrong!)
Many of these new results you share here, while extremely interesting in their own right, are still not apples-to-apples comparisons for the same reasons we’ve already touched on.[1]
It is not particularly surprising to me that we are asking people meaningfully different questions and getting meaningfully different results given how generally sensitive respondents in psychological research are to variations in item phrasing. (We can of course go back and forth about which phrasing is better/more actionable/etc, but this is orthogonal to the main question of whether these are reasonable apples-to-apples comparisons.)
The most recent data you mention briefly at the end of your response seems far more relevant in my view. It seems like both of the key results you are taking issue with here (cause prioritization and lukewarm longtermism views) are ones you yourself found to some degree in these results (which, it’s also worth noting, were sampled at the same time as our data, rather than 2 or 4 years ago):
Your result 1:
The responses within the Cause Prioritization category which did not explicitly refer to too much focus on AI, were focused on insufficient attention being paid to other causes, primarily animals and GHD.
Our result 1:
We specifically find the exact same two cause areas, animals and GHD, as being considered the most promising to currently pursue.
Your result 2 (listed as the first reason for dissatisfaction with the EA community):
Focus on AI risks/x-risks/longtermism: Mainly a subset of the cause prioritization category, consisting of specific references to an overemphasis on AI risk and existential risks as a cause area, as well as longtermist thinking in the EA community.
Our result 2:
We specifically find that responses in our sample to whether EAs’ recent shift towards longtermism is positive are roughly normally distributed with a slight negative skew (~35% disagree, ~30% agree).
I suppose having now read your newest report (which I was not aware of before conducting this project), I actually find myself less clear on why you are as surprised as you seem to be by these results given that they essentially replicate numerous object-level findings you reported only ~2 months ago.
(Want to flag that, in terms of guiding specific action, I would lend more credence to your prioritization results than to our ‘how promising...’ results, given your significantly larger sample size and more precise resource-related questions. But this does not detract from our ability to draw valid and action-guiding inferences from both of the results I include in this comment; we think there are many such inferences, as we describe in the body of this post. I don’t think there is any strong reason to ignore or otherwise dismiss out of hand what we’ve found here—we simply sourced a large and diverse sample of EAs, asked them fairly basic questions about their views on EA-related topics, and reported the results for the community to digest and discuss.)
One further question/hunch I have in this regard is that the way we are quantifying high vs. low engagement is almost certainly different (is your sample self-reporting this/do you give them any quantitative criteria for reporting this?), which adds an additional layer of distance between these results.
Thanks Cameron!

It is not particularly surprising to me that we are asking people meaningfully different questions and getting meaningfully different results…
the main question of whether these are reasonable apples-to-apples comparisons.)
We agree that our surveys asked different questions. I’m mostly not interested in assessing which of our questions are the most ‘apples-to-apples’ comparisons, since I’m not interested in critiquing your results per se. Rather, I’m interested in what we should conclude about the object-level questions given our respective results (e.g. is the engaged EA community lukewarm on longtermism, with a preference for global poverty and animal welfare, or is the community divided on these views, with the most engaged more strongly prioritising longtermism?).
in my earlier response, I had nothing to go on besides the 2020 result you have already published, which indicated that the plots you included in your first comment were drawn from a far wider sample of EA-affiliated people than what we were probing in our survey, which I still believe is true. Correct me if I’m wrong!)
I would just note that in my original response I showed how the results varied across the full range of engagement levels, which I think offers more insight into how the community’s views differ across groups than just looking at one sub-group.
One further question/hunch I have in this regard is that the way we are quantifying high vs. low engagement is almost certainly different (is your sample self-reporting this/do you give them any quantitative criteria for reporting this?), which adds an additional layer of distance between these results.
The engagement scale is based on self-identification, but the highest engagement level is characterised with reference to “helping to lead an EA group or working at an EA-aligned organization”. You can read more about our different measures of engagement and how they cohere here. Crucially, I also presented results specifically for EA org employees and people doing EA work so concerns about the engagement scale specifically do not seem relevant.
The most recent data you mention briefly at the end of your response seems far more relevant in my view. It seems like both of the key results you are taking issue with here (cause prioritization and lukewarm longtermism views) are ones you yourself found to some degree in these results
I respond to these two points below:
Your result 1:
The responses within the Cause Prioritization category which did not explicitly refer to too much focus on AI, were focused on insufficient attention being paid to other causes, primarily animals and GHD.
We specifically find the exact same two cause areas, animals and GHD, as being considered the most promising to currently pursue.
I don’t think this tells us much about which causes people think most promising overall. The result you’re referring to looks only at the 22% of respondents who mentioned Cause prioritization as a reason for dissatisfaction with EA and were not among the 16% of people who mentioned excessive focus on x-risk as a cause for dissatisfaction (38 respondents, of whom 8 mentioned animals, 4 mentioned Global poverty, and 7 mentioned another cause; the rest mentioned something other than a specific cause area).
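To make concrete how small this base is, here is a quick sketch of the arithmetic, using only the counts quoted above (the code is purely illustrative):

```python
# Counts quoted above: 38 respondents mentioned Cause prioritization as a
# reason for dissatisfaction without also citing excessive x-risk focus.
subsample = 38
mentions = {"animals": 8, "global poverty": 4, "another cause": 7}

specific = sum(mentions.values())   # respondents naming a specific cause
unspecific = subsample - specific   # respondents naming no specific cause

for cause, n in mentions.items():
    print(f"{cause}: {n}/{subsample} = {n / subsample:.0%}")
print(f"no specific cause: {unspecific}/{subsample} = {unspecific / subsample:.0%}")
```

So even the largest category here, animals, amounts to 8 of 38 respondents (roughly a fifth of this subsample), which is why I don’t think this footnote can bear much weight as evidence about overall cause rankings.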
Our footnote mentioning this was never intended to indicate which causes are overall judged most promising, just to clarify how our ‘Cause prioritization’ and ‘Excessive focus on AI’ categories differed. (As it happens, I do think our results suggest Global Poverty and Animal Welfare are the highest rated non-x-risk cause areas, but they’re not prioritised more highly than all x-risk causes).
Your result 2 (listed as the first reason for dissatisfaction with the EA community):
Focus on AI risks/x-risks/longtermism: Mainly a subset of the cause prioritization category, consisting of specific references to an overemphasis on AI risk and existential risks as a cause area, as well as longtermist thinking in the EA community.
Our results show that, among people dissatisfied with EA, Cause prioritisation (22%) and Focus on AI risks/x-risks/longtermism (16%) are among the most commonly mentioned reasons.[1] I should also emphasise that ‘Focus on AI risks/x-risks/longtermism’ is not the first reason for dissatisfaction with the EA community; it’s the fifth.
I think both our sets of results show that (at least) a significant minority believe that the community has veered too much in the direction of AI/x-risk/longtermism. But I don’t think that either set of results shows that the community overall is lukewarm on longtermism. I think the situation is better characterised as a division between people who are more supportive of longtermist causes[2] (whose support has been growing) and those who are more supportive of neartermist causes.
I don’t think there is any strong reason to ignore or otherwise dismiss out of hand what we’ve found here—we simply sourced a large and diverse sample of EAs, asked them fairly basic questions about their views on EA-related topics, and reported the results for the community to digest and discuss.)
I certainly agree that my comments here have only addressed one specific set of results to do with cause prioritisation, and that people should assess the other results on their own merits!
And, to be clear, these categories are overlapping, so the totals can’t be combined.
As we have emphasised elsewhere, we’re using “longtermist” and “neartermist” as a shorthand, and don’t think that the division is necessarily explained by longtermism per se (e.g. the groupings might be explained by epistemic attitudes towards different kinds of evidence).
I think both our sets of results show that (at least) a significant minority believe that the community has veered too much in the direction of AI/x-risk/longtermism.
Agreed.
But I don’t think that either set of results shows that the community overall is lukewarm on longtermism. I think the situation is better characterised as a division between people who are more supportive of longtermist causes (whose support has been growing) and those who are more supportive of neartermist causes.
It seems like you find the descriptor ‘lukewarm’ to be specifically problematic—I am considering changing the word choice of the ‘headline result’ accordingly given this exchange. (I originally chose to use the word ‘lukewarm’ to reflect the normal-but-slightly-negative skew of the results I’ve highlighted previously. I probably would have used ‘divided’ if our results looked bimodal, but they do not.)
What seems clear from this is that the hundreds of actively involved EAs we sampled are not collectively aligned (or ‘divided’ or ‘collectively lukewarm’ or however you want to describe it) on whether increased attention to longtermist causes represents a positive change in the community—despite participants repeatedly (and incorrectly) predicting that the sample would respond more positively. I will again refer to the relevant result to ensure any readers appreciate how straightforwardly this interpretation follows from the result—
(~35% don’t think positive shift, ~30% do; ~45% don’t think primary focus, ~25% do. ~250 actively involved EAs sampled from across 10+ cause areas.)
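To spell out numerically what this pattern looks like, and why I would reserve ‘divided’ for a bimodal shape, here is a minimal sketch. The counts are hypothetical, invented only to roughly match the ~35% disagree / ~30% agree split above; they are not our actual response data:

```python
# Hypothetical 5-point Likert counts (1 = strongly disagree ... 5 = strongly agree),
# invented to roughly match the reported ~35% disagree / ~30% agree split among
# ~250 respondents; NOT the actual survey data.
lukewarm = {1: 35, 2: 52, 3: 88, 4: 55, 5: 20}  # unimodal, centre-heavy, slight negative lean
divided = {1: 70, 2: 45, 3: 20, 4: 45, 5: 70}   # what a genuinely bimodal split would look like

def summarise(counts):
    n = sum(counts.values())
    disagree = (counts[1] + counts[2]) / n
    agree = (counts[4] + counts[5]) / n
    # Crude bimodality check: does the middle category sit in a trough?
    bimodal = counts[3] < counts[2] and counts[3] < counts[4]
    return disagree, agree, bimodal

for name, counts in [("lukewarm", lukewarm), ("divided", divided)]:
    d, a, b = summarise(counts)
    print(f"{name}: disagree {d:.0%}, agree {a:.0%}, bimodal: {b}")
```

On these made-up counts, the first distribution reproduces the reported split with a single central peak, while only the second passes the crude bimodality check—which is the distinction behind my choice of ‘lukewarm’ over ‘divided’.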
This division/lukewarmness/misalignment represents a foundational philosophical disagreement about how to go about doing the most good and seemed pretty important for us to highlight in the write-up. It is also worth emphasizing that we personally care very much about causes like AI risk and would have hoped to see stronger support for longtermism in general—but we did not find this, much to our surprise (and to the surprise of the hundreds of participants who predicted the distributions would look significantly different as can be seen above).
As noted in the post, we definitely think follow-up research is very important for fleshing out all of these findings, and we are very supportive of all of the great work Rethink Priorities has done in this space. Perhaps it would be worthwhile at some point in the future to attempt to collaboratively investigate this specific question to see if we can’t better determine what is driving this pattern of results.
(Also, to be clear, I was not insinuating the engagement scale is invalid—looks completely reasonable to me. Simply pointing out that we are quantifying engagement differently, which may further contribute to explaining why our related but distinct analyses yielded different results.)
Thanks again for your engagement with the post and for providing readers with really interesting context throughout this discussion :)
Thanks again for the detailed reply Cameron!

It seems like you find the descriptor ‘lukewarm’ to be specifically problematic—I am considering changing the word choice of the ‘headline result’ accordingly given this exchange. (I originally chose to use the word ‘lukewarm’ to reflect the normal-but-slightly-negative skew of the results I’ve highlighted previously. I probably would have used ‘divided’ if our results looked bimodal, but they do not.)
I don’t think our disagreement is to do with the word “lukewarm”. I’d be happy for the word “lukewarm” to be replaced with “normal but slightly negative skew” or “roughly neutral, but slightly negative” in our disagreement. I’ll explain where I think the disagreement is below.
Here’s the core statement which I disagreed with:
EAs have lukewarm [normal but slightly-negative skew] views about longtermism
Result: EAs (actively involved across 10+ cause areas) generally seem to think that AI risk and x-risk are less promising cause areas than ones like global health and development and animal welfare
The first point of disagreement concerned this claim:
“EAs (actively involved across 10+ cause areas) generally seem to think that AI risk and x-risk are less promising cause areas than ones like global health and development and animal welfare”
If we take “promising” to mean anything like prioritise / support / believe should receive a larger amount of resources / believe is more impactful etc., then I think this is a straightforward substantive disagreement: I think that, however we slice ‘active involvement’, we’ll find that more actively involved EAs prioritise x-risk more.
As we discussed above, it’s possible that “promising” means something else. But I personally do not have a good sense of the way in which actively involved EAs might think AI and x-risk are less promising than GHD and animal welfare.[1]
EAs have lukewarm [normal but slightly-negative skew] views about longtermism
Concerning this claim, I think we need to distinguish (as I did above) between:
What do people think of ‘longtermism’? / What do people think about allocations to or prioritisation of longtermist causes?
What do people think of EA’s shift more towards longtermist causes?
Regarding the first of these questions, your second result shows slight disagreement with the claim “I think longtermist causes should be the primary focus in effective altruism”. I agree that a reasonable interpretation of this result, taken in isolation, is that the actively involved EA community is slightly negative regarding longtermism. But taking into account other data, like our cause prioritisation data, which shows actively engaged EAs strongly prioritise x-risk causes, or our result suggesting slight agreement with an abstract statement of longtermism, I’m more sceptical. I wonder if what explains the difference is people’s response to the notion of these causes being the “primary focus”, rather than their attitudes towards longtermist causes per se.[2] If so, these responses need not indicate that the actively involved community leans slightly negative towards longtermism.
In any case, this question largely seems to me to reduce to the question of what people’s actual cause prioritisation is + what their beliefs are about abstract longtermism, discussed above.
Regarding the question of EAs’ attitudes towards the “overall shift towards longtermist causes”, I would also say that, taken in isolation, it’s reasonable to interpret your result as showing that actively involved EAs lean slightly negative towards EA’s shift towards longtermism. Again, our cause prioritisation results suggesting strong and increasing prioritisation of longtermist causes by more engaged EAs across multiple surveys give me pause. But the main point I’ll make (which suggests a potential conciliatory way to reconcile these results) is that attitudes towards the “overall shift towards longtermist causes” may not reflect attitudes towards longtermism per se. Perhaps people are Neutral/Agnostic regarding the “overall shift”, despite personally prioritising longtermist causes, because they are agnostic about what people in the rest of the community should do. Or perhaps people think that the shift overall has been mishandled (whatever their cause prioritisation). If so, the results may be interesting regarding EAs’ attitudes towards this “shift”, but not regarding their overall attitudes towards longtermism and longtermist causes.
Thanks again for your work producing these results and responding to these comments!
As I noted, I could imagine “promising” connoting something like new, young, scrappy cause areas (such that an area could be more “promising” even if people support it less than a larger established cause area). I could sort of see this fitting Animal Welfare (though it’s not really a new cause area), but it’s hard for me to see this applying to Global Health/Global Poverty which is a very old, established and large cause area.
For example, people might think EA should not have a “primary focus”, but remain a ‘cause-neutral’ movement (even though they prioritise longtermist causes most strongly and think they should get most resources). Or people might think we should split resources across causes for some other reason, despite favouring longtermism.