Thanks for your comment! I would suspect that these differences are largely being driven by the samples being significantly different. Here is the closest apples-to-apples comparison I could find related to sampling differences (please do correct me if you think there is a better one):
From your sample:
From our sample:
In words, I think your sample is significantly broader than ours: we were looking specifically for people actively involved (defined as >5h/week) in a specific EA cause area, which would probably correspond to the non-profit buckets in your survey (but explicitly not, for example, ‘still deciding what to pursue’, ‘for profit (earning to give)’, etc., which seemingly account for many hundreds of datapoints in your sample).
In other words, I think our results do not support the claim that
[it] isn’t that EAs as a whole are lukewarm about longtermism: it’s that highly engaged EAs prioritise longtermist causes and less highly engaged more strongly prioritise neartermist causes.
given that our sample is almost entirely composed of highly engaged EAs.
An additional sanity check on our cause area result is that the community’s predictions of the community’s views do more closely mirror your 2020 finding (ie, people indeed expected something more like your 2020 result), but the community’s ground truth views are clearly and significantly misaligned with those predictions.
Note that we are also measuring meaningfully different things related to cause area prioritization between the 2020 analysis and this one: we simply asked our sample how promising they found each cause area, while you seemed to ask how resourced/funded each cause area should be, which may invite more zero-sum considerations than our questions and may in turn change the nature of the result (ie, respondents could have validly responded ‘very promising’ to all of the cause areas we listed; they presumably could not have similarly responded ‘(near) top priority’ to all of the cause areas you listed).
Finally, it is worth clarifying that our characterization of our sample of EAs seemingly having lukewarm views about longtermism is motivated mainly by these two results:
These results straightforwardly demonstrate that the EAs we sampled clearly predicted that the community would have positive views of ‘longtermism x EA’ (which we also would have expected), but the group’s actual views are far more evenly distributed, with a slight negative skew, on these questions (note the highly statistically significant differences between each prediction vs. ground truth distribution; p≈0 for both).
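For readers curious what a prediction-vs-ground-truth significance check can look like mechanically, here is a minimal sketch using a chi-square test of homogeneity on two Likert response distributions. The counts below are invented purely for illustration; they are not our survey’s data:

```python
# Invented Likert counts (levels 1-5) purely for demonstration; these are
# NOT the survey's actual numbers.
predicted = [10, 20, 40, 90, 90]   # predicted community responses per level
actual    = [55, 35, 70, 55, 35]   # actual (ground truth) responses per level

def chi_square(obs_a, obs_b):
    """Chi-square statistic for a 2xK contingency table of two samples."""
    total_a, total_b = sum(obs_a), sum(obs_b)
    grand = total_a + total_b
    stat = 0.0
    for a, b in zip(obs_a, obs_b):
        col = a + b
        exp_a = total_a * col / grand  # expected count under independence
        exp_b = total_b * col / grand
        stat += (a - exp_a) ** 2 / exp_a + (b - exp_b) ** 2 / exp_b
    return stat

stat = chi_square(predicted, actual)
# With df = (2-1)*(5-1) = 4, the 0.001 critical value is ~18.47,
# so a statistic far above that corresponds to p << 0.001.
print(f"chi-square = {stat:.1f}, df = 4")
```

With samples of a few hundred each, even moderate distributional differences like these produce a statistic far above the critical value, which is consistent with the p≈0 comparisons described above.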
Finally, it’s worth noting that we find some of our own results quite surprising as well—this is precisely why we are excited to share this work with the community to invite further conversation, follow-up analysis, etc. (which you have done in part here, so thanks for that!).
I think your sample is significantly broader than ours: we were looking specifically for people actively involved (we defined as >5h/week) in a specific EA cause area...
In other words, I think our results do not support the claim that
[it] isn’t that EAs as a whole are lukewarm about longtermism: it’s that highly engaged EAs prioritise longtermist causes and less highly engaged more strongly prioritise neartermist causes.
given that our sample is almost entirely composed of highly engaged EAs.
I don’t think this can explain the difference, because our sample contains a larger number of highly engaged / actively involved EAs, and when we examine results for these groups (as I do above and below), they show the pattern I describe.
These are the results from people who currently work for an EA org or are currently doing direct work (for which we have >500 and 800 respondents respectively). Note that the EA Survey offers a wide variety of ways we can distinguish respondents based on their involvement, but I don’t think any of them change the pattern I’m describing.
Both show that AI risk and Biosecurity are the most strongly prioritized causes among these groups. Global Poverty and Animal Welfare retain respectable levels of support, and it’s important not to neglect that, but are less strongly prioritised among these groups.
To assess whether there’s a divergence between more and less highly engaged EAs, however, we need to look at the difference between groups, not just a single group of somewhat actively involved EAs. Doing this with 2022 data, we see the expected pattern of AI Risk and Biosecurity being more strongly prioritised by highly engaged EAs and Global Poverty less so. Animal Welfare notably achieves higher support among the more highly engaged, but still lower than the longtermist causes.[1]
Note that we are also measuring meaningfully different things related to cause area prioritization between the 2020 analysis and this one: we simply asked our sample how promising they found each cause area, while you seemed to ask about resourced/funded each cause area should be… respondents could have validly responded ‘very promising’ to all of the cause areas we listed
I agree that this could explain some of the differences in results, though I think that how people would prioritize the allocation of resources is more relevant for assessing prioritization. I think that promisingness may be hard to interpret, both because, as you say, people could potentially rate everything highly promising, and because “promising” could connote an early or yet-to-be-developed venture (one might be more inclined to describe a less developed cause area as “promising” than one which has already reached its full size, even if one thinks the promising cause area should be prioritized less than the fully developed ones). But, of course, your mileage may vary, and you might be interested in your measure for reasons other than assessing cause prioritization.
Finally, it is worth clarifying that our characterization of our sample of EAs seemingly having lukewarm views about longtermism is motivated mainly by these two results:
[“I have a positive view of effective altruism’s overall shift towards longtermist causes” and “I think longtermist causes should be the primary focus in effective altruism”]
Thanks, I think these provide useful new data!
It’s worth noting that we have our own, similar, measure concerning agreement with an explicit statement of longtermism: “The impact of our actions on the very long-term future is the most important consideration when it comes to doing good.”
As such, I would distinguish 3 things:
What do people think of ‘longtermism’? [captured by our explicit statement]
What do people think about allocations to / prioritisation of longtermist causes? [captured by people’s actual cause prioritization]
What do people think of EA’s shift more towards longtermist causes? [captured by your ‘shift’ question]
Abstract support for (quite strong) longtermism
Looking at people’s responses to the above (rather strong) statement of abstract longtermism we see that responses lean more towards agreement than disagreement. Given the bimodal distribution, I would also say that this reflects less a community that is collectively lukewarm on longtermism, and more a community containing one group that tends to agree with it and a group which tends to disagree with it.
Moreover, when we examine these results split by low/high engagement we see clear divergence, as in the results above.
Concrete cause prioritization
Moreover, as noted, the claim that it is “the most important consideration” is quite strong. People may be clearly longtermist despite not endorsing this statement. Looking at people’s concrete cause prioritization, as I do above, we see that two longtermist causes (AI Risk and Biosecurity) are among the most highly prioritized causes across the community as a whole and they are even more strongly prioritised when examining more highly engaged EAs. I think this clearly conflicts with a view that “EAs have lukewarm views about longtermism…EAs (actively involved across 10+ cause areas) generally seem to think that AI risk and x-risk are less promising cause areas than ones like global health and development and animal welfare” and rules out an explanation based on your sample being more highly engaged.
Shift towards longtermism
Lastly, we can consider attitudes towards the “shift” towards longtermism, where your results show no strong leaning one way or the other, with a plurality being Neutral/Agnostic. It’s not clear to me that this represents the community being lukewarm on longtermism, rather than people, whatever their own views about cause prioritization, expressing agnosticism about the community’s shift (people might think “I support longtermist causes, but whether the community should is up to the community” or some such). One other datapoint I would point to regarding the community’s attitudes towards the shift, however, is our own recent data showing that objection to the community’s cause prioritization and perception of an excessive focus on AI / x-risk causes are among the most commonly cited reasons for dissatisfaction with the EA community. Thus, I think the shift is a cause for dissatisfaction for a significant portion of the community, even though large portions of the community clearly support strong prioritization of longtermist causes. I think more data about the views of the community as a whole on whether longtermist causes should be prioritized more or less strongly by community infrastructure would be useful, and is something we’ll consider adding to future surveys.
Animal Welfare does not perform as a clear ‘neartermist’ cause in our data, when we examine the relationships between causes. It’s about as strongly associated with Biosecurity as Global Poverty, for example.
Thanks for sharing all of this new data—it is very interesting! (Note that in my earlier response I had nothing to go on besides the 2020 result you had already published, which suggested that the plots you included in your first comment were drawn from a far wider sample of EA-affiliated people than what we were probing in our survey, which I still believe is true. Correct me if I’m wrong!)
Many of these new results you share here, while extremely interesting in their own right, are still not apples-to-apples comparisons for the same reasons we’ve already touched on.[1]
It is not particularly surprising to me that we are asking people meaningfully different questions and getting meaningfully different results given how generally sensitive respondents in psychological research are to variations in item phrasing. (We can of course go back and forth about which phrasing is better/more actionable/etc, but this is orthogonal to the main question of whether these are reasonable apples-to-apples comparisons.)
The most recent data that you mention briefly at the end of your response seems far more relevant in my view. Both of the key results you are taking issue with here (cause prioritization and lukewarm longtermism views) appear, to some degree, in those results (which, it’s also worth noting, were sampled at the same time as our data, rather than 2 or 4 years ago):
Your result 1:
The responses within the Cause Prioritization category which did not explicitly refer to too much focus on AI were focused on insufficient attention being paid to other causes, primarily animals and GHD.
Our result 1:
We specifically find the exact same two cause areas, animals and GHD, as being considered the most promising to currently pursue.
Your result 2 (listed as the first reason for dissatisfaction with the EA community):
Focus on AI risks/x-risks/longtermism: Mainly a subset of the cause prioritization category, consisting of specific references to an overemphasis on AI risk and existential risks as a cause area, as well as longtermist thinking in the EA community.
Our result 2:
We specifically find that our sample’s responses to whether EAs’ recent shift towards longtermism is positive are roughly normally distributed with a slight negative skew (~35% disagree, ~30% agree).
I suppose having now read your newest report (which I was not aware of before conducting this project), I actually find myself less clear on why you are as surprised as you seem to be by these results given that they essentially replicate numerous object-level findings you reported only ~2 months ago.
(Want to flag that, in terms of guiding specific action, I would lend more credence to your prioritization results than to our ‘how promising...’ results, given your significantly larger sample size and more precise resource-related questions. But this does not detract from also being able to make valid and action-guiding inferences from both of the results I include in this comment, of which we think there are many, as we describe in the body of this post. I don’t think there is any strong reason to ignore or otherwise dismiss out of hand what we’ve found here—we simply sourced a large and diverse sample of EAs, asked them fairly basic questions about their views on EA-related topics, and reported the results for the community to digest and discuss.)
One further question/hunch I have in this regard is that the way we are quantifying high vs. low engagement is almost certainly different (is your sample self-reporting this/do you give them any quantitative criteria for reporting this?), which adds an additional layer of distance between these results.
It is not particularly surprising to me that we are asking people meaningfully different questions and getting meaningfully different results…
the main question of whether these are reasonable apples-to-apples comparisons.)
We agree that our surveys asked different questions. I’m mostly not interested in assessing which of our questions are the most ‘apples-to-apples comparisons’, since I’m not interested in critiquing your results per se. Rather, I’m interested in what we should conclude about the object-level questions given our respective results (e.g. is the engaged EA community lukewarm on longtermism, with a preference for global poverty and animal welfare, or is the community divided on these views, with the most engaged more strongly prioritising longtermism?).
in my earlier response, I had nothing to go on besides the 2020 result you have already published, which indicated that the plots you included in your first comment were drawn from a far wider sample of EA-affiliated people than what we were probing in our survey, which I still believe is true. Correct me if I’m wrong!)
I would just note that in my original response I showed how the results varied across the full range of engagement levels, which I think offers more insight into how the community’s views differ across groups than just looking at one sub-group.
One further question/hunch I have in this regard is that the way we are quantifying high vs. low engagement is almost certainly different (is your sample self-reporting this/do you give them any quantitative criteria for reporting this?), which adds an additional layer of distance between these results.
The engagement scale is based on self-identification, but the highest engagement level is characterised with reference to “helping to lead an EA group or working at an EA-aligned organization”. You can read more about our different measures of engagement and how they cohere here. Crucially, I also presented results specifically for EA org employees and people doing EA work so concerns about the engagement scale specifically do not seem relevant.
The most recent data you have that you mention briefly at the end of your response seems far more relevant in my view. It seems like both of the key results you are taking issue with here (cause prioritization and lukewarm longtermism views) you found yourself to some degree in these results
I respond to these two points below:
Your result 1:
The responses within the Cause Prioritization category which did not explicitly refer to too much focus on AI were focused on insufficient attention being paid to other causes, primarily animals and GHD.
We specifically find the exact same two cause areas, animals and GHD, as being considered the most promising to currently pursue.
I don’t think this tells us much about which causes people think most promising overall. The result you’re referring to looks only at the 22% of respondents who mentioned Cause prioritization as a reason for dissatisfaction with EA and were not among the 16% of people who mentioned excessive focus on x-risk as a cause for dissatisfaction (38 respondents, of which 8 mentioned animals, 4 mentioned Global Poverty and 7 mentioned another cause; the rest mentioned something other than a specific cause area).
Our footnote mentioning this was never intended to indicate which causes are overall judged most promising, just to clarify how our ‘Cause prioritization’ and ‘Excessive focus on AI’ categories differed. (As it happens, I do think our results suggest Global Poverty and Animal Welfare are the highest rated non-x-risk cause areas, but they’re not prioritised more highly than all x-risk causes).
Your result 2 (listed as the first reason for dissatisfaction with the EA community):
Focus on AI risks/x-risks/longtermism: Mainly a subset of the cause prioritization category, consisting of specific references to an overemphasis on AI risk and existential risks as a cause area, as well as longtermist thinking in the EA community.
Our results show that, among people dissatisfied with EA, Cause prioritisation (22%) and Focus on AI risks/x-risks/longtermism (16%) are among the most commonly mentioned reasons.[1] I should also emphasise that ‘Focus on AI risks/x-risks/longtermism’ is not the first reason for dissatisfaction with the EA community, it’s the fifth.
I think both our sets of results show that (at least) a significant minority believe that the community has veered too much in the direction of AI/x-risk/longtermism. But I don’t think that either set of results shows that the community overall is lukewarm on longtermism. I think the situation is better characterised as a division between people who are more supportive of longtermist causes[2] (whose support has been growing) and those who are more supportive of neartermist causes.
I don’t think there is any strong reason to ignore or otherwise dismiss out of hand what we’ve found here—we simply sourced a large and diverse sample of EAs, asked them fairly basic questions about their views on EA-related topics, and reported the results for the community to digest and discuss.)
I certainly agree that my comments here have only addressed one specific set of results to do with cause prioritisation, and that people should assess the other results on their own merits!
As we have emphasised elsewhere, we’re using “longtermist” and “neartermist” as a shorthand, and don’t think that the division is necessarily explained by longtermism per se (e.g. the groupings might be explained by epistemic attitudes towards different kinds of evidence).
I think both our sets of results show that (at least) a significant minority believe that the community has veered too much in the direction of AI/x-risk/longtermism.
Agreed.
But I don’t think that either set of results shows that the community overall is lukewarm on longtermism. I think the situation is better characterised as a division between people who are more supportive of longtermist causes (whose support has been growing) and those who are more supportive of neartermist causes.
It seems like you find the descriptor ‘lukewarm’ to be specifically problematic—I am considering changing the word choice of the ‘headline result’ accordingly given this exchange. (I originally chose to use the word ‘lukewarm’ to reflect the normal-but-slightly-negative skew of the results I’ve highlighted previously. I probably would have used ‘divided’ if our results looked bimodal, but they do not.)
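To make the ‘slightly negatively skewed vs. bimodal’ distinction concrete, here is a small sketch (with invented Likert counts, not our actual data) showing that a unimodal distribution with a slight negative lean and a symmetric bimodal one can be distinguished by checking sample skewness and counting local maxima in the histogram:

```python
# Invented Likert histograms (counts for levels 1-5); NOT the survey's data.
lukewarm = [30, 50, 80, 60, 30]   # one central mode, slight negative lean
divided  = [70, 40, 30, 40, 70]   # two modes at the extremes

def expand(counts):
    """Flatten a count histogram into individual Likert codes 1..5."""
    return [level for level, n in enumerate(counts, start=1) for _ in range(n)]

def skewness(xs):
    """Standardised third central moment (sample skewness)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def modes(counts):
    """Indices of local maxima in the histogram."""
    return [i for i in range(len(counts))
            if (i == 0 or counts[i] > counts[i - 1])
            and (i == len(counts) - 1 or counts[i] > counts[i + 1])]

print(skewness(expand(lukewarm)), modes(lukewarm))  # slightly negative, one mode
print(skewness(expand(divided)), modes(divided))    # ~zero skew, two modes
```

The point is just that skewness alone doesn’t capture the distinction: a symmetric bimodal distribution has zero skew, so mode-counting (or a formal multimodality test) is needed to separate a ‘divided’ shape from a ‘lukewarm’ one.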
What seems clear from this is that the hundreds of actively involved EAs we sampled are not collectively aligned (or ‘divided’ or ‘collectively-lukewarm’ or however you want to describe it) on whether increased attention to longtermist causes represents a positive change in the community—despite systematically mispredicting numerous times that the sample would respond more positively. I will again refer to the relevant result to ensure any readers appreciate how straightforwardly this interpretation follows from the result—
(~35% don’t think positive shift, ~30% do; ~45% don’t think primary focus, ~25% do. ~250 actively involved EAs sampled from across 10+ cause areas.)
This division/lukewarmness/misalignment represents a foundational philosophical disagreement about how to go about doing the most good and seemed pretty important for us to highlight in the write-up. It is also worth emphasizing that we personally care very much about causes like AI risk and would have hoped to see stronger support for longtermism in general—but we did not find this, much to our surprise (and to the surprise of the hundreds of participants who predicted the distributions would look significantly different as can be seen above).
As noted in the post, we definitely think follow-up research is very important for fleshing out all of these findings, and we are very supportive of all of the great work Rethink Priorities has done in this space. Perhaps it would be worthwhile at some point in the future to attempt to collaboratively investigate this specific question to see if we can’t better determine what is driving this pattern of results.
(Also, to be clear, I was not insinuating the engagement scale is invalid—looks completely reasonable to me. Simply pointing out that we are quantifying engagement differently, which may further contribute to explaining why our related but distinct analyses yielded different results.)
Thanks again for your engagement with the post and for providing readers with really interesting context throughout this discussion :)
It seems like you find the descriptor ‘lukewarm’ to be specifically problematic—I am considering changing the word choice of the ‘headline result’ accordingly given this exchange. (I originally chose to use the word ‘lukewarm’ to reflect the normal-but-slightly-negative skew of the results I’ve highlighted previously. I probably would have used ‘divided’ if our results looked bimodal, but they do not.)
I don’t think our disagreement is to do with the word “lukewarm”. I’d be happy for the word “lukewarm” to be replaced with “normal but slightly negative skew” or “roughly neutral, but slightly negative” in our disagreement. I’ll explain where I think the disagreement is below.
Here’s the core statement which I disagreed with:
EAs have lukewarm [normal but slightly-negative skew] views about longtermism
Result: EAs (actively involved across 10+ cause areas) generally seem to think that AI risk and x-risk are less promising cause areas than ones like global health and development and animal welfare
The first point of disagreement concerned this claim:
“EAs (actively involved across 10+ cause areas) generally seem to think that AI risk and x-risk are less promising cause areas than ones like global health and development and animal welfare”
If we take “promising” to mean anything like prioritise / support / believe should receive a larger amount of resources / believe is more impactful etc., then I think this is a straightforward substantive disagreement: I think whatever way we slice ‘active involvement’, we’ll find more actively involved EAs prioritise X-risk more.
As we discussed above, it’s possible that “promising” means something else. But I personally do not have a good sense of in what way actively involved EAs think AI and x-risk are less promising than GHD and animal welfare.[1]
EAs have lukewarm [normal but slightly-negative skew] views about longtermism
Concerning this claim, I think we need to distinguish (as I did above), between:
What do people think of ‘longtermism’? / What do people think about allocations to or prioritisation of longtermist causes?
What do people think of EA’s shift more towards longtermist causes?
Regarding the first of these questions, your second result shows slight disagreement with the claim “I think longtermist causes should be the primary focus in effective altruism”. I agree that a reasonable interpretation of this result, taken in isolation, is that the actively involved EA community is slightly negative regarding longtermism. But taking into account other data, like our cause prioritisation data, which shows actively engaged EAs strongly prioritise x-risk causes, or our result suggesting slight agreement with an abstract statement of longtermism, I’m more sceptical. I wonder if what explains the difference is people’s response to the notion of these causes being the “primary focus”, rather than their attitudes towards longtermist causes per se.[2] If so, these responses need not indicate that the actively involved community leans slightly negative towards longtermism.
In any case, this question largely seems to me to reduce to the question of what people’s actual cause prioritisation is + what their beliefs are about abstract longtermism, discussed above.
Regarding the question of EAs’ attitudes towards the “overall shift towards longtermist causes”, I would also say that, taken in isolation, it’s reasonable to interpret your result as showing that actively involved EAs lean slightly negative towards EA’s shift towards longtermism. Again, our cause prioritisation results, suggesting strong and increasing prioritisation of longtermist causes by more engaged EAs across multiple surveys, give me pause. But the main point I’ll make (which suggests a potential conciliatory way to reconcile these results) is to observe that attitudes towards the “overall shift towards longtermist causes” may not reflect attitudes towards longtermism per se. Perhaps people are Neutral/Agnostic regarding the “overall shift”, despite personally prioritising longtermist causes, because they are agnostic about what people in the rest of the community should do. Or perhaps people think that the shift overall has been mishandled (whatever their cause prioritisation). If so, the results may be interesting regarding EAs’ attitudes towards this “shift”, but not regarding their overall attitudes towards longtermism and longtermist causes.
Thanks again for your work producing these results and responding to these comments!
As I noted, I could imagine “promising” connoting something like new, young, scrappy cause areas (such that an area could be more “promising” even if people support it less than a larger established cause area). I could sort of see this fitting Animal Welfare (though it’s not really a new cause area), but it’s hard for me to see this applying to Global Health/Global Poverty which is a very old, established and large cause area.
For example, people might think EA should not have a “primary focus”, but remain a ‘cause-neutral’ movement (even though they prioritise longtermist causes most strongly and think they should get the most resources). Or people might think we should split resources across causes for some other reason, despite favouring longtermism.
Shift towards longtermism
Lastly, we can consider attitudes towards the “shift” towards longtermism, where your results show no strong leaning one way or the other, with a plurality being Neutral/Agnostic. It’s not clear to me that this represents the community being lukewarm on longtermism, rather than people, whatever their own views about cause prioritization, expressing agnosticism about the community’s shift (people might think “I support longtermist causes, but whether the community should is up to the community”, or some such). One other datapoint I would point to regarding the community’s attitudes towards the shift, however, is our own recent data showing that objection to the community’s cause prioritization and the perception of an excessive focus on AI / x-risk causes are among the most commonly cited reasons for dissatisfaction with the EA community. Thus, I think this reflects a cause for dissatisfaction for a significant portion of the community, even though large portions of the community clearly support strong prioritization of longtermist causes. I think more data about the community as a whole’s views about whether longtermist causes should be prioritized more or less strongly by community infrastructure would be useful and is something we’ll consider adding to future surveys.
Animal Welfare does not perform as a clear ‘neartermist’ cause in our data, when we examine the relationships between causes. It’s about as strongly associated with Biosecurity as Global Poverty, for example.
Thanks for sharing all of this new data—it is very interesting! (Note that in my earlier response I had nothing to go on besides the 2020 result you had already published, which suggested that the plots you included in your first comment were drawn from a far wider sample of EA-affiliated people than we were probing in our survey; I still believe this is true. Correct me if I’m wrong!)
Many of these new results you share here, while extremely interesting in their own right, are still not apples-to-apples comparisons for the same reasons we’ve already touched on.[1]
It is not particularly surprising to me that we are asking people meaningfully different questions and getting meaningfully different results given how generally sensitive respondents in psychological research are to variations in item phrasing. (We can of course go back and forth about which phrasing is better/more actionable/etc, but this is orthogonal to the main question of whether these are reasonable apples-to-apples comparisons.)
The most recent data you mention briefly at the end of your response seems far more relevant in my view. It seems like you yourself found, to some degree, both of the key results you are taking issue with here (cause prioritization and lukewarm longtermism views) in those results (which, it’s also worth noting, were sampled at the same time as our data, rather than 2 or 4 years ago):
Your result 1:
Our result 1:
We specifically find the exact same two cause areas, animals and GHD, as being considered the most promising to currently pursue.
Your result 2 (listed as the first reason for dissatisfaction with the EA community):
Our result 2:
We specifically find that our sample is overall normally distributed with a slight negative skew (~35% disagree, ~30% agree) that EAs’ recent shift towards longtermism is positive.
Having now read your newest report (which I was not aware of before conducting this project), I actually find myself less clear on why you are as surprised as you seem to be by these results, given that they essentially replicate numerous object-level findings you reported only ~2 months ago.
(I want to flag that, for guiding specific action, I would lend more credence to your prioritization results than to our ‘how promising...’ results, given your significantly larger sample size and more precise resource-related questions. But this does not detract from also being able to make valid and action-guiding inferences from both of the results I include in this comment; we think there are many such inferences, as we describe in the body of this post. I don’t think there is any strong reason to ignore or otherwise dismiss out of hand what we’ve found here: we simply sourced a large and diverse sample of EAs, asked them fairly basic questions about their views on EA-related topics, and reported the results for the community to digest and discuss.)
One further question/hunch I have in this regard is that the way we are quantifying high vs. low engagement is almost certainly different (is your sample self-reporting this/do you give them any quantitative criteria for reporting this?), which adds an additional layer of distance between these results.
Thanks Cameron!
We agree that our surveys asked different questions. I’m mostly not interested in assessing which of our questions are the most ‘apples-to-apples’ comparisons, since I’m not interested in critiquing your results per se. Rather, I’m interested in what we should conclude about the object-level questions given our respective results (e.g. is the engaged EA community lukewarm on longtermism, preferring global poverty and animal welfare, or is the community divided on these views, with the most engaged more strongly prioritising longtermism?).
I would just note that in my original response I showed how the results varied across the full range of engagement levels, which I think offers more insight into how the community’s views differ across groups than just looking at one sub-group.
The engagement scale is based on self-identification, but the highest engagement level is characterised with reference to “helping to lead an EA group or working at an EA-aligned organization”. You can read more about our different measures of engagement and how they cohere here. Crucially, I also presented results specifically for EA org employees and people doing EA work so concerns about the engagement scale specifically do not seem relevant.
I respond to these two points below:
I don’t think this tells us much about which causes people think most promising overall. The result you’re referring to is looking only at the 22% of respondents who mentioned Cause prioritization as a reason for dissatisfaction with EA and were not one of the 16% of people who mentioned excessive focus on x-risk as a cause for dissatisfaction (38 respondents, of which 8 mentioned animals, 4 mentioned Global poverty and 7 mentioned another cause (the rest mentioned something other than a specific cause area)).
Our footnote mentioning this was never intended to indicate which causes are overall judged most promising, just to clarify how our ‘Cause prioritization’ and ‘Excessive focus on AI’ categories differed. (As it happens, I do think our results suggest Global Poverty and Animal Welfare are the highest rated non-x-risk cause areas, but they’re not prioritised more highly than all x-risk causes).
Our results show that, among people dissatisfied with EA, Cause prioritisation (22%) and Focus on AI risks/x-risks/longtermism (16%) are among the most commonly mentioned reasons.[1] I should also emphasise that ‘Focus on AI risks/x-risks/longtermism’ is not the first reason for dissatisfaction with the EA community, it’s the fifth.
I think both our sets of results show that (at least) a significant minority believe that the community has veered too much in the direction of AI/x-risk/longtermism. But I don’t think that either set of results shows that the community overall is lukewarm on longtermism. I think the situation is better characterised as a division between people who are more supportive of longtermist causes[2] (whose support has been growing), and those who are more supportive of neartermist causes.
I certainly agree that my comments here have only addressed one specific set of results to do with cause prioritisation, and that people should assess the other results on their own merits!
And, to be clear, these categories are overlapping, so the totals can’t be combined.
As we have emphasised elsewhere, we’re using “longtermist” and “neartermist” as a shorthand, and don’t think that the division is necessarily explained by longtermism per se (e.g. the groupings might be explained by epistemic attitudes towards different kinds of evidence).
Agreed.
It seems like you find the descriptor ‘lukewarm’ to be specifically problematic—I am considering changing the word choice of the ‘headline result’ accordingly given this exchange. (I originally chose to use the word ‘lukewarm’ to reflect the normal-but-slightly-negative skew of the results I’ve highlighted previously. I probably would have used ‘divided’ if our results looked bimodal, but they do not.)
What seems clear from this is that the hundreds of actively involved EAs we sampled are not collectively aligned (or ‘divided’ or ‘collectively-lukewarm’ or however you want to describe it) on whether increased attention to longtermist causes represents a positive change in the community, despite participants systematically mispredicting, numerous times, that the sample would respond more positively. I will again refer to the relevant result to ensure any readers appreciate how straightforwardly this interpretation follows from it:
(~35% don’t think positive shift, ~30% do; ~45% don’t think primary focus, ~25% do. ~250 actively involved EAs sampled from across 10+ cause areas.)
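(To make the arithmetic behind a “normal but slightly negative skew” concrete, here is a minimal sketch. The 5-point breakdown below is a hypothetical placeholder chosen only to be roughly consistent with the ~35% disagree / ~30% agree summary above, not our actual response data.)

```python
# Hypothetical 5-point Likert breakdown (1 = strongly disagree ... 5 = strongly agree).
# These shares are illustrative placeholders, not real survey data: they just
# reproduce a roughly symmetric distribution with ~35% disagreeing (1-2) and
# ~30% agreeing (4-5).
shares = {1: 0.15, 2: 0.20, 3: 0.35, 4: 0.20, 5: 0.10}

# Sanity check: the shares should sum to 1.
assert abs(sum(shares.values()) - 1.0) < 1e-9

# Weighted mean response: slightly below the neutral midpoint of 3.0,
# i.e. a slight lean toward disagreement rather than a bimodal split.
mean = sum(score * share for score, share in shares.items())
print(round(mean, 2))  # prints 2.9
```

The point of the sketch is just that a distribution like this is unimodal with a mean a bit below neutral, which is the pattern the word ‘lukewarm’ was meant to capture.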
This division/lukewarmness/misalignment represents a foundational philosophical disagreement about how to go about doing the most good and seemed pretty important for us to highlight in the write-up. It is also worth emphasizing that we personally care very much about causes like AI risk and would have hoped to see stronger support for longtermism in general—but we did not find this, much to our surprise (and to the surprise of the hundreds of participants who predicted the distributions would look significantly different as can be seen above).
As noted in the post, we definitely think follow-up research is very important for fleshing out all of these findings, and we are very supportive of all of the great work Rethink Priorities has done in this space. Perhaps it would be worthwhile at some point in the future to attempt to collaboratively investigate this specific question to see if we can’t better determine what is driving this pattern of results.
(Also, to be clear, I was not insinuating the engagement scale is invalid—looks completely reasonable to me. Simply pointing out that we are quantifying engagement differently, which may further contribute to explaining why our related but distinct analyses yielded different results.)
Thanks again for your engagement with the post and for providing readers with really interesting context throughout this discussion :)
Thanks again for the detailed reply Cameron!
I don’t think our disagreement is to do with the word “lukewarm”. I’d be happy for the word “lukewarm” to be replaced with “normal but slightly negative skew” or “roughly neutral, but slightly negative” in our disagreement. I’ll explain where I think the disagreement is below.
Here’s the core statement which I disagreed with:
The first point of disagreement concerned this claim:
“EAs (actively involved across 10+ cause areas) generally seem to think that AI risk and x-risk are less promising cause areas than ones like global health and development and animal welfare”
If we take “promising” to mean anything like prioritise / support / believe should receive a larger amount of resources / believe is more impactful etc., then I think this is a straightforward substantive disagreement: I think whatever way we slice ‘active involvement’, we’ll find more actively involved EAs prioritise X-risk more.
As we discussed above, it’s possible that “promising” means something else. But I personally do not have a good sense of in what way actively involved EAs think AI and x-risk are less promising than GHD and animal welfare.[1]
Concerning this claim, I think we need to distinguish (as I did above), between:
What do people think of ‘longtermism’? / What do people think about allocations to or prioritisation of longtermist causes?
What do people think of EA’s shift more towards longtermist causes?
Regarding the first of these questions, your second result shows slight disagreement with the claim “I think longtermist causes should be the primary focus in effective altruism”. I agree that a reasonable interpretation of this result, taken in isolation, is that the actively involved EA community is slightly negative regarding longtermism. But taking into account other data, like our cause prioritisation data, which shows actively engaged EAs strongly prioritising x-risk causes, or our result suggesting slight agreement with an abstract statement of longtermism, I’m more sceptical. I wonder if what explains the difference is people’s response to the notion of these causes being the “primary focus”, rather than their attitudes towards longtermist causes per se.[2] If so, these responses need not indicate that the actively involved community leans slightly negative towards longtermism.
In any case, this question largely seems to me to reduce to the question of what people’s actual cause prioritisation is + what their beliefs are about abstract longtermism, discussed above.
Regarding the question of EA’s attitudes towards the “overall shift towards longtermist causes”, I would also say that, taken in isolation, it’s reasonable to interpret your result as showing that actively involved EAs lean slightly negative towards EA’s shift towards longtermism. Again, our cause prioritisation results, suggesting strong and increasing prioritisation of longtermist causes by more engaged EAs across multiple surveys, give me pause. But the main point I’ll make (which suggests a potential conciliatory way to reconcile these results) is to observe that attitudes towards the “overall shift towards longtermist causes” may not reflect attitudes towards longtermism per se. Perhaps people are Neutral/Agnostic regarding the “overall shift”, despite personally prioritising longtermist causes, because they are agnostic about what people in the rest of the community should do. Or perhaps people think that the shift overall has been mishandled (whatever their cause prioritisation). If so, the results may be interesting regarding EAs’ attitudes towards this “shift”, but not regarding their overall attitudes towards longtermism and longtermist causes.
Thanks again for your work producing these results and responding to these comments!
As I noted, I could imagine “promising” connoting something like new, young, scrappy cause areas (such that an area could be more “promising” even if people support it less than a larger established cause area). I could sort of see this fitting Animal Welfare (though it’s not really a new cause area), but it’s hard for me to see this applying to Global Health/Global Poverty which is a very old, established and large cause area.
For example, people might think EA should not have a “primary focus”, but remain a ‘cause-neutral’ movement (even though they prioritise longtermist causes most strongly and think they should get most resources). Or people might think we should split resources across causes for some other reason, despite favouring longtermism.