EA Survey: Cause Prioritization
Summary
In this post we report findings from the 2022 EA Survey (EAS 2022) and the 2023 Supplemental EA Survey (EAS Supplement 2023)[1], covering:
Ratings of different causes, as included in previous EA Surveys (EAS 2022)
New questions about ideas related to cause prioritization (EAS 2022)
A new question about what share of resources respondents believe should be allocated to each cause (EAS Supplement 2023)
New questions about whether people would have gotten involved in EA, had EA supported different causes (EAS Supplement 2023)
Overall cause prioritization
Global Poverty and AI Risk were the highest-rated causes, closely followed by Biosecurity
Ratings of AI Risk, Biosecurity, Nuclear Security, and Animal Welfare have all increased in recent years
Prioritizing longtermist over neartermist causes was predicted by higher levels of engagement, male gender, white ethnicity, and younger age (when accounting for the influence of these and several other variables in the same model).
63% of respondents gave their highest rating to a longtermist cause, while 47% gave a neartermist cause their highest rating (the total exceeds 100% because 26% of respondents gave their highest ratings to both a longtermist cause and a neartermist cause). Splitting these out, we see 38% gave only a longtermist cause their highest rating, 21% a neartermist cause only, 26% both, and 16% neither a near- nor longtermist cause (e.g., Animal Welfare). This suggests that although almost twice as many respondents prioritize a longtermist cause as prioritize a neartermist cause, there is considerable overlap.
Philosophical ideas related to cause prioritization
“I’m comfortable supporting low-probability, high-impact interventions”: 66% of respondents agreed, 19% disagreed
“It is more than 10% likely that ants have valenced experience (e.g., pain)”: 56% of respondents agreed, 17% disagreed
“The impact of our actions on the very long-term future is the most important consideration when it comes to doing good”: 46% of respondents agreed, 37% disagreed
“I endorse being roughly risk-neutral, even if it increases the odds that I have no impact at all”: 37% of respondents agreed, 41% disagreed
“Most expected value in the future comes from digital minds’ experiences, or the experiences of other nonbiological entities”: 27% of respondents agreed, 43% disagreed
“The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views”: 23% of respondents agreed, 46% disagreed
The following statements were quite strongly associated with each other and predicted support for longtermist causes over neartermist ones quite well: “Long-term future”, “Low probability high impact”, “Risk neutral”, and “Digital minds”
Allocation of resources to causes
On average, respondents allocated the following shares to each cause: Global Health and Development (29.7%), AI Risks (23.2%), Farmed Animal Welfare (FAW) (18.7%), Other x-risks (16.0%), Wild Animal Welfare (5.1%), and Other causes (7.3%). Results did not dramatically differ looking only at highly engaged respondents.
Compared to actual allocations as estimated in 2019, the average survey allocations are lower for GHD and higher for AI / x-risk overall and for animal welfare; compared to the 2019 Leaders’ Forum allocations, the survey assigns larger shares to GHD and FAW and smaller shares to AI and x-risk.
Causes and getting involved in EA
Respondents were strongly inclined to report that “the key philosophical ideas and principles of EA in the abstract” were more important than “Specific causes that EA focuses on”
However, results were much more mixed when asked whether they would still have gotten involved in EA if the community was not significantly engaged with the causes they were most interested in at the time. The most common single response was that this was “probable” (27.0%), and a majority (57.2%) of respondents rated it at least probable that they would have gotten involved, but almost a third (31.5%) rated it more likely that they would not have gotten involved.
Overall cause prioritization
As in previous survey years, respondents were asked to indicate the extent to which a range of different causes should be prioritized by EAs[2]. These results represent the responses of approximately 3000 respondents, with the exact number varying between 2964 and 3143, as not everyone provided ratings for all causes. The first two figures below show the prioritization ratings expressed as means, as well as the percentage of people selecting different levels of prioritization for each cause.
When looking at both the means and the overall distribution of ratings, Global poverty, AI risk, and Biosecurity tend to stand apart from other items as the most highly rated. This interpretation is supported by more formal comparisons among cause ratings presented in the appendix.
As in previous years, Biosecurity stands out as being a near-top priority for fully half of the respondents, despite being much less frequently selected as a top priority than Global Poverty or AI. Conversely, while Climate change scores relatively low on average, it has a high percentage of people picking it as the top priority relative to its average position.
Although there is considerable variability in precisely how much people think different causes should be prioritized, all causes had at least 50% of respondents indicating they should receive at least significant resources.
How many EAs prioritize longtermist, neartermist, and other causes
We also conducted some alternative analyses to assess whether more respondents support longtermist or neartermist causes[3]. For these analyses, we examined whether respondents gave their highest rating to a longtermist cause, a neartermist cause, both, or neither[4]. This new analysis provides a check on the earlier analyses and offers a more intuitive sense of how many people most prioritize one or the other type of cause.
Overall, 63.6% of respondents gave a longtermist cause their highest rating, while 46.8% of respondents gave a neartermist cause their highest rating, and 15.6% gave their highest rating to an Other cause. Notably, these percentages sum to more than 100% because some respondents assigned their highest rating to both a longtermist cause and a neartermist cause.
Splitting these respondents out separately, we see that 37.6% of respondents gave only a longtermist cause their highest rating, compared to 21% who gave only a neartermist cause their highest rating.
In line with our other analyses, support for longtermist and neartermist causes diverged strongly between low engagement and high engagement respondents. Among the highly engaged, almost half of respondents (47%) most prioritized only a longtermist cause and only 13% most prioritized a neartermist cause. In contrast, the less engaged were more evenly split, but favored neartermist causes (31% neartermist vs 26% longtermist).
Trends in cause prioritization over time
Since questions about cause prioritization were first asked in the 2015 EA Survey, the causes most highly rated by respondents have changed. It can be seen that Global poverty and health used to be rated even more highly than it is at present, though it remains highly endorsed. The importance attributed to several longtermist/existential risk cause areas has tended to increase over time. It is interesting to note that both Biosecurity and Nuclear security see quite large upward inflections from 2020 to 2022, which may be due to events such as the war in Ukraine and the COVID-19 pandemic raising the visibility of these cause areas. However, several other cause areas also show increases from 2020 to 2022, and AI risk in particular shows a steady increase between 2019 and 2022, so we cannot be sure that the observed increases are specifically related to such events. These changes could also reflect broader shifts in the EA ecosystem toward promoting existential risks as a general category.
Animal welfare has risen substantially as a priority since 2015. In contrast to the shifts seen for other cause areas, Mental health and Climate change are rated similarly to how they were when first introduced as cause areas in the 2018 survey.
The relative prioritization of different causes, and their changes over time, may also relate to shifts in the composition of EA Survey respondents over time. We therefore also assessed changes over time, broken down by level of engagement, which are shown in the appendix.
Engagement and cause prioritization
Cause prioritization for some causes differed substantially between high vs. low engagement respondents[5]. The most striking differences were in ratings of Global poverty, AI risk, Climate change, and EA movement building. Although they still gave high priority to Global poverty, high engagement respondents rated it lower than those with low engagement. Relative to highly engaged respondents, low engagement respondents gave much higher priority to Climate change. In contrast, highly engaged respondents gave higher ratings to AI risk—which was the top priority on average in this group—as well as higher ratings to EA movement building.
Factors linked with prioritization of different causes
To assess factors that might explain or relate to prioritization of different causes, we conducted multiple regression analyses using predictors of interest that have been included in previous years and often found to be associated with EA-related outcomes, or that are of broad interest for understanding the EA community, namely: respondent age, year involved in EA, identifying as white, identifying as a man, being a student, having high engagement, and residing in the USA[6].
We’ve selected a range of informative prioritization-related outcomes for these multiple regression models.[7]
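To make the modeling approach concrete, here is a minimal sketch of one such regression, assuming a hypothetical pandas DataFrame `survey_df` with illustrative column names; the post does not specify the software or exact code used.

```python
# Illustrative sketch only, not the authors' analysis code. All column names
# (age_2sd, year_2sd, white, man, student, high_engagement, usa, lt_minus_nt)
# are hypothetical stand-ins for the survey variables described above.
import statsmodels.formula.api as smf

def fit_cause_model(df, outcome):
    # age_2sd and year_2sd are assumed pre-standardized by mean-centering and
    # dividing by two standard deviations (see the scaling note in the appendix)
    formula = (f"{outcome} ~ age_2sd + year_2sd + white + man + student"
               " + high_engagement + usa")
    return smf.ols(formula, data=df).fit()

# e.g., fit_cause_model(survey_df, "lt_minus_nt").summary()
```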
Below, we highlight a number of findings:
High engagement tended to have the largest associations with the different cause prioritization ratings, being associated with substantially higher Longtermist (LT) versus Neartermist (NT) scores, higher ratings for meta-level priorities, and greater prioritization for Animal welfare. It was also strongly negatively associated with Global poverty / health.
Higher age was associated with greater concern for the two ‘neartermist’ causes, lower LT scores, and therefore substantially lower LT minus NT ratings.
Getting involved in EA in more recent years was associated with generally giving higher ratings across the cause areas/scores, with the exception of Animal welfare.
White ethnicity was associated with slightly lower preference for the ‘neartermist’ causes and a slightly higher LT versus NT preference.
Identifying as a man was also associated with a higher LT versus NT preference, lower prioritization for the two specific ‘neartermist’ causes, and lower prioritization for Animal welfare.
Being a student showed only minor effects across the different cause areas.
Finally, being based in the US seemed to have quite a strong negative association with concern over the Meta cause areas, and with Mental health, as well as smaller negative associations with Animal welfare and LT scores.
EA-related ideas and attitudes
Respondents to the 2022 EA Survey were, for the first time, also asked to express their agreement or disagreement with some different ideas and epistemic stances[8]. These items were:
The impact of our actions on the very long-term future is the most important consideration when it comes to doing good. [“Long-term future”]
I’m comfortable supporting low-probability, high-impact interventions. [“Low-probability high impact”]
I endorse being roughly risk-neutral, even if it increases the odds that I have no impact at all. [“Risk neutral”]
The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views. [“Defer to experts”]
I have learned a lot from other EAs who prioritize different broad cause areas than I do. [“Learned from others”]
Most expected value in the future comes from digital minds’ experiences, or the experiences of other nonbiological entities. [“Digital minds”]
It is more than 10% likely that ants have valenced experience (e.g., pain). [Ants sentient (10%)]
It is more than 50% likely that ants have valenced experience (e.g., pain). [Ants sentient (50%)]
Response options were on a 5-point Likert scale, from Strongly Disagree to Strongly Agree.
Some particularly notable findings are:
Participants tend to disagree with being “roughly risk neutral”
Just under half of participants endorse an abstract statement of longtermism, but the distribution is bimodal[9]
Over half of participants assign more than 10% probability to ant sentience, while 32% assign more than 50% probability
Relationships between ideas and causes
We examined relationships between these ideas (shown in the appendix) and found that four items—“Long-term future”, “Low probability high impact”, “Risk neutral”, and “Digital minds”—showed quite strong positive relationships with one another.
These four items tended to have negative associations with Global poverty and Climate change, and positive associations with AI risk and Other longtermism. Notably, there was much less of an association between these ideas and support for Biosecurity and Nuclear security.
Animal welfare prioritization was most reliably linked with the two ant sentience items, with which it was positively correlated. Exact pairwise correlations are presented in the appendix.
The four items noted above were also positively associated with support for longtermist over neartermist causes[10]. The correlation between the mean of these items and the LT vs. NT score was .55, representing an explained variance of 30%.

We also examined predictors of agreement with these ideas, as we did for support for different causes. Full results are shown in the appendix. Some notable findings are that:
Being ‘risk neutral’, endorsing ‘low probability, high-impact interventions’, and believing the ‘long-term future’ was the most important consideration were positively associated with identifying as a man and with higher engagement
Agreement that ants were more than 10% likely (and more than 50% likely) to be sentient was associated negatively with identifying as a man and with more years spent in EA.
Allocation of resources across causes
Between December 2023 and January 2024, a supplemental survey was fielded to members of the EA community. This survey included some additional questions of relevance for cause prioritization. Approximately 600 people responded, although the percentage who provided responses to different questions varied greatly.
Resource allocation to different causes
Respondents were asked to indicate the proportion of resources they would allocate to each of several cause areas: Global health/poverty/development (GHD), farmed animal welfare (FAW), wild animal welfare (WAW), AI risk, Existential risk other than AI, and ‘Other’ causes.
Overall, GHD received on average the greatest share of resource allocation, followed by AI risk, then FAW, then X-risk. WAW and ‘Other’ causes received substantially lower percentages of resource allocation.
This overall pattern was the same when breaking respondents down into high relative to low engagement, but higher engagement respondents tended to give proportionally greater shares of resources to AI risk and FAW than did lower engagement respondents, with correspondingly lower shares given to GHD.
Below, we compare the mean[11] allocations from our survey respondents to actual allocations (as estimated by Ben Todd) and Leaders’ ideal allocations (as estimated by the 2019 Leaders’ Forum)[12].
The results suggest that, compared to actual allocations, our survey’s allocations assign fewer resources to GHD and more resources to AI and Farmed Animal Welfare. However, compared to the Leaders’ Forum allocations, our survey assigns more resources to GHD, less to AI, and more to FAW. Moreover, both our results and the Leaders’ Forum survey appear to assign dramatically more resources to WAW than actual allocations do.
Of course, it’s worth noting that both these survey results may reflect certain idealisations on the part of respondents, and so not reflect recommendations taking into account various practicalities (e.g. the effects of rapidly shifting funding allocations).
Causes and getting involved with EA
In addition to allocating resources to different cause areas, respondents were also asked whether the specific causes on which EA focuses or the philosophical ideas that underpin EA were more influential in getting them involved in EA. The majority of respondents indicated that the ideas that underpin EA were more influential for them.
However, respondents were also asked to consider whether they would have gotten involved with EA if the EA community had not been significantly engaged with the causes they were most interested in at the time they became involved. A slight majority of respondents indicated that they likely would have, whereas approximately 30% indicated they likely would not have become involved with EA.

Acknowledgments
This post was written by Jamie Elsey and David Moss. We would like to thank Willem Sleegers for help with some analyses and for review, and Marie Buhl, Erich Grunewald, Bob Fischer, Oscar Delaney, and Sarah Negris-Mamani for review of and suggestions to the final draft of this post.
Appendix
Probability of superiority
To compare the relative prioritization of different items in a way that respects the ordinal nature of the ratings, we can calculate the within-subjects Probability of Superiority (PSup). PSup indicates the proportion of times one item ‘A’ would be rated more highly than a comparison item ‘B’ in a forced-choice comparison between them: for each respondent, we assess whether A was rated higher than, the same as, or lower than B, scoring these cases as 1, 0.5, and 0 respectively. Averaging these scores across all respondents yields a probability, from 0 to 1, of A being rated above B, where 0 indicates that nobody rates A > B, .5 indicates that A and B are essentially equally rated, and 1 indicates that everybody rates A > B.
Using this metric, we can see that Global poverty and health, AI risk, and Biosecurity are expected to be rated as roughly equivalent to one another, with PSup ratings hovering around .5. However, many comparisons have moderate or large effect sizes, with a 2:1 ratio of people favoring one cause over another (PSup ~.67, roughly equivalent to a Cohen’s d of .62, e.g., Biosecurity vs. Climate change, Biosecurity vs. Nuclear security), and some even with a 3:1 ratio (PSup ~.75, roughly equivalent to a Cohen’s d of .95, e.g., Global poverty or Biosecurity vs. Mental health).
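For concreteness, the sketch below shows the PSup calculation and the standard normal-theory (“common language effect size”) relation PSup = Φ(d/√2), which reproduces the approximate Cohen’s d values quoted above; the code is illustrative, not the authors’ analysis script.

```python
# Illustrative sketch. `a` and `b` hold each respondent's ratings of two
# causes, aligned by respondent (within-subjects comparison).
import numpy as np
from scipy.stats import norm

def psup(a: np.ndarray, b: np.ndarray) -> float:
    """Proportion of respondents rating A above B, counting ties as 0.5."""
    return float(np.mean(np.where(a > b, 1.0, np.where(a == b, 0.5, 0.0))))

def psup_to_d(p: float) -> float:
    """Invert PSup = Phi(d / sqrt(2)) to recover an approximate Cohen's d."""
    return float(np.sqrt(2) * norm.ppf(p))

print(round(psup_to_d(0.67), 2))  # ~0.62, matching the value quoted above
print(round(psup_to_d(0.75), 2))  # ~0.95, matching the value quoted above
```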
Probability of superiority in high and low engagement respondents
Comparisons among the cause ratings within these different engagement levels, using the Probability of Superiority, are presented below. Most notable in these comparisons is that, among high engagement respondents, Climate change performed especially poorly, whereas among low engagement respondents, Climate change was rated reliably higher than most other causes, with the exception of Global poverty. Among high engagement respondents, both AI risk and Biosecurity were given higher priority than Global poverty, and EA movement building ranked comparably to several specific cause areas, including Animal welfare, Nuclear security, and X-risks (besides bio/nuclear/AI), whereas it was rated lower than most other causes among low engagement respondents.
Predictors across causes
Relationships between ideas
As would be expected, the two ant sentience items were highly positively correlated with one another. Four items—“Long-term future”, “Low probability high impact”, “Risk neutral”, and “Digital minds”—also showed quite strong positive relationships with one another, ranging from .21 to .47. A principal component analysis (PCA) on these items suggested that the items “Long-term future”, “Low probability high impact”, “Risk neutral”, and “Digital minds” tended to cohere together as a component, each with loadings of .5 or more.
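A minimal sketch of such an analysis follows, using synthetic data in place of the survey responses; the loading convention here (component weights scaled by the square root of the explained variance) is a common one, though the post does not state the exact convention used.

```python
# Synthetic-data sketch of a PCA on standardized Likert items.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(500, 4)).astype(float)  # stand-in 1-5 ratings

X = StandardScaler().fit_transform(items)
pca = PCA(n_components=2).fit(X)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(loadings[:, 0])  # each item's loading on the first component
```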
Pairwise associations between EA-related ideas and cause prioritization
Predictors of ideas
Similarly to our multiple regression analyses of cause prioritization, we entered the EA-related ideas into multiple regression models including as predictors: respondent age, year involved in EA, identifying as white, identifying as a man, being a student, having high engagement, and residing in the USA.
Student status, being based in the USA, and identifying as white all had relatively few and small associations with EA-related ideas. In contrast, high engagement showed several relatively large positive associations with “Low probability high impact”, “Risk neutral”, “Long-term future”, “Digital minds”, and “Learned from others”, along with a negative association with deference to experts. Identifying as a man was associated with greater agreement with “Low probability high impact”, “Risk neutral”, “Long-term future”, and “Digital minds”, whereas older age was associated with lower agreement with these ideas. Identifying as a man was also negatively associated with attributing sentience to ants, whereas having become involved in the EA community in more recent years was associated with a greater tendency to endorse ant sentience. Both identifying as a man and older age were associated with being less likely to agree with “Learned from others”.
Ratings across time broken down by engagement
We first added the question about engagement in the 2019 EA Survey and so can show these breakdowns for EAS 2019, 2020 and 2022. For simplicity, we group respondents as Low engagement (‘None’, ‘Mild’, and ‘Moderate’, ~45% of respondents) and High engagement (‘Considerable’ and ‘High’, ~55% of respondents). Further breakdowns of cause prioritization depending on engagement and other variables are presented below.
Logistic regression on top cause allocation selections
In a logistic regression model, we found that roughly half of respondents would be expected to give their highest (or equal highest) share of resources to GHD, in contrast to 37% for AI risk. FAW and X-risks were expected to be allocated a top share of resources by approximately 20% and 15% of respondents respectively, with a minority of 2-3% doing so for WAW.
When assessing those causes to which respondents were expected to allocate no resources, based on logistic regression modeling we expect approximately 28% of respondents to allocate no resources to WAW. Other than WAW, cause areas were generally expected to receive no resource allocation by at most around 10% of people.
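A sketch of how the binary outcomes behind such models can be constructed is below. The post does not give the exact model specification, so the intercept-only logit here (whose predicted mean simply equals the sample proportion) is an assumption, and the column names are hypothetical.

```python
# Hypothetical sketch, not the authors' model. With covariates added to the
# formula, the model would yield adjusted rather than raw expectations.
import pandas as pd
import statsmodels.formula.api as smf

CAUSES = ["ghd", "ai", "faw", "xrisk", "waw", "other"]  # hypothetical columns

def expected_top_share(df: pd.DataFrame, cause: str) -> float:
    alloc = df[CAUSES]
    is_top = (alloc[cause] == alloc.max(axis=1)).astype(int)  # ties count too
    fit = smf.logit("is_top ~ 1", data=df.assign(is_top=is_top)).fit(disp=0)
    return float(fit.predict().mean())

# Analogously, (alloc[cause] == 0) gives the "no resources" outcome.
```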
The EAS 2022 was conducted from November to December 2022. The EAS Supplement 2023 was conducted from December 2023 to January 2024.
Respondents were asked “Please give a rough indication of how much you think each of these causes should be prioritized by EAs.” The full rating options were: (1) I do not think any resources should be devoted to this cause, (2) I do not think this is a priority, but should receive some resources, (3) This cause deserves significant resources, but less than the top priorities, (4) This cause should be a near-top priority, (5) This cause should be the top priority. In 2015 and 2017, the wording of some options was slightly different but conveyed the same meaning.
In previous years and in analyses here, we calculated a Longtermism-minus-Neartermism score to assess prioritization of longtermist causes over neartermist causes. The mean scores offer a better sense of the degree to which respondents generally prioritize longtermist causes over neartermist causes (or prioritize each separately). However, as this is a continuous scale, it does not give a straightforward answer as to which respondents should be counted as longtermist or neartermist overall.
Causes classified as longtermist were Biosecurity, Nuclear risk, AI risk, X-risk other and Other longtermist. Causes classified as neartermist were Mental health, Global poverty and Neartermist other. Causes classified as Other were Animal Welfare, Cause Prioritization, EA movement building and Climate change.
As we have noted previously, people may of course prioritise these causes for reasons other than longtermism or neartermism per se. Likewise, people might support the ‘Other’ causes here for neartermist or longtermist reasons. The classifications we used here were informed by our prior analyses of the factor structure of the cause prioritisation items and by a priori theoretical considerations (namely, Climate change is clearly associated with the neartermist causes rather than the longtermist ones, but theoretically some might think of it as a longtermist cause, so we count it as ‘Other’ in this analysis). Prior analyses we have conducted on the full response scales suggest that support for ‘Meta’ causes (Cause prioritisation and movement building) tends to be strongly correlated with support for longtermist causes.
A respondent’s “highest rating” was the highest rating that they gave to any cause, whether or not it was the absolute highest rating on the response scale (e.g., a respondent who assigned a cause 4/5, but assigned no cause a 5, would be counted as assigning that cause their highest rating, since they most prioritized that cause).
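A minimal sketch of this classification rule, with hypothetical cause column names following the longtermist/neartermist groupings listed in the earlier note:

```python
# Sketch only; the rating column names are hypothetical stand-ins.
import pandas as pd

LT = ["biosecurity", "nuclear", "ai_risk", "xrisk_other", "lt_other"]
NT = ["mental_health", "global_poverty", "nt_other"]

def classify(ratings: pd.Series) -> str:
    top = ratings.max()  # the respondent's own highest rating, e.g. 4/5
    lt = any(ratings[c] == top for c in LT)
    nt = any(ratings[c] == top for c in NT)
    if lt and nt:
        return "both"
    return "longtermist" if lt else ("neartermist" if nt else "other")
```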
As in previous posts, we used a binary classification where respondents indicating ‘None’, ‘Mild’, and ‘Moderate’, (~45% of respondents) were classified as ‘Low engagement’ and respondents indicating ‘Considerable’ and ‘High’, (~55% of respondents) were classified as ‘High engagement’.
Age and Year involved in EA were standardized by mean-centering and dividing by two standard deviations, as recommended by Andrew Gelman (Gelman, A. (2008). Scaling regression inputs by dividing by two standard deviations. Statistics in Medicine, 27(15), 2865-2873).
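Concretely, each such predictor $x$ was rescaled as $x' = (x - \bar{x}) / (2 s_x)$, where $\bar{x}$ is the sample mean and $s_x$ the sample standard deviation; this puts continuous predictors on a scale roughly comparable to binary (0/1) predictors.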
The ‘LT score’ reflects the average of several longtermism-related causes (AI risk, Biosecurity, Nuclear security, and other X-risks), whereas the ‘NT score’ reflects the average of two typically ‘neartermist’ causes (Global poverty / health and Mental health). ‘LT minus NT’ reflects the LT score minus the NT score, such that higher values indicate a relatively greater preference for the above LT vs. NT cause areas, on average. However, because we have found in exploratory work that Global poverty / health may have quite different associations with various correlates than does Mental health, we also plot these two outcomes independently of one another. The ‘Meta score’ reflects an average of Cause prioritization and EA movement building. Finally, we include Animal welfare, as concern for animals would otherwise go unrepresented in these regression analyses. The plot below highlights the predictors for each cause area, and in the appendix we show how the predictors tended to perform across different priorities.
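A minimal sketch of these composite scores, with hypothetical rating column names standing in for the survey items:

```python
# Sketch only; column names are illustrative.
import pandas as pd

def add_scores(df: pd.DataFrame) -> pd.DataFrame:
    lt = df[["ai_risk", "biosecurity", "nuclear", "xrisk_other"]].mean(axis=1)
    nt = df[["global_poverty", "mental_health"]].mean(axis=1)
    meta = df[["cause_prioritization", "movement_building"]].mean(axis=1)
    return df.assign(lt_score=lt, nt_score=nt, lt_minus_nt=lt - nt,
                     meta_score=meta)
```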
The two items related to ant sentience were purely exploratory items requested by an organization with interests in animal welfare, whereas the remaining six items are more strictly the ‘EA-related ideas/attitudes’ that were intended to generate more insight into people’s cause prioritization preferences, and their general epistemological stances.
i.e., there are two separate ‘peaks’, with more people somewhat agreeing and somewhat disagreeing with the statement than there are people in the middle of the response scale, in contrast to a normal distribution
This score was the average of support for AI, Nuclear, Biosecurity, and other X-risk minus the average of support for Global poverty and Mental health.
Median results were only minimally different and do not change the pattern of results overall.
More up-to-date estimates of allocations are available from Tyler Maule, but the categories line up less well with the survey categories.