I think both our sets of results show that (at least) a significant minority believe that the community has veered too much in the direction of AI/x-risk/longtermism.
Agreed.
But I don’t think that either set of results shows that the community overall is lukewarm on longtermism. I think the situation is better characterised as a division between people who are more supportive of longtermist causes (whose support has been growing) and those who are more supportive of neartermist causes.
It seems like you find the descriptor ‘lukewarm’ specifically problematic, so I am considering changing the wording of the ‘headline result’ accordingly given this exchange. (I originally chose the word ‘lukewarm’ to reflect the normal-but-slightly-negative skew of the results I’ve highlighted previously. I probably would have used ‘divided’ if our results looked bimodal, but they do not.)
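(To make that distinction concrete, here is a minimal sketch using purely hypothetical 5-point Likert counts, not our actual response data, of how a ‘lukewarm’ unimodal distribution with a slight negative lean differs from a ‘divided’ bimodal one.)

```python
# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree).
# The counts below are illustrative only; they are not the survey's actual data.
import numpy as np
from scipy.stats import skew

# 'Lukewarm': unimodal around the midpoint, leaning slightly negative (mean just below 3).
lukewarm = np.repeat([1, 2, 3, 4, 5], [25, 65, 90, 50, 20])

# 'Divided': bimodal, with most responses piled at the two extremes.
divided = np.repeat([1, 2, 3, 4, 5], [80, 30, 20, 30, 90])

for label, sample in [("lukewarm", lukewarm), ("divided", divided)]:
    counts = np.bincount(sample, minlength=6)[1:]
    print(f"{label}: counts={counts.tolist()}, mean={sample.mean():.2f}, skew={skew(sample):.2f}")
```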
What seems clear from this is that the hundreds of actively involved EAs we sampled are not collectively aligned (or ‘divided’ or ‘collectively-lukewarm’ or however you want to describe it) on whether increased attention to longtermist causes represents a positive change in the community, despite systematically mispredicting numerous times that the sample would respond more positively. I will again refer to the relevant result to ensure any readers appreciate how straightforwardly this interpretation follows from the result:
(~35% don’t think the shift is positive while ~30% do; ~45% don’t think longtermist causes should be the primary focus while ~25% do. ~250 actively involved EAs were sampled from across 10+ cause areas.)
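(For readers who prefer headcounts, here is a quick back-of-the-envelope sketch converting those rounded percentages into approximate respondent numbers, treating the sample as exactly 250 people.)

```python
# Approximate respondent counts implied by the rounded percentages above.
# Illustrative arithmetic only: the published figures are approximate, and the
# sample size is treated as exactly 250 for simplicity.
n = 250
items = {
    "positive shift": (0.35, 0.30),  # (don't think so, do think so)
    "primary focus":  (0.45, 0.25),
}
for item, (no, yes) in items.items():
    neutral = 1 - no - yes
    print(f"{item}: ~{no * n:.0f} no, ~{yes * n:.0f} yes, ~{neutral * n:.0f} neutral/agnostic")
```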
This division/lukewarmness/misalignment represents a foundational philosophical disagreement about how to go about doing the most good, and it seemed important for us to highlight in the write-up. It is also worth emphasizing that we personally care very much about causes like AI risk and would have hoped to see stronger support for longtermism in general. We did not find this, much to our surprise (and to the surprise of the hundreds of participants who predicted the distributions would look significantly different, as can be seen above).
As noted in the post, we definitely think follow-up research is important for fleshing out all of these findings, and we are very supportive of the great work Rethink Priorities has done in this space. Perhaps it would be worthwhile at some point to collaborate on investigating this specific question and better determine what is driving this pattern of results.
(Also, to be clear, I was not insinuating that the engagement scale is invalid; it looks completely reasonable to me. I was simply pointing out that we are quantifying engagement differently, which may further help explain why our related but distinct analyses yielded different results.)
Thanks again for your engagement with the post and for providing readers with really interesting context throughout this discussion :)
Thanks again for the detailed reply Cameron!
It seems like you find the descriptor ‘lukewarm’ specifically problematic, so I am considering changing the wording of the ‘headline result’ accordingly given this exchange. (I originally chose the word ‘lukewarm’ to reflect the normal-but-slightly-negative skew of the results I’ve highlighted previously. I probably would have used ‘divided’ if our results looked bimodal, but they do not.)
I don’t think our disagreement is to do with the word “lukewarm”. I’d be happy for the word “lukewarm” to be replaced with “normal but slightly negative skew” or “roughly neutral, but slightly negative” in our disagreement. I’ll explain where I think the disagreement is below.
Here’s the core statement which I disagreed with:
EAs have lukewarm [normal but slightly-negative skew] views about longtermism
Result: EAs (actively involved across 10+ cause areas) generally seem to think that AI risk and x-risk are less promising cause areas than ones like global health and development and animal welfare
The first point of disagreement concerned this claim:
“EAs (actively involved across 10+ cause areas) generally seem to think that AI risk and x-risk are less promising cause areas than ones like global health and development and animal welfare”
If we take “promising” to mean anything like prioritise / support / believe should receive a larger amount of resources / believe is more impactful, etc., then I think this is a straightforward substantive disagreement: I think that, however we slice ‘active involvement’, we’ll find that more actively involved EAs prioritise x-risk more.
As we discussed above, it’s possible that “promising” means something else. But I personally do not have a good sense of in what way actively involved EAs think AI and x-risk are less promising than GHD and animal welfare.[1]
EAs have lukewarm [normal but slightly-negative skew] views about longtermism
Concerning this claim, I think we need to distinguish (as I did above) between:
What do people think of ‘longtermism’? / What do people think about allocations to or prioritisation of longtermist causes?
What do people think of EA’s shift more towards longtermist causes?
Regarding the first of these questions, your second result shows slight disagreement with the claim “I think longtermist causes should be the primary focus in effective altruism”. I agree that a reasonable interpretation of this result, taken in isolation, is that the actively involved EA community is slightly negative regarding longtermism. But taking into account other data, like our cause prioritisation data showing that actively engaged EAs strongly prioritise x-risk causes, or our result suggesting slight agreement with an abstract statement of longtermism, I’m more sceptical. I wonder if what explains the difference is people’s response to the notion of these causes being the “primary focus”, rather than their attitudes towards longtermist causes per se.[2] If so, these responses need not indicate that the actively involved community leans slightly negative towards longtermism.
In any case, this question largely seems to me to reduce to the question of what people’s actual cause prioritisation is + what their beliefs are about abstract longtermism, discussed above.
Regarding the question of EAs’ attitudes towards the “overall shift towards longtermist causes”, I would also say that, taken in isolation, it’s reasonable to interpret your result as showing that actively involved EAs lean slightly negative towards EA’s shift towards longtermism. Again, our cause prioritisation results, suggesting strong and increasing prioritisation of longtermist causes by more engaged EAs across multiple surveys, give me pause. But the main point I’ll make (which suggests a potential conciliatory way to reconcile these results) is to observe that attitudes towards the “overall shift towards longtermist causes” may not reflect attitudes towards longtermism per se. Perhaps people are Neutral/Agnostic regarding the “overall shift”, despite personally prioritising longtermist causes, because they are Agnostic about what people in the rest of the community should do. Or perhaps people think that the shift overall has been mishandled (whatever their cause prioritisation). If so, the results may be interesting regarding EAs’ attitudes towards this “shift”, but not regarding their overall attitudes towards longtermism and longtermist causes.
Thanks again for your work producing these results and responding to these comments!
[1] As I noted, I could imagine “promising” connoting something like new, young, scrappy cause areas (such that an area could be more “promising” even if people support it less than a larger, established cause area). I could sort of see this fitting Animal Welfare (though it’s not really a new cause area), but it’s hard for me to see it applying to Global Health/Global Poverty, which is a very old, established and large cause area.
[2] For example, people might think EA should not have a “primary focus”, but should remain a ‘cause-neutral’ movement (even though they prioritise longtermist causes most strongly and think they should get most resources). Or people might think we should split resources across causes for some other reason, despite favouring longtermism.