If EA engagement predicts more support for AI relative to climate change and global poverty, I’m sure people have been asking whether EA engagement causes this, or whether people from some cause areas just engage more for some other reason. Has anyone drawn any conclusions?
Either way: one _might_ conclude that “climate change” and “global poverty” are more “mainstream” priorities, where “mainstream” is defined as the popular opinion of the populations from which EA is drawing. Would this be a valid conclusion?
Do people know of data on what cause prioritization looks like among various non-EA populations who might be defined as more “mainstream” or more “hegemonic” in some fashion? Bear in mind that “mainstream”/distance from EA is a continuum, and it would be useful to sample multiple points on that continuum. (For example, “college professors” might be representative of opinions that are both more mainstream and more hegemonic within a certain group.)
(I’ll try to come back to this comment and link any relevant data myself if I come across it later)
https://forum.effectivealtruism.org/posts/MDxaD688pATMnjwmB/to-grow-a-healthy-movement-pick-the-low-hanging-fruit
https://www.dropbox.com/s/9ywputy5v0qzu3t/Understanding Effective Givers.pdf?dl=0
This isn’t really what I was looking for, but it’s an “online national sample of Americans” polled on giving to deworming vs. Make-A-Wish and the local choir. I’m hoping to find something more focused on the diversity of causes within EA, and on better-defined, more adjacent populations.
I mentioned college professors above, but I can think of lots of different populations, e.g. “students from specific colleges”, “members of adjacent online forums”, “startup founders”, “Doctors Without Borders people”, “Teach for America people”, or even “non-EA friends and relatives of EAs”, which might be illustrative as points of comparison (some easier to poll than others). Generally, I think the most useful data comes from people who are already somewhat adjacent to EA, who represent key institutions, and whose buy-in would be most practically useful for movement building over decades, which is why I went for “college professors” first.
Thanks for your stimulating questions and comments, Ishaan.
> one might conclude that “climate change” and “global poverty” are more “mainstream” priorities, where “mainstream” is defined as the popular opinion of the populations from which EA is drawing. Would this be a valid conclusion?
This seems like a fairly uncontroversial conclusion relative to many of the cause areas we asked about (e.g. AI, Cause Prioritization, Biosecurity, Meta, Other Existential Risk, Rationality, Nuclear Security, and Mental Health).
> Do people know of data on what cause prioritization looks like among various non-EA populations who might be defined as more “mainstream”
We don’t have data on how the general population would prioritize these causes; indeed, it would be difficult to gather such data, since most non-EAs would not be familiar with what many of these categories refer to.
We can, however, examine donation data from the general population. In the UK, we see the following breakdown:
As you can see, the vast majority of donor money is going to causes which don’t even feature in the EA causes list.
> (For example, “college professors” might be representative of opinions that are both more mainstream and more hegemonic within a certain group.)
I imagine that college professors might be quite unrepresentative and counter-mainstream in their own ways. Examining (elite?) university students or recent graduates might be interesting (though unrepresentative) as a comparison, as a group that a large number of EAs are drawn from.
Elite opinion, and what representatives of key institutions think, seems like a further interesting question, though it would likely require different research methods.
> If EA engagement predicts more support for AI relative to climate change and global poverty, I’m sure people have been asking whether EA engagement causes this, or whether people from some cause areas just engage more for some other reason. Has anyone drawn any conclusions?
I think there are plausibly multiple different mechanisms operating at once, some of which may be mutually reinforcing.
People shifting in certain directions as they spend more time in EA, become more involved in EA, or become involved in certain parts of EA: there certainly seem to be cases where this happens (people change their views upon exposure to certain EA arguments), and it seems mostly to be in the direction we found (people updating in the direction of Long Term Future causes), so this seems quite plausible.
Differential dropout/retention across different causes: it seems fairly plausible that people who support causes which receive more official and unofficial sanction and status (LTF) would be more likely to remain in the movement than people whose causes receive less (and support for which is often implicitly or explicitly presented as indicating that you don’t really “get” EA). So it’s possible that people who support these other causes drop out in higher numbers than those who support LTF causes; the toy simulation below illustrates how this alone could produce the pattern. (I know many people who talk about leaving the movement due to this, although none to my knowledge have.)
There could also be third factors which drive both higher EA involvement and higher interest in certain causes (perhaps in interaction with time in the movement or some such). Unfortunately, without individual-level longitudinal data, we don’t know exactly how far people are changing views versus the composition of the different groups changing.
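To make the differential-retention mechanism concrete, here is a minimal toy simulation in Python. The parameters (cohort size, initial LTF share, retention rates) are entirely made up for illustration and are not estimates from the survey. No individual ever changes their views in this model, yet the LTF share of the remaining cohort rises steadily, mimicking the correlation between time-in-EA and LTF support:

```python
import random

random.seed(0)

# Hypothetical parameters for illustration only; nothing here is
# estimated from the EA Survey data.
N = 10_000           # initial cohort size
P_LTF = 0.30         # assumed initial share prioritizing LTF causes
RETAIN_LTF = 0.95    # assumed annual retention rate for LTF supporters
RETAIN_OTHER = 0.80  # assumed annual retention rate for everyone else

# No individual ever changes their views in this model.
cohort = ["LTF" if random.random() < P_LTF else "other" for _ in range(N)]

for year in range(6):
    ltf_share = cohort.count("LTF") / len(cohort)
    print(f"year {year}: n={len(cohort):>6}, LTF share = {ltf_share:.2f}")
    # Differential dropout: each person stays with a probability that
    # depends only on which cause they support.
    cohort = [c for c in cohort
              if random.random() < (RETAIN_LTF if c == "LTF" else RETAIN_OTHER)]
```

With these made-up retention rates, the LTF share climbs from about 30% to roughly 50% within five years, purely through composition change. This is exactly why individual-level longitudinal data would be needed to distinguish differential dropout from genuine belief change.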