Thanks for your stimulating questions and comments Ishaan.
> one might conclude that “climate change” and “global poverty” are more “mainstream” priorities, where “mainstream” is defined as the popular opinion of the populations from which EA is drawing. Would this be a valid conclusion?
This seems a pretty uncontroversial conclusion relative to many of the cause areas we asked about (e.g. AI, Cause Prioritization, Biosecurity, Meta, Other Existential Risk, Rationality, Nuclear Security and Mental Health).
> Do people know of data on what cause prioritization looks like among various non-EA populations who might be defined as more “mainstream”?
We don’t have data on how the general population would prioritize these causes. Indeed, it would be difficult to gather such data, since most non-EAs would not be familiar with what many of these categories refer to.
We can, however, examine donation data from the general population. In the UK we see the following breakdown:
As you can see, the vast majority of donor money is going to causes which don’t even feature in the EA causes list.
> (For example, “College Professors” might be representative of opinions that are both more mainstream and more hegemonic within a certain group)
I imagine that college professors might be quite unrepresentative and counter-mainstream in their own ways. Examining (elite?) university students or recent graduates might be interesting (though unrepresentative) as a comparison, as a group that a large number of EAs are drawn from.
Elite opinion and what representatives from key institutions think seems like a further interesting question, though would likely require different research methods.
> If EA engagement predicts relatively more support for AI relative to climate change and global poverty, I’m sure people have been asking as to whether EA engagement causes this, or if people from some cause areas just engage more for some other reason. Has anyone drawn any conclusions?
I think there are plausibly multiple different mechanisms operating at once, some of which may be mutually reinforcing.
- People shifting in certain directions as they spend more time in EA, become more involved in EA, or become involved in certain parts of EA: there certainly seem to be cases where this happens (people change their views upon exposure to certain EA arguments), and it seems to mostly be in the direction we found (people updating towards Long Term Future causes), so this seems quite plausible.
- Differential dropout/retention across different causes: it seems fairly plausible that people who support causes which receive more official and unofficial sanction and status (LTF) would be more likely to remain in the movement than people who support causes which receive less (and support for which is often implicitly or explicitly presented as indicating that you don’t really get EA). So it’s possible that people who support these other causes drop out in higher numbers than those who support LTF causes. (I know many people who talk about leaving the movement due to this, although none to my knowledge actually have.)
- There could also be third factors which drive both higher EA involvement and higher interest in certain causes (perhaps in interaction with time in the movement or some such). Unfortunately, without individual-level longitudinal data we don’t know exactly how far people are changing their views versus the composition of different groups changing.