taking the survey results about engagement at face value doesn’t seem right to me
Not sure I understand – how do you think we should interpret them? Edit: Nevermind, now I get it.
Regarding the latter issue, it sounds like we might address it by repeating the same analysis using, say, EA Survey 2016 data? (Some people have updated their views since and we’d miss out on that, so that might be closer to a lower-bound estimate of interest in longtermism.)
Fortunately we have data on this (including data on different engagement levels using EA Forum as a proxy) going back to 2017 (before that the cause question had a multi-select format that doesn’t allow for easy comparison to these results).
If we look at the full sample over time using the same categories, we can see that there's been a tendency towards increased support for long-termist causes overall and a decline in support for Global Poverty (though support for Poverty remains >50% higher than support for AI). The “Other near term” trend goes in the opposite direction, but this is largely because this category combines Climate Change and Mental Health, and we only added Mental Health to the EAS in 2018.
Looking at EA Forum members only (a highly engaged ~20% of the EAS sample), we can see that there's been a slight trend towards more long-termism over time, though this trend is less immediately apparent because, between 2018 and 2019, EAs in this sample seem to have switched between AI and other long-termist causes. But on the whole the EA Forum subset has been more stable in its views (and closer to the LF allocation) over time.
Of course, it is not immediately obvious what we should conclude from this about dropout (or decreasing engagement) among non-longtermists. We do know that many people have been switching into long-termist causes (and especially AI) over time (see below). But it's quite possible that non-longtermists have been dropping out of EA over a longer time frame (pre-2017). That said, I do think that the EA Forum proxy for engagement is probably more robust to these kinds of effects than the self-reported (1-5) engagement level: although people might drop out of Forum membership due to disproportionately longtermist discussion, the Forum still retains at least a measure of cause diversity, whereas facets of the engagement scale (such as EA org employment and EA Global attendance) filter more directly on long-termism. We will address data on people decreasing engagement or dropping out of EA due to perceiving EA as prioritizing certain causes too heavily in a forthcoming EA Survey post.
Both images from the EA Survey 2019: Cause Prioritization
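In case it helps to see concretely what this comparison involves, here is a minimal sketch in pandas of computing cause-support shares by survey year for the full sample and for the EA Forum subset. The toy data and column names ("year", "top_cause", "is_forum_member") are illustrative placeholders, not the actual EAS schema:

```python
import pandas as pd

# Hypothetical long-format survey data: one row per respondent.
# Field names and values are placeholders, not the real EAS fields.
df = pd.DataFrame({
    "year": [2017, 2017, 2018, 2018, 2019, 2019, 2019, 2019],
    "top_cause": ["Global Poverty", "AI", "Global Poverty", "AI",
                  "Global Poverty", "AI", "Other long term", "Other near term"],
    "is_forum_member": [False, True, False, True, False, True, True, False],
})

def cause_shares(frame: pd.DataFrame) -> pd.DataFrame:
    """Share of respondents selecting each top cause, per survey year."""
    counts = frame.groupby(["year", "top_cause"]).size().unstack(fill_value=0)
    return counts.div(counts.sum(axis=1), axis=0)

# Full-sample trend vs. the highly engaged EA Forum subset (~20% of sample).
full_sample_trend = cause_shares(df)
forum_subset_trend = cause_shares(df[df["is_forum_member"]])
print(full_sample_trend)
print(forum_subset_trend)
```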
Yes, that is what I meant. Thank you so much for providing additional analysis!
Thank you for preparing these—very interesting!
I share Denise’s worry.
My basic concern is that Ben is taking the fact that there is high representativeness now to be a good thing, while not seeming so worried about how this higher representativeness came about. The higher representativeness (as Denise points out) could well just be the result of people who aren't enthused by the current leaders' vision simply leaving. The alternative route, where the community changes its mind and follows the leaders, would be better.
Anecdotally, it seems like more of the former has happened (but I'd be happy to be proved wrong). Yet, if one thinks representativeness is good, achieving representativeness by having the people who don't share your vision leave doesn't seem like a good result!
I’m also not sure I know what you mean.
I think the point is that some previously highly engaged EAs may have become less engaged (so dropped out of the 1000 people), or some would-be-engaged people never became engaged, due to the community's strong emphasis on longtermism. So I think it's all the same point, not two separate points.
I think I personally know a lot more EAs who have changed their views to longtermism than EAs who have dropped out of EA due to its longtermist focus. If that’s true of the community as a whole (which I’m not sure about), the main point stands.
This is very much an aside, but I would be really curious how many people you perceive as having changed their views to longtermism would actually agree with this. (According to David’s analysis, it is probably a decent amount.)
E.g. I'm wondering whether I would count in this category. From the outside I might have looked like I changed my views towards longtermism, while from the inside I would describe my views as pretty agnostic, but I prioritised community preferences over my own. There might also be some people who felt they had to appear to hold, or act on, longtermist views so as not to lose access to the community.
Some may also have started off longtermist without that being obvious—I knew I was a total utilitarian and cared about the long run future from ~2009, but didn’t feel like I knew how to act on that until much later. So I guess from the outside my views may look like they changed over the last couple of years in a way they didn’t.
Yeah, I think this is worth taking seriously. (FWIW, I think I had been mostly (though perhaps not completely) aware that you are agnostic.)