Smaller percentages reported other changes such as ceasing to engage with online EA spaces (6.8%), permanently stopping promoting EA ideas or projects (6.3%), stopping attending EA events (5.5%), stopping working on any EA projects (4.3%) and stopping donating (2.5%).
Was there any attempt to deal with the issue that people who left EA were probably far less likely to see and take the survey?
I mean, I can’t think of an easy way to do so, but it might be worth noting.
As we noted in our earlier report, individuals who are particularly dissatisfied with EA may be less likely to complete the survey (whether they have completely dropped out of the community or not), although the opposite effect (more dissatisfied respondents are more motivated to complete the survey to express their dissatisfaction) is also plausible.
I don’t think there’s any feasible way to address this within this smaller, supplementary survey. Within the main EA Survey we do look for signs of differential attrition.
Looking at the distribution of satisfaction scores in late 2022 vs late 2023, we see the reduction in satisfaction over that period coming from fewer people giving high ratings of 9 or 10, and more people giving low ratings of 3-5. But for the lowest ratings, we see basically no change in the number of people giving a rating of 2, and fewer people (almost nobody) now giving a rating of 1.
This suggests we could crudely estimate the selection effects of people dropping out of the community and therefore not answering the survey by assuming that there was a similar increase in scores of 1 and 2 as there was for scores of 3-5. My guess is that this would still understate the selection bias (because I’d guess we’re also missing people who would have given ratings in the 3-5 range), but it would at least be a start. I think it would be fair to assume that people who would have given satisfaction ratings of 1 or 2 but didn’t bother to complete the survey are probably also undercounted in the various measures of behavioral change.
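To make the crude estimate concrete, here’s a minimal sketch of the adjustment under that assumption. The counts below are hypothetical placeholders, not the actual 2022/2023 survey figures:

```python
# Hypothetical counts of satisfaction ratings (1-10) in the two survey waves.
# These numbers are made up for illustration; substitute the real data.
counts_2022 = {1: 5, 2: 10, 3: 20, 4: 30, 5: 45, 6: 60, 7: 90, 8: 110, 9: 80, 10: 50}
counts_2023 = {1: 1, 2: 10, 3: 35, 4: 45, 5: 60, 6: 65, 7: 85, 8: 100, 9: 55, 10: 35}

n_2022 = sum(counts_2022.values())
n_2023 = sum(counts_2023.values())

# Relative growth in the share of responses at the 3-5 level between waves.
share_mid_2022 = sum(counts_2022[r] for r in (3, 4, 5)) / n_2022
share_mid_2023 = sum(counts_2023[r] for r in (3, 4, 5)) / n_2023
mid_growth = share_mid_2023 / share_mid_2022

# Crude adjustment: assume the shares of 1s and 2s "should" have grown by the
# same factor, and treat the shortfall as respondents lost to attrition.
adjusted = dict(counts_2023)
for r in (1, 2):
    expected_share = (counts_2022[r] / n_2022) * mid_growth
    adjusted[r] = round(expected_share * n_2023)

missing = sum(adjusted[r] - counts_2023[r] for r in (1, 2))
print(f"Implied number of highly dissatisfied respondents not captured: {missing}")
```

As noted above, this would likely still understate the bias, since it only backfills the 1-2 range and ignores missing would-be 3-5 responders.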
This is a neat idea, but I think it’s probably putting more weight on the absence of small differences at particular levels of the response scale than the smaller sample size of the Extra EA Survey will support. If we look at the CIs for any individual response level, they are relatively wide for the EEAS, and the numbers selecting the lowest response levels were very low anyway.
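For a rough sense of the scale of that uncertainty, here’s a sketch of a Wilson score interval for the share at a single response level; the counts are hypothetical, not the actual EEAS figures:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Hypothetical: 3 respondents out of 500 giving the lowest ratings.
lo, hi = wilson_ci(3, 500)
print(f"Share at the lowest levels: {3/500:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

With counts that small, the interval spans several multiples of the point estimate, so a "no change" at those levels is hard to distinguish from a modest real shift.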
That makes sense. That said, while it might not be possible to quantify the extent of selection bias at play, I do think the combination of a) favoring simpler explanations and b) the pattern I observed in the data makes a pretty compelling case that dissatisfied people being less likely to take the survey is probably much more of an issue than dissatisfied people being more likely to take the survey to voice their dissatisfaction.