Looking at the distribution of satisfaction scores in late 2022 vs. late 2023, the reduction in satisfaction over that period comes from fewer people giving high ratings of 9 or 10 and more people giving low ratings of 3-5. At the very bottom of the scale, however, there is basically no change in the number of people giving a rating of 2, and fewer people (almost nobody) now give a rating of 1.
This suggests we could crudely estimate the selection effect of people dropping out of the community (and therefore not answering the survey) by assuming that scores of 1 and 2 increased by the same proportion as scores of 3-5. My guess is that this would still understate the selection bias (because I'd guess we're also missing people who would have given ratings in the 3-5 range), but it would at least be a start. I think it would also be fair to assume that people who would have given satisfaction ratings of 1 or 2 but didn't bother to complete the survey are undercounted in the various measures of behavioral change.
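To make the proposed adjustment concrete, here is a minimal sketch in Python using made-up illustrative counts (none of these numbers come from the actual surveys): it scales up the later survey's counts of 1s and 2s by the same growth factor observed for ratings of 3-5, treats the shortfall as dropouts, and recomputes the mean.

```python
# Hypothetical rating counts for two survey waves (illustrative only).
counts_2022 = {1: 10, 2: 15, 3: 30, 4: 45, 5: 60, 6: 90, 7: 150, 8: 200, 9: 180, 10: 120}
counts_2023 = {1: 2, 2: 15, 3: 45, 4: 65, 5: 85, 6: 95, 7: 150, 8: 190, 9: 140, 10: 90}

def mean_rating(counts):
    """Mean satisfaction score implied by a dict of {rating: count}."""
    total = sum(counts.values())
    return sum(score * n for score, n in counts.items()) / total

# Factor by which ratings of 3-5 grew between the two waves.
low_mid_growth = (sum(counts_2023[s] for s in (3, 4, 5))
                  / sum(counts_2022[s] for s in (3, 4, 5)))

# Impute 1s and 2s as if they had grown by the same factor; the gap between
# the imputed and observed counts is the crude estimate of dropouts.
adjusted = dict(counts_2023)
imputed_dropouts = 0.0
for s in (1, 2):
    expected = counts_2022[s] * low_mid_growth
    imputed_dropouts += max(0.0, expected - counts_2023[s])
    adjusted[s] = max(counts_2023[s], expected)

print(f"Observed 2023 mean:  {mean_rating(counts_2023):.2f}")
print(f"Adjusted 2023 mean:  {mean_rating(adjusted):.2f}")
print(f"Imputed missing low-raters: {imputed_dropouts:.0f}")
```

With these made-up numbers the adjusted mean comes out a bit lower than the observed one, which is the direction of the claimed bias, though the size of the gap depends entirely on the assumed growth factor.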
This is a neat idea, but I think that's probably putting more weight on the (absence of) small differences at particular levels of the response scale than the smaller sample size of the Extra EA Survey (EEAS) will support. If we look at the confidence intervals for any individual response level, they are relatively wide for the EEAS, and the numbers selecting the lowest response levels were very low anyway.
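To illustrate the sample-size concern, here is a quick sketch with hypothetical numbers (not the actual EEAS counts): when only a handful of respondents in a few-hundred-person survey pick a given response level, a standard Wilson score interval for that level's proportion spans nearly an order of magnitude.

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# e.g. 3 of 300 respondents giving a rating of 1 in a smaller follow-up survey
lo, hi = wilson_ci(3, 300)
print(f"point estimate: {3/300:.1%}, 95% CI: [{lo:.1%}, {hi:.1%}]")
# The interval runs from roughly 0.3% to 2.9%, so an apparent "no change"
# at this response level is consistent with sizeable real differences.
```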
That makes sense. That said, while it might not be possible to quantify the extent of selection bias at play, I do think the combination of (a) favoring simpler explanations and (b) the pattern I observed in the data makes a pretty compelling case that dissatisfied people being less likely to take the survey is much more of an issue than dissatisfied people being more likely to take it in order to voice their dissatisfaction.