Thank you to the survey team for completing what is an easy-to-underestimate volume of work. Thank you also to the many who completed this survey, helping us to both understand different EA communities better and to improve this process of learning about ourselves as a wider group in future years.
I have designed and analysed several consumer surveys professionally as part of my job as a strategy consultant.
There is already a discussion of sample bias, so I will leave those issues aside in this post and focus on three simple suggestions to make the process easier and more reliable when this valuable exercise is repeated next year.
Firstly, we should use commercial software to run the survey rather than trying to build something ourselves. Off-the-shelf tools are both less effort and more reliable. For example, SurveyMonkey could have done everything this survey does for about £300. I’m happy to pay that myself next year to avoid some of the data quality issues.
Secondly, we should use live data validation to improve data collection, data integrity and ease of analysis. SurveyMonkey and similar tools can help John fill in his age in the right box: they can refuse to believe the 7-year-old and suggest he has another go at entering his age. It could also be valuable to do some respondent validation by asking people to answer a question with a given answer, removing random clickers and poor-quality respondents who are speeding through (e.g. “Please enter the number ‘2’ in letters into the textbox to prove you are not a robot. For example, the number ‘1’ in letters is ‘one’.”).
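To make that concrete, here is a minimal sketch of the same checks written as a post-hoc cleaning step over the raw export, for anyone validating data outside the survey tool. The column names and the 13–100 age bounds are my own assumptions, not anything from the actual survey:

```python
import pandas as pd

# Hypothetical column names for the raw export; the real survey's
# columns will differ.
AGE_COL = "age"
ATTENTION_COL = "attention_check"  # the "enter '2' in letters" question

def validate_responses(df: pd.DataFrame) -> pd.DataFrame:
    """Flag responses that fail basic sanity checks."""
    out = df.copy()
    # Age must parse as a number and fall in a plausible range.
    age = pd.to_numeric(out[AGE_COL], errors="coerce")
    out["age_ok"] = age.between(13, 100)
    # Attention check: the only accepted answer is the word "two".
    answer = out[ATTENTION_COL].astype(str).str.strip().str.lower()
    out["attention_ok"] = answer == "two"
    out["valid"] = out["age_ok"] & out["attention_ok"]
    return out

# Usage:
# checked = validate_responses(raw_responses)
# clean = checked[checked["valid"]]
```

A live tool applies the same rules at entry time and prompts the respondent to correct the field, which is better still: the fix comes from the respondent rather than the analyst.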
Thirdly, we should do more testing by trying out draft versions with respondents who have not written the survey. It is very, very hard to predict how people will read a particular question, or which options should be included in multiple-choice questions. Within my firm, it is typical for an entire project team to run through a survey several times before sending it out to the public. Part of the value here is that most team members were not closely involved in writing the survey, and so won’t necessarily read it in the way the author expected. I would suggest trying any version of the survey out with a large group (at least twenty) of the different kinds of people who might answer it, to catch the readings that different groups might give the questions. Does the EA affiliation filter work as hoped? Are there important charities which we should include in the prompt list? It does not seem unreasonable to pilot and redraft a few times with a diverse group of willing volunteers before releasing generally.
The analysis throws up several interesting conclusions, and I have learned a lot by reading through it. The main surprises are: the relatively low level of donations in $ terms by many self-identified EAs; the relatively low proportion of EAs identifying chapters/local groups as a reason for joining or identifying with the community; and (for me) the encouragingly high proportion of respondents who are vegetarian or vegan.
I’m going to set aside some time in May to go through the data in a ‘consulting’ sort of way, to see if that approach throws up anything interesting or different from others’ analyses, and will circulate the results with the survey team before publishing here.
Thanks Chris, all very useful info.
On the 0 donors question: I’ve written about this elsewhere in the comments. A sizeable majority of these respondents were full-time students, on low incomes, had made significant past donations, or had pledged at least 10% (and often much more) of future income. Once all these people are taken into account, the number of 0 donors was pretty low. There was a similar (if not even stronger) trend for people donating <$500.
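As a sketch of that accounting (with hypothetical column names; the survey’s actual fields and the team’s real analysis may differ), the exclusion logic looks something like:

```python
import pandas as pd

def unexplained_zero_donors(df: pd.DataFrame) -> pd.DataFrame:
    """Zero donors not explained by student status, low income,
    significant past donations, or a pledge of future income.

    Assumes hypothetical boolean columns (student, low_income,
    past_donor, pledger) and a numeric donated_usd column.
    """
    zero = df["donated_usd"] == 0
    explained = (
        df["student"] | df["low_income"] | df["past_donor"] | df["pledger"]
    )
    return df[zero & ~explained]

# Usage: the claim above is that len(unexplained_zero_donors(survey_df))
# comes out pretty low once those groups are accounted for.
```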
Thanks Chris, this is useful feedback and we’ll go through it. For example, I think trying out draft versions would be valuable. I may ask you some more questions, e.g. about SurveyMonkey’s features.
Happy to answer these any time, and happy to help out next year (ideally in low time commitment ways, given other constraints).