I’m going to reproduce a comment I wrote at the time the 2014 results were released, so that these suggestions are on the agenda for the call later on. I remain convinced that each of these three practical suggestions is relatively low effort and will make the survey process easier, the data more reliable and any resulting conclusions more credible:
Firstly, we should use commercial software to operate the survey rather than trying to build something ourselves. Commercial tools are both less effort and more reliable. For example, SurveyMonkey could have done everything this survey does for about £300. I’m happy to pay that myself next year to avoid some of the data quality issues.
Secondly, we should use live data validation to improve data collection, data integrity and ease of analysis. SurveyMonkey or other tools can help John to fill in his age in the right box. They can refuse to believe a 7-year-old, and suggest that they have another go at entering their age. It could also be valuable to do some respondent validation by asking people to answer a question with a given answer, removing any random clickers or poor-quality respondents who are speeding through (e.g. “Please enter the number ‘2’ in letters into the textbox to prove you are not a robot. For example, the number ‘1’ in letters is ‘one’”). A rough sketch of this kind of post-hoc filtering follows the third suggestion below.
Thirdly, we should do more testing by trying out draft versions with respondents who have not written the survey. It is very, very hard to predict how people are going to read a particular question, or which options should be included in multiple-choice questions. Within my firm, it is typical for an entire project team to run through a survey several times before sending it out to the public. Part of the value here is that most team members were not closely involved in writing the survey, and so won’t necessarily read it in the way the author expected. I would suggest trying any version of the survey out with a large group (at least twenty) of different people who might answer it, to catch the different interpretations of questions that different groups might have. Does the EA affiliation filter work as hoped? Are there important charities which we should include in the prompt list? It does not seem unreasonable to pilot and redraft a few times with a diverse group of willing volunteers before releasing the survey generally.
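To make the validation point in my second suggestion concrete, here is a minimal sketch of the kind of post-hoc filtering I have in mind, written against a hypothetical CSV export. The file name, the column names ("age", "attention_check") and the age bounds are all illustrative assumptions rather than anything from the actual survey; live validation of the age field itself would be configured inside SurveyMonkey or whichever tool is used, not in code.

```python
import csv

def is_valid_response(row, min_age=14, max_age=100):
    """Keep a response only if the reported age is plausible and the attention check passes."""
    try:
        age = int(row["age"])
    except (KeyError, ValueError, TypeError):
        return False
    if not (min_age <= age <= max_age):
        return False
    # The attention-check question asked for the number '2' written out in letters.
    return (row.get("attention_check") or "").strip().lower() == "two"

# Hypothetical export file and column names, purely for illustration.
with open("survey_responses.csv", newline="", encoding="utf-8") as f:
    responses = list(csv.DictReader(f))

kept = [row for row in responses if is_valid_response(row)]
print(f"Kept {len(kept)} of {len(responses)} responses after validation.")
```

Even a short script like this makes the cleaning step reproducible, rather than a series of manual spreadsheet edits.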
Firstly, we should use commercial software to operate the survey rather than trying to build something ourselves. Commercial tools are both less effort and more reliable. For example, SurveyMonkey could have done everything this survey does for about £300. I’m happy to pay that myself next year to avoid some of the data quality issues.
It does seem clearly worth this expense. I’m concerned that .impact/the community team behind the survey are too reluctant to spend money and undervalue their time relative to it. I suppose that’s the cost of not being a funded organization.
asking people to answer a question with a given answer, removing any random clickers or poor-quality respondents who are speeding through (e.g. “Please enter the number ‘2’ in letters into the textbox to prove you are not a robot. For example, the number ‘1’ in letters is ‘one’”).
Seconded—I’d urge the team to do this, even if it means ignoring some genuine answers (I would expect Effective Altruists to generally put enough effort into the survey to spot and complete this question, though I might be naïve).
Thirdly, we should do more testing by trying out draft versions with respondents who have not written the survey.
An excellent suggestion too. I’d be willing to do this; I imagine anyone else who’d volunteer can comment below, and hopefully someone from the team will spot this and get in touch.
Great suggestion, Stens!
I’m happy to trial draft versions of the survey.