Me too! We’re in the process of creating the survey now and will be distributing it in January. This is one thing we’re going to address, and if you have suggestions about specific questions, we’d be interested in hearing them.
Unless you have a specific hypothesis that you are testing, I think the survey is the wrong methodology to answer this question. If you actually want to explore the reasons why (and expect there will not be a single answer) then you need qualitative research.
If you do pursue questions on this topic in a survey format, it is likely you will get misleading answers unless you have the resources to very rigorously test and refine your question methodology. Since you will essentially be asking people whether they are not doing something they have said is good to do, there will be all sorts of biases at play, and it will be very difficult to write questions that function the way you expect them to. To the best of my knowledge, question testing didn’t happen at all with the first survey, and I don’t know whether any happened with the second.
I appreciate that the survey consumes a vast amount of people’s resources, and that it is done for good reasons. I hate sounding like a doom-monger, but there are pitfalls here and significant limitations on surveys as a research method. I think the EA community risks falling into a trap on this topic: thinking dubious data is better than none, when actually false data can literally cost lives. As I suggested previously, I would strongly recommend getting professional involvement.
Ah, sorry Bernadette, I misunderstood your first question!
I think ‘pin down an explanation’ was probably too strong on my part, because I definitely don’t think it’d be conclusive and I do hope that we have some more qualitative research into this.
We do have professionals working on the survey this year (is that what you meant by professional involvement?) and I’ve sent your comment to them. They’re far better placed to analyze this than me!
Thanks Georgie—I see where we were misunderstanding each other!
That’s great—research like this is quite hard to get right, and I think it’s an excellent plan to have people with experience and knowledge about the design and execution as well as analysis involved. (My background is medical research as well as clinical medicine, and a depressing amount of research—including randomised clinical trials—is never able to answer the important question because of fundamental design choices. Unfortunately knowing this fact isn’t enough to avoid the pitfalls. It’s great that EA is interested in data, but it’s vital we generate and analyse good data well.)
Please include a question about race. At the Effective Animal Advocacy Symposium this past weekend at Princeton, the 2015 EA Survey was specifically called out for neglecting to ask a question about the race of the respondents.
Thanks Eric, we spoke to Garrett about this too :)