I’m really excited to see this survey idea getting developed. Congratulations to the Rethink team on securing funding for this!
A few questions on design, content and purpose:
Who are the users for this survey, how will they be involved with the design, and how will findings be communicated with them?
In previous living / repeated survey work that I’ve done (SCRUB COVID-19), having research users involved in the design was crucial for it to influence their decision-making. This also became complex once the survey was successful and there were different groups of research users, each with different needs.
Because “what gets measured, gets managed”, there is both a risk and an opportunity in who decides which questions are included to measure “awareness and attitudes towards EA and longtermism”.
Will data, materials, code and documentation from the survey be made available for replication, international adaptation, and secondary analysis?
This could include anonymised data, Qualtrics survey instruments, R code, Google Docs of data documentation, etc.
Secondary analysis could significantly boost the current and long-term value of the project by opening it up to other interested researchers to explore hypotheses relevant to EA.
Providing materials and good code and documentation can also help international replication and adaptation.
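To make the secondary-analysis point concrete, here’s a minimal sketch (in R, since that’s what’s mentioned above) of what a shared release could enable. The file name and variable names are invented for illustration, not the project’s actual materials:

```r
# Hypothetical sketch only: the file and variable names are invented.
library(survey)

# Anonymised wave data plus documented weights, as shared by the project
pulse <- read.csv("pulse_wave01_anonymised.csv")

# Survey design object matching the (hypothetical) weighting documentation
des <- svydesign(ids = ~1, weights = ~weight, data = pulse)

# Reproduce a published topline, e.g. the proportion who have heard of EA
svymean(~heard_of_ea, des, na.rm = TRUE)

# Secondary analysis: does awareness differ by age group?
svyby(~heard_of_ea, ~age_group, des, svymean, na.rm = TRUE)
```

Even this much (data, weights, and documented variable names) would let outside researchers check published figures and test their own hypotheses.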
Was there a particular reason to choose a monthly cycle for the survey? Do you have an end date in mind or are you hoping to continue indefinitely?
Do you anticipate that attitudes and beliefs would change that rapidly? In other successful ‘pulse’-style national surveys, it’s more common to see yearly or even less frequent measurement (here’s one great example of a longitudinal values survey from New Zealand).
Is there capacity to effectively design, conduct, analyse, and communicate at this pace? In previous work I’ve found that this cycle (especially communicating with and managing research users, survey panel companies, etc.) can become exhausting, particularly if the idea is to run the survey indefinitely.
In terms of specific questions to add, my main thought is to include behavioural items, not just attitudes and beliefs.
Ways of measuring this could include “investigated the effectiveness of a charity before donating on the last occasion you had a chance”, “donated to an effective charity in the past 12 months”, or “number of days in the past week that you ate only plant-based products (no meat, seafood, dairy or eggs)”.
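Purely for illustration, such items might be specified along the following lines; the wordings and scales here are my rough suggestions and would need piloting:

```r
# Hypothetical behavioural item specifications; wordings and response
# scales are illustrative only and would need piloting and validation.
behavioural_items <- data.frame(
  variable = c("investigated_last_donation",
               "donated_effective_12m",
               "plant_based_days"),
  wording  = c("Did you investigate the effectiveness of a charity before donating, on the last occasion you had a chance?",
               "Have you donated to a charity you consider highly effective in the past 12 months?",
               "On how many days in the past week did you eat only plant-based products (no meat, seafood, dairy or eggs)?"),
  scale    = c("yes / no", "yes / no", "0-7 days")
)
behavioural_items
```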
Through the SCRUB COVID-19 project, we (several of us at Ready) ran a survey of 1700 Australians every 3 weeks for about 15 months (2020-2021) in close consultation with state policymakers and their research users. Please reach out if you’d like to discuss / share experiences.
Thanks Alexander! I appreciate the offer to meet and talk about your experiences; that sounds very useful!
Who are the users for this survey, how will they be involved with the design, and how will findings be communicated with them?
We envisage the main users of the survey being EA orgs and decision-makers. We’ve already been in touch with some of the main groups and will reach out to some key ones to co-ordinate again now that we’ve formally announced. That said, we’re also keen to receive suggestions and requests from a broader set of stakeholders in the community (hence this announcement).
The exact composition of the survey, in terms of serving different users, will depend on how many priority requests we get from different groups, so we’ll be working that out over the course of the next month as different groups make requests.
Will data, materials, code and documentation from the survey be made available for replication, international adaptation, and secondary analysis?
Related to the above, we don’t know exactly how much we’ll be making public, because we don’t know how much of the survey will be part of the core public tracker vs. bespoke requests from particular decision-makers (which may or may not be private/confidential). That said, I’m optimistic we’ll be able to make a large amount public (or shared with relevant researchers) regarding the core tracker (e.g. for things we are reporting publicly).
Was there a particular reason to choose a monthly cycle for the survey? Do you have an end date in mind or are you hoping to continue indefinitely?
We’re essentially trialling this for 12 months, to see how useful it is and how much demand there seems to be for it, after which, if all goes well, we would look to continue and/or expand.
The monthly cadence is influenced by multiple considerations. One is that, ideally, we would be able to detect changes over relatively short time-scales (e.g. in response to media coverage), and part of this trial will be to identify what is feasible and useful. Another consideration is that running more surveys within the time span will allow us to include more ad hoc, time-sensitive requests from orgs (i.e. things they want to know within a given month, rather than things we are tracking across time). It’s quite plausible we might switch to a different cadence later, perhaps due to resource constraints (including availability of respondents).
I would agree that more general or fundamental attitudes are unlikely to change on a monthly cadence. I think it’s more plausible to see changes on a short time-frame for some of the more specific things we’re looking at (e.g. awareness of, or attitudes towards, particular currently low-salience issues or ideas).
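As a very rough illustration of the feasibility question, here’s a back-of-the-envelope power calculation in R; the per-wave n of 1,000 and the 10% baseline are assumptions for the example, not our actual design:

```r
# Back-of-the-envelope: with n = 1,000 respondents per monthly wave
# (an assumed figure, not our actual sample size), what month-to-month
# shift in a 10% baseline proportion is detectable at 80% power?
power.prop.test(n = 1000, p1 = 0.10, power = 0.80, sig.level = 0.05)
# Solves for p2: roughly a shift from 10% to ~14% between two waves.
```

In other words, at a plausible wave size only fairly large month-to-month movements in a single item would be statistically detectable, which is part of what the trial period should tell us.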
Looking forward to talking more about this.