Thanks Alexander! I appreciate the offer to meet to talk about your experiences, that sounds very useful!
Who are the users for this survey, how will they be involved with the design, and how will findings be communicated with them?
We envisage the main users of the survey being EA orgs and decision-makers. We’ve already been in touch with some of the main groups and will reach out to some key ones to co-ordinate again now that we’ve formally announced. That said, we’re also keen to receive suggestions and requests from a broader set of stakeholders in the community (hence this announcement).
The exact composition of the survey, in terms of serving different users, will depend on how many priority requests we get from different groups, so we’ll be working that out over the course of the next month as different groups make requests.
Will data, materials, code and documentation from the survey be made available for replication, international adaptation, and secondary analysis?
Related to the above, we don’t know exactly how much we’ll be making public, because we don’t know how much of the survey will be part of the core public tracker vs bespoke requests from particular decision-makers (which may or may not be private/confidential). That said, I’m optimistic we’ll be able to make a large amount public (or shared with relevant researchers) regarding the core tracker (e.g. for things we are reporting publicly).
Was there a particular reason to choose a monthly cycle for the survey? Do you have an end date in mind or are you hoping to continue indefinitely?
We’re essentially trialing this for 12 months, to see how useful it is and how much demand there seems to be for it, after which, if all goes well, we would be looking to continue and/or expand.
The monthly cadence is influenced by multiple considerations. One is that, ideally, we would be able to detect changes over relatively short time-scales (e.g. in response to media coverage), and part of this trial will be to identify what is feasible and useful. Another consideration is that running more surveys within the time span will allow us to include more ad hoc, time-sensitive requests from orgs (i.e. things they want to know within a given month, rather than things we are tracking across time). I think it’s definitely quite plausible we might switch to a different cadence later, perhaps due to resource constraints (including availability of respondents).
I would agree that more general or fundamental attitudes are unlikely to change on a monthly cadence. I think it’s more plausible to see changes on a short time-frame for some of the more specific things we’re looking at (e.g. awareness of, or attitudes towards, particular (currently) low-salience issues or ideas).
Looking forward to talking more about this.