Thanks for this, Sean! I think work like this is exceptionally useful as introductory information for busy people who are likely to pattern-match “advanced AI” to “terminator” or “beyond time horizon”.
One piece of feedback I’ll offer: consider whether it’s possible to link narrow AI ethics concerns to AGI alignment, so that your last point, “there is work that can be done”, shows how current efforts to address narrow AI issues can carry over to AGI. This is especially relevant for governance. It could help people understand why it’s important to address AGI issues now, rather than waiting until narrow AI ethics is “fixed” (a misperception I’ve seen a few times).
I’m really excited to see this survey idea getting developed. Congratulations to the Rethink team on securing funding for this!
A few questions on design, content and purpose:
Who are the users for this survey, how will they be involved with the design, and how will findings be communicated with them?
In previous living/repeated survey work that I’ve done (SCRUB COVID-19), having research users involved in the design was crucial for it to influence their decision-making. This also got complex when the survey became successful and there were different groups of research users, all of whom had different needs.
Because “what gets measured gets managed”, there is both a risk and an opportunity in who decides which questions are included to measure “awareness and attitudes towards EA and longtermism”.
Will data, materials, code and documentation from the survey be made available for replication, international adaptation, and secondary analysis?
This could include anonymised data, Qualtrics survey instruments, R code, Google Docs of data documentation, etc.
Secondary analysis could significantly boost the current and long-term value of the project by opening it up to other interested researchers to explore hypotheses relevant to EA.
Providing materials along with good code and documentation can also help international replication and adaptation; a minimal sketch of what open materials could enable is below.
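To make this concrete, here is a purely illustrative R sketch of the kind of secondary analysis that released anonymised data could support. The file name and column names (wave, donated_effectively) are hypothetical placeholders, not the project’s actual schema:

```r
# Purely illustrative sketch of secondary analysis on released materials.
# "ea_survey_anonymised.csv" and its columns are hypothetical placeholders.
library(dplyr)

responses <- read.csv("ea_survey_anonymised.csv")

# Track the proportion endorsing a behavioural item across survey waves
trend <- responses %>%
  group_by(wave) %>%
  summarise(prop_donated = mean(donated_effectively, na.rm = TRUE))

print(trend)
```

Even a simple released dataset with documented column definitions would let outside researchers reproduce headline trends like this and extend them with their own hypotheses.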
Was there a particular reason to choose a monthly cycle for the survey? Do you have an end date in mind or are you hoping to continue indefinitely?
Do you anticipate that attitudes and beliefs would change that rapidly? In other successful ‘pulse’-style national surveys, it’s more common to see yearly or even less frequent measurement (here’s one great example of a longitudinal values survey from New Zealand).
Is there capacity to effectively design, conduct, analyse, and communicate at this pace? In previous work I’ve found that this cycle—especially in communicating with / managing research users, survey panel companies, etc—can become exhausting, especially if the idea is to run the survey indefinitely.
In terms of specific questions to add, my main thought is to include behavioural items, not just attitudes and beliefs.
Ways of measuring this could include “investigated the effectiveness of a charity before donating on the last occasion you had a chance”, “donated to an effective charity in the past 12 months”, or “number of days in the past week that you ate only plant-based products (no meat, seafood, dairy or eggs)”.
Through the SCRUB COVID-19 project, we (several of us at Ready) ran a survey of 1700 Australians every 3 weeks for about 15 months (2020-2021) in close consultation with state policymakers and their research users. Please reach out if you’d like to discuss / share experiences.