Thanks for the post, Noah. I’m really glad to see this preliminary work being done.
You should reach out to @MikhailSamin, of the AI Governance and Safety Institute, if you haven’t already. I think he is doing something similar at the moment: https://manifund.org/projects/testing-and-spreading-messages-to-reduce-ai-x-risk
Hi Andy,
Thanks for the comment, I wasn’t aware of this work! I’ve actually pivoted a bit since completing this project: I don’t currently have plans for a follow-on study. Instead, I’m working with the AI Safety Awareness Foundation doing direct AI outreach and education via in-person workshops aimed at a non-technical mainstream audience. Our work could certainly benefit from data about effective messaging, so I’ll try to connect with Mikhail and see if there’s an opportunity to collaborate!