Seeking social science students / collaborators interested in AI existential risks

tldr: I’m looking for undergraduate research assistants / collaborators to work on research questions at the intersection of social science and long-term risks from AI. I’ve collected some research questions here. If you’re interested in working on these or related questions, and would like advice or mentorship, please contact Vael Gates at vlgates@stanford.edu!

Broader Vision

I’m a social scientist, and I want to contribute to reducing long-term risks from AI. I’m excited about growing the community of fellow researchers (at all levels) who are interested in the intersection of AI existential risk and social science.

To that end, I’m hoping to:

  1. collect and grow a list of research questions that would be interesting to social scientists in various subfields and valuable to AI safety

  2. work with undergraduate students / collaborators on these questions

Below are some research questions I think are interesting, largely drawn from conversations with others and extant question lists. The questions are biased towards psychology at the moment; I’m looking to expand them.

If any of these look interesting to you, or if you’re generally enthusiastic about the idea of this research intersection, please get in contact (vlgates@stanford.edu)! As a postdoctoral scholar at Stanford, I can offer undergraduate RAs the usual academic array of research experience, mentoring, recommendation letters, and potentially co-authorship on a conference publication.

Research Questions

The running list lives in the Google doc “(AI × social science) Research Questions”, which also lists the current topics. Please feel encouraged to add additional research questions to the bottom of the doc.

Previous Work

There have already been several calls for social scientists to work on longtermist AI safety (as opposed to current-day and near-term AI safety). I am excited about these opportunities, but note that, since the field is still new, many of them require specialized skillsets and interests and substantial research independence, and offer limited mentorship. What follows is my impression of the established opportunities:

I aim to provide some mentorship for social science-oriented people who are interested in getting involved and in navigating the opportunities above. (Perhaps drawing from the growing Effective Altruism behavioral science community, and from interested EA students, e.g. the Stanford Existential Risk Initiative’s “Social Sciences and Existential Risks” reading group in winter 2021?) I hope that as a community we can continue to generate research questions that have strong ties to impact, and coordinate to involve more interested researchers in answering them!

Thanks to Abby Novick Hoskin, Michael Aird, Lucius Caviola, Ari Kagan, Bill Zito, Tobias Gerstenberg, and Michael Keenan for providing commentary on earlier versions of this post. The more public-facing version of this post is cross-posted on my website: Seeking social science students interested in long-term risks from AI.