Seeking social science students / collaborators interested in AI existential risks
tldr: I’m looking for undergraduate research assistants / collaborators to work on research questions at the intersection of social science and long-term risks from AI. I’ve collected some research questions here. If you’re interested in working on these or related questions, and would like advice or mentorship, please contact Vael Gates at vlgates@stanford.edu!
Broader Vision
I’m a social scientist, and I want to contribute to reducing long-term risks from AI. I’m excited about growing the community of fellow researchers (at all levels) who are interested in the intersection of AI existential risk and social science.
To that end, I’m hoping to:
collect and grow a list of research questions that would be interesting for social scientists of various subfields and valuable to AI safety
work with undergraduate students / collaborators on these questions
Below are some research questions I think are interesting, largely drawn from conversations with others and extant question lists. The questions are biased towards psychology at the moment; I’m looking to expand them.
If any of these look interesting to you, or if you’re generally enthusiastic about the idea of this research intersection, please get in contact (vlgates@stanford.edu)! As a postdoctoral scholar at Stanford, I can offer undergraduate RAs the usual academic array of research experience, mentoring, recommendation letters, and potentially co-authorship on a conference publication.
Research Questions
(AI × social science) Research Questions
Please feel encouraged to add additional research questions to the bottom of the Google doc.
Current topics:
Incentives to work on specific research areas
Technical near-term vs long-term concerns about risks from AI
Increasing involvement
Psychology and perceptions of long-term AI safety
Information hazards
Datasets to analyze the social milieu
(Infrastructure and tool building)
(AI governance and policy)
(Surveys of relevant populations)
Additional questions from Richard Ngo (2019)
How have arguments about AI existential risk changed over time?
Communications and Stories
(Other questions from research question lists, also especially highlighting “Problems in AI risk that economists could potentially contribute to” and “Humanities Research Ideas for Longtermists”)
Previous Work
There have already been several requests for social scientists to do work in longtermist AI safety (as opposed to current-day and near-term AI safety). I am excited about these opportunities, but note that since the field is still new, many of them require specialized skillsets and interests, demand substantial research independence, and offer limited mentorship. What follows is my impression of the established opportunities:
We need technical people who know how to run experiments, to help implement technical AI safety work that calls for interfacing with humans. (Ought and OpenAI's "AI Safety Needs Social Scientists" have historically posted jobs of this nature, but to my knowledge are no longer doing this work.)
We need people who are skilled at surveys, to study how the AI landscape is progressing across various populations. (FHI and AI Impacts run surveys, e.g. Grace et al. (2018), Zhang and Dafoe (2019), Zhang et al. (2021), others!)
We need people who will work on forecasting for long-term AI (this isn’t social-scientist specific, but some social scientists could do this work).
We need researchers whose technical work spans both AI and social science (see the research agenda from FHI and DeepMind's Open Problems in Cooperative AI).
We need social science researchers to advance AI governance.
See notes from Baobao Zhang’s talk about how social science can inform AI: “At the Centre for the Governance of AI (GovAI), we think that social science research — whether it’s in political science, international relations, law, economics, or psychology — can inform decision-making around AI governance.”
See also AI Governance: A Research Agenda.
We need expert social scientists (e.g. sociologists, historians, psychologists, economists) who are highly technically oriented and aware of modern AI and AI trends, to carve out useful roles for themselves in positively shaping the AI landscape.
e.g. Schubert, Caviola, and Faber (2019) investigate the psychology of existential risks (not AI-specific); Clark and Hadfield (2019) propose a regulatory market model for AI safety
See lists of research questions submitted to the EA Forum! A central directory for open research questions (which recursively includes this post :))
I aim to provide some mentorship for social-science-oriented people interested in getting involved and in navigating the opportunities above. (Perhaps drawing from the growing Effective Altruism behavioral scientist community and interested EA students, e.g. the Stanford Existential Risk Initiative's "Social Sciences and Existential Risks" reading group in winter 2021?) I hope that as a community we can continue to generate research questions with strong ties to impact, and coordinate to involve more interested researchers in answering them!
Thanks to Abby Novick Hoskin, Michael Aird, Lucius Caviola, Ari Kagan, Bill Zito, Tobias Gerstenberg, and Michael Keenan for providing commentary on earlier versions of this post. The more public-facing version of this post is cross-posted on my website: Seeking social science students interested in long-term risks from AI.