Seeking social science students / collaborators interested in AI existential risks
tldr: I’m looking for undergraduate research assistants / collaborators to work on research questions at the intersection of social science and long-term risks from AI. I’ve collected some research questions here. If you’re interested in working on these or related questions, and would like advice or mentorship, please contact Vael Gates at vlgates@stanford.edu!
Broader Vision
I’m a social scientist, and I want to contribute to reducing long-term risks from AI. I’m excited about growing the community of fellow researchers (at all levels) who are interested in the intersection of AI existential risk and social science.
To that end, I’m hoping to:
collect and grow a list of research questions that would be interesting for social scientists of various subfields and valuable to AI safety
work with undergraduate students / collaborators on these questions
Below are some research questions I think are interesting, largely drawn from conversations with others and extant question lists. The questions are biased towards psychology at the moment; I’m looking to expand them.
If any of these look interesting to you, or if you’re generally enthusiastic about the idea of this research intersection, please get in contact (vlgates@stanford.edu)! As a postdoctoral scholar at Stanford, I can offer undergraduate RAs the usual academic array of research experience, mentoring, recommendation letters, and potentially co-authorship on a conference publication.
Research Questions
(AI × social science) Research Questions
Please feel encouraged to add additional research questions to the bottom of the Google doc.
Current topics:
Incentives to work on specific research areas
Technical near-term vs long-term concerns about risks from AI
Increasing involvement
Psychology and perceptions of long-term AI safety
Information hazards
Datasets to analyze the social milieu
(Infrastructure and tool building)
(AI governance and policy)
(Surveys of relevant populations)
Additional questions from Richard Ngo (2019)
How have arguments about AI existential risk changed over time?
Communications and Stories
(Other questions from research question lists, especially highlighting “Problems in AI risk that economists could potentially contribute to” and “Humanities Research Ideas for Longtermists”)
Previous Work
There have already been several requests for social scientists to do work in longtermist AI safety (as opposed to current-day and near-term AI safety). I am excited about these opportunities, yet note that since the field is still new, many of them require specialized skill sets and interests and substantial research independence, and offer limited options for mentorship. What follows is my impression of the established opportunities:
We need technical people who know how to run experiments, to help implement technical AI safety work that calls for interfacing with humans. (Ought and OpenAI, per “AI safety needs social scientists”, have historically posted jobs of this nature, but are no longer doing this work to my knowledge.)
We need people who are skilled at surveys, to study how the AI landscape is progressing across various populations. (FHI and AI Impacts run surveys, e.g. Grace et al. (2018), Zhang and Dafoe (2019), Zhang et al. (2021), and others!)
We need people who will work on forecasting for long-term AI (this isn’t social-scientist specific, but some social scientists could do this work).
We need researchers whose technical work spans both AI and social science (see the research agenda from FHI and DeepMind, Open Problems in Cooperative AI)
We need social science researchers to advance AI governance.
See notes from Baobao Zhang’s talk about how social science can inform AI: “At the Centre for the Governance of AI (GovAI), we think that social science research — whether it’s in political science, international relations, law, economics, or psychology — can inform decision-making around AI governance.”
See also AI Governance: A Research Agenda.
We need expert social scientists who are highly technically oriented and aware of modern AI and AI trends (e.g. sociologists, historians, psychologists, economists) to discover a useful role for themselves in shaping the AI landscape positively.
e.g. Schubert, Caviola, and Faber (2019) investigate the psychology of existential risks (not AI-specific), and Clark and Hadfield (2019) propose a regulatory market model for AI safety
See lists of research questions submitted to the EA Forum, e.g. A central directory for open research questions (which recursively includes this post :))
I aim to provide some mentorship for social science-oriented people interested in getting involved and navigating the above tree. (Perhaps drawing from the growing Effective Altruism behavioral scientist community, and interested EA students, e.g. Stanford Existential Risk Initiative’s “Social Sciences and Existential Risks” reading group in winter 2021?) I hope that as a community we can continue to generate research questions that have strong ties to impact, and coordinate to involve more interested researchers in answering these questions!
Thanks to Abby Novick Hoskin, Michael Aird, Lucius Caviola, Ari Kagan, Bill Zito, Tobias Gerstenberg, and Michael Keenan for providing commentary on earlier versions of this post. The more public-facing version of this post is cross-posted on my website: Seeking social science students interested in long-term risks from AI.
I’m the author of the cited “AI safety needs social scientists” article (along with Amanda Askell), previously at OpenAI and now at DeepMind. I currently work with social scientists in several different areas (governance, ethics, psychology, …), and would be happy to answer questions (though expect delays in replies).
Thanks so much; I’d be excited to talk! Emailed.
Update: I’ve been running a two-month “program” with eight of the students who reached out to me! We’ve come up with research questions from my original list, and the expectation is that individuals work 9h/week as volunteer RAs. I’ve been meeting with each person / group for 30min per week to discuss progress. We’re halfway through this experiment, with a variety of projects and progress states—hopefully you’ll see at least one EA Forum post up from those students!
--
I was quite surprised by the interest that this post generated; ~30 people reached out to me, and a large number were willing to do volunteer research for no credit / pay. I ended up working with eight students, mostly based on their willingness to work with me on some of my short-listed projects. I was willing to let their projects drift significantly from my original list if the students were enthusiastic and the project felt decently aligned with long-term risks from AI, and that did occur. My goal here was to get some experience training students who had limited research experience, and I’ve been enjoying working with them.
I’m not sure how likely it is that I’ll continue working with students past this 2-month program, because it does take up a chunk of time (made worse by trying to wrangle schedules), but I’m considering what to do for the future. If anyone’s interested in also mentoring students with an interest in long-term risks from AI, please let me know, since I think there’s interest! It’s a decently low time commitment (30 min per week per student or group of students) once you’ve got everything sorted. However, I am doing it for the benefit of the students, rather than with the expectation of getting help on my work, so it’s more of a volunteer role.
I believe that counterfactuals are, to some extent, socially constructed, so it might be useful for someone from a social science background to investigate this (at least if you think there’s value in MIRI’s research agenda).
The comment about counterfactuals makes me think of computational cognitive scientist Tobias Gerstenberg’s research (https://cicl.stanford.edu), which focuses largely on counterfactual reasoning in the physical domain, though he also has work in the social domain.
I confess to only a surface-level understanding of MIRI’s research agenda, so I’m not quite able to connect my understanding of counterfactual reasoning in the social domain to a concrete research question within MIRI’s agenda. I’d be happy to hear more if you have more detail, though!
I’ve written a post on this topic here: https://www.lesswrong.com/posts/9rtWTHsPAf2mLKizi/counterfactuals-as-a-matter-of-social-convention.
By the way, I should be clear that my opinions on this topic aren’t necessarily the mainstream position.