Your observations ring true—I’ve talked about AI safety issues over the last 6 years with about 30-50 academic faculty, and taught a course for 60+ undergraduate students (mostly psych majors) that includes two weeks of discussion on AI safety. I think almost everything you said sounds similar to my experiences.
Additional observations:
A) The moral, social, and political framing of AI safety issues matters for getting people interested, and it often needs adjusting to the other person’s or group’s ideology. Academics who lean left politically often seem more responsive to arguments about algorithmic bias, technological unemployment, concentration of power, disparate impact, etc. Academics who lean libertarian tend to be more responsive to arguments about algorithmic censorship, authoritarian lock-in, and misuse of AI by the military-industrial complex. Conservative academics are often surprisingly interested in multi-generational, longtermist perspectives, and seem quite responsive to X-risk arguments (insofar as conservatives tend to view civilization as rather fragile, transient, and in need of protection from technological disruptions). So, it helps to have a smorgasbord of different AI safety concerns that different kinds of people with different values can relate to. There’s no one-size-fits-all way to get people interested in AI safety.
B) Faculty and students outside computer science often don’t know what they’re supposed to do about AI safety, or how they can contribute. I interact mostly with behavioral and biological scientists in psychology, anthropology, economics, evolutionary theory, behavior genetics, etc. The brighter ones are often very interested in AI issues, and get excited when they hear that ‘AI should be aligned with human values’—because many of them study human values. Yet, when they ask ‘OK, will AI safety insiders respect my expertise about the biological/psychological/economic basis of human values, and want to collaborate with me on alignment?’, I have to answer ‘Probably not, given the current culture of AI safety research, and the premium it places on technical machine learning knowledge as the price of admission’.
C) Most people—including most academics—come to the AI safety issue through the lens of the science fiction movies and TV series they’ve watched. Rather than dismissing these media sources as silly, misleading, and irrelevant to the ‘serious work’ of AI alignment, I’ve found it helpful to be very familiar with these media sources, to find the ones that really resonate with the person I’m talking with (whether it’s Terminator 2, or Black Mirror, or Ex Machina, or Age of Ultron), and to kind of steer the conversation from that shared enthusiasm about sci fi pop culture towards current AI alignment issues.
Thank you Geoffrey for an insightful contribution!
Regarding B—the project PIBBSS ran over its last fellowship (disclosure: I now work there as Ops Director) had exactly this goal in mind, and we are keen to connect with non-AI researchers interested in doing AI safety research that draws on their diverse professions. Do point them our way and tell them that the interdisciplinary field is in development. The fellowship is not open yet, and we are considering how to go forward, but there will likely be a speaker series that would be relevant to these people.
Marius—very helpful post; thank you.
Dušan—thanks for the pointer to PIBBSS, which I hadn’t heard of before. I’ve signed up for the newsletter!