AI safety university groups
Artificial Intelligence
Leading computer science universities appear to be a promising place to increase interest in working to address existential risk from AI, especially among the undergraduate and graduate student body. In Q1 2022, EA student groups at Oxford, MIT, Georgia Tech, and other universities have had strong success with AI safety community-building through activities such as facilitating the semester-long AGI Safety Fundamentals program locally, hosting high-profile AI safety researchers for virtual guest speaker events, and running a research paper reading group. We’d also like to see student groups that engage students with opportunities to develop relevant skills and connect them with mentors to work on AI safety projects, with the goal of empowering students to work full-time on AI safety. We’d be happy to fund students to run AI safety community-building activities alongside their studies or during a gap semester, or to sponsor other people to support an EA group at a leading university in building up the AI safety community.
Some additional comments on why I think AI safety clubs are promising:
For those unfamiliar, the AGI Safety Fundamentals alignment track is a reading group to learn about AI safety over the course of 8+ weeks, with discussions led by a facilitator familiar with the readings. The curriculum is written by Richard Ngo, a researcher at OpenAI.
EA at Georgia Tech (my group) has 36 participants in our AI Safety Fundamentals program: 22 are on-campus students, 11 are online master’s students, 1 is a TA, and 2 are alumni. I haven’t done a formal count, but I think most of our applicants are fairly new, if not completely new, to both AI safety and EA. As part of our application, we had applicants read Vox’s The case for taking AI seriously as a threat to humanity and the introduction to The Precipice. Even though we accepted all but one applicant, most applicants were quite interested in existential risk from AI. I think the main way we got applicants was through simple emails to the College of Computing newsletter, which were sent out to all the CS students. Though we had the benefit of an EA student group already established the prior semester, only four applicants had prior engagement with our group, so I don’t think it was a major factor in our applicant pool.
OxAI Safety Hub has put together an impressive lineup of guest speakers. Their first event, with Rohin Shah from DeepMind, attracted 70 attendees (though it’s worth noting that OxAI Safety Hub has the benefit of being based at the university with the largest existing EA student group). They plan on running AGI Safety Fundamentals locally and starting a local summer research program connecting students with local mentors to work on AI safety projects.
MIT’s unofficial new AI Safety Club has apparently been quite successful with an interpretability reading group and talk series. I’d like to thank Kaivu from MIT for inspiring me to think about AI safety clubs in the first place.
For groups that don’t have time to facilitate several cohorts of AGI Safety Fundamentals, we might be able to capture most of the same value by broadly advertising the virtual AGI Safety Fundamentals program run by EA Cambridge. That said, I’m not sure the EA Cambridge application and acceptance process used in early 2022 would be suitable for people who are completely new to EA or AI safety. EA NYU was able to get 40+ applications to their AI Alignment Fellowship program (based on the AGI Safety Fundamentals technical track) weeks ahead of the application deadline, and recruited virtual facilitators in order to have enough capacity to run the program.
The part about having people from outside the university help run the group is basically the campus specialist position proposed by the Centre for Effective Altruism, but applied to AI safety instead of EA.
I wanted to make this proposal fairly concrete and grounded in existing examples to demonstrate feasibility. But if this sounds insufficiently ambitious, some ways a local AI safety community group could deploy more funding include: building a large team of organizers, sponsoring value-aligned members to attend bootcamps to skill up, and offering stipends for research fellowships. For reference, CEA claims that a campus specialist “could be leading a large team and managing a multi-million dollar budget within three years of starting”.
Thanks for sharing this idea; it’s super exciting to me that there is so much traction for getting junior CS people excited about AI Safety. I’d love to see much more of this happen and will likely (70%?) try to spend more than a day thinking about this in the next month. If you have more ideas or pointers to look into, I’d highly appreciate it.