Hey, thanks for the post! I agree with a lot of the takeaways here. I think these projects are also good for helping less confident students realise they can contribute to safety research.
I just wanted to flag that we’ve been running a program in Oxford that’s much like the one you described. It’s affiliated with Oxford’s AI society, which has a research program similar to ML@UCB’s. So far the projects have been great for students “later in the pipeline” and, to be honest, a lot of fun to run! If anybody is interested in setting up a program like the one Josh describes, feel free to reach out to me by email or see our post.
Ooh, exciting. Yeah, the research program looks closer to what I’m pitching.
Though I’d also be excited about putting research projects right at the start of the pipeline (if they aren’t already). It looks like AGISF is still at the top of your funnel, and I’m not sure discussion groups like that will be as good for attracting talent.
I really appreciate this kind of post :) Agreed that no one has AIS field-building figured out and that more experimentation with different models would be great!
One of my main uncertainties about putting these kinds of research projects early in the pipeline (and one of the main reasons the Oxford group has been putting them after a round of AGISF) is that doing so makes it much harder to filter for people who are actually motivated by safety. Because demand for research projects among ML students is so high, we worried that if we didn’t filter by having them do AGISF first, we might attract lots of people who are mainly interested in capabilities research, and then end up putting our effort and resources towards furthering capabilities rather than safety (by giving ‘capabilities students’ skills and experience in ML research). Do you have any thoughts on this? In particular, is there a reason you don’t worry about it?
If there’s a way to run what you describe without this being a significant risk, I’d be very excited to see some groups try the approach! And as Charlie mentions, the AI Safety Hub would be very happy to support them in running the research project (even at this early stage of the funnel). :)
That’s a good point. Here’s another possibility: require that students go through a ‘research training program’ before they can participate in the research program. It would have to actually prepare them for technical research, though. Relabeling AGISF as a research training program would be misleading, so you would want to add a lot more technical content (reading papers, coding assignments, etc.). It would probably be pretty easy to gauge how much the training program participants care about x-risk / safety and factor that in when deciding whether to accept them into the research program.
The social atmosphere can also probably go a long way in shaping people’s attitudes towards safety. Making AI risk an explicit focus of the club, talking about it a lot at socials, inviting AI safety researchers to dinners, etc. might do most of the work tbh.