Hi :) I’m surprised by this post. Doing full-time community building myself, I have a really hard time imagining that any group (or sensible individual) would use these ‘cult indoctrination techniques’ as strategies to get other people interested in EA.
I was wondering if you could share anything more about specific examples or communities where you have found this happening? I’d find that helpful for knowing how to relate to this content as a community builder myself! :-)
(To be clear, I could imagine repeating talking points and closed social circles arising as side effects of other things—more specifically, of individuals often not being very good at judging what makes a good argument and therefore repeating whatever seems salient to them, and of people naturally forming social circles with people they get along with. My point is that I find it hard to believe that any of this would be deliberate enough for this kind of criticism to really apply! Which is why I’d find examples helpful—to know what we’re specifically talking about :) )
Establishing Oxford’s AI Safety Student Group: Lessons Learnt and Our Model
The transcript is here: https://docs.google.com/document/d/1l8-PEV0hVswDngYMtiJvoTZLACWeRzTreNHMFNxr0Ko/edit?usp=sharing
I’ll add it to the post too :)
I really appreciate this kind of post :) Agree that no one has AIS field-building figured out and that more experimentation with different models would be great!
One of my main uncertainties about putting these kinds of research projects early in the pipeline (and indeed one of the main reasons the Oxford group has been putting them after a round of AGISF) is that doing so makes it much harder to filter for people who are actually motivated by safety. Because there is such demand among ML students for research project opportunities, we worried that if we didn’t filter by having them do AGISF first, we might attract lots of people who are mainly interested in capabilities research—and then end up putting our efforts and resources towards furthering capabilities rather than safety (by giving ‘capabilities students’ skills and experience in ML research). Do you have any thoughts on this? In particular, is there a reason you don’t worry about it?
If there’s a way of running what you describe without this being a significant risk, I’d be very excited about some groups trying the approach! And as Charlie mentions, the AI Safety Hub would be very happy to support them in running the research project (even at this early stage of the funnel). :))