University community building seems like the wrong model for AI safety

Reason for this post: many university community builders are considering pivoting their groups hard towards AI safety. From the perspective of how community builders should spend their time, this seems like the wrong tradeoff.

Argument: It seems unlikely that university “community building” frameworks are a great fit for the AI safety space. It’s difficult to build and sustain a community devoted to a cause that <1% of members will be able to get jobs in[1]. Even if it were feasible to sustain such communities, the approach seems unlikely to be optimal, as a ton of organizer time and effort goes to waste[2]. I think a better model would be closer to “recruiting” or “movement building” (edit: I previously used the term “movement building” to describe things like “upskilling” that could be part of a recruitment pipeline but aren’t naturally associated with the word “recruiting”. As others pointed out, the term is pretty vague and unhelpful, so I removed it.)

Edit: by recruiting, I mean building out a pipeline specifically tailored to getting folks into AI safety jobs, in contrast to generally building a community, which is what most EA student orgs currently focus on.

If this is true, then current university community builders should consider not whether to pivot their groups towards AI safety, but whether to leave their groups behind and move into recruiting. If these fields are more efficient than community building, however, they would likely have fewer jobs, meaning fewer opportunities for impact for current community builders[3]. If you buy fanaticism, doing anything you can to improve recruiting/movement building may be worth giving up community building[4]; if you aren’t okay with fanaticism, it seems worthwhile to evaluate the number of opportunities out there and your relative fit compared to others.

Thoughts?

  1. ^

    Recent technical researcher hiring rounds at Anthropic and Redwood have been oversubscribed 100:1. A big reason for this is that candidates are overwhelmingly underqualified, implying that if applicants were more qualified, more would be hired. That said, given how fast interest in the field is growing, it seems likely that applicant numbers will continue to grow faster than job openings, even assuming higher qualifications. (This seems especially likely if, once it fills out its management ranks, Redwood can begin hiring non-EA research scientists. This seems to be their current plan, and it would expand their potential applicant pool many times over.) In this world, the vast majority of interested folks will not be able to contribute technically. While there will certainly be many non-technical jobs in the space, it would be surprising if non-technical roles vastly exceeded technical ones.

  2. ^

    Widely targeted community building is very different from hits-based projects. Given how narrow the qualification requirements for technical researchers are, broadening AI safety community building seems likely to hit quickly diminishing returns.

  3. ^

    I could be wrong about this: maybe AI alignment is so valuable that we should have, say, 10 or 100 recruiters per safety engineer opening. If the main hiring bottleneck is applicant qualifications, however, I’m not sure why we would need a ton of non-technical recruiters/movement builders to solve that.

  4. ^

    Okay, maybe not anything… if you counterfactually displace someone who would have recruited better, that’s almost infinitely bad, right? Maybe a better qualifier would be “as long as your work expands the number of opportunities in the space or is marginally better than the next best alternative in a zero-sum situation.”