I think understanding the growth of the field is very important, and I appreciate the work you’re doing. However, I have some concerns about the methodology:
1. It seems to me that this is really “the number of people working at AI safety organizations”, which I think significantly underestimates the number of people working on AI safety. A lot of AI safety work is being done by organizations that don’t explicitly brand themselves as AI safety organizations. I can directly attest to this for technical safety in academia (which is my area), but I expect the same applies to other sectors. There’s also some overcounting, since not every employee of an AI safety organization works on AI safety, but I expect the undercounting to dominate.
To be clear, I think “the number of people working at AI safety organizations” is still a useful number to have, but I think it’s important to be clear that that’s what you’re measuring.
2. Maybe I missed it, but could you share how the data was collected, both for (A) the list of organizations and (B) the number of employees at each organization? On (A), I think many Chinese groups in particular are notably missing from the technical AI safety list, including some that are explicitly branded as AI safety (see, e.g., https://beijing.ai-safety-and-superalignment.cn). Just as an example of (B), I can confirm that at least for my own organization (CHAI), the number is a major undercount. See our website (which is also not a perfect source, since not everyone listed works on AI safety, but I would estimate around 18 technical FTEs).
I appreciate that both of these problems may be quite difficult to solve, and I think this analysis is useful even without solving them. But I think the post as written gives an inaccurate impression of the field. Although it wouldn’t be a complete fix, reframing this as “the number of people working at AI safety organizations” would help significantly.
Thanks for the response, Stephen. To clarify point 1: I’m also saying that there may be researchers who are more or less completely focused on AI safety but simply don’t brand themselves that way and don’t belong to an AI safety organization.
For point 2, I think the data collection methodology should be disclosed in the post. I would also be interested to know whether you used Gemini Deep Research to help identify relevant organizations and then verified them yourself (including employee counts), or whether you took Gemini’s employee estimates as given.
Re missing organizations: as I said, I think looking through Chinese research institutes is a good place to start. There are also a number of “Responsible AI”-branded initiatives in the US (e.g., https://www.cmu.edu/block-center/responsible-ai) that should possibly be included, depending on your definition of “AI safety”. (I think the post would also benefit from including the guidelines you used to determine what counts as AI safety.)
Thanks for the hard work!