Thanks for your feedback, Ben.
I totally agree with point 1, and you’re right that this post is really estimating the total number of people who work at AI safety organizations and then using that number as a proxy for the size of the field. As you said, there are a lot of people who aren’t completely focused on AI safety but still make significant contributions to the field. For example, an AI researcher might consider themselves an “LLM researcher” and split their time between non-AI-safety work, like evaluating models on benchmarks, and AI safety work, like developing new alignment methods. Such a researcher would not be counted in this post.
I might add an “other” category to the estimate to avoid this form of undercounting.
Regarding point 2, I collected the list of organizations and estimated the number of FTEs at each using a mixture of Google Search and Gemini Deep Research. The lists are my attempt to find as many AI safety organizations as possible, though of course I may be missing a few. If you can think of any that aren’t in the list, I would appreciate it if you shared them so that I can add them.
Thanks for the response, Stephen. To clarify point 1, I’m also saying that there may be researchers who are more or less completely focused on AI safety but simply don’t brand themselves that way and don’t belong to an AI safety organization.
For point 2, I think the data collection methodology should be disclosed in the post. I would also be interested to know whether you used Gemini Deep Research to identify relevant organizations and then verified them yourself (including the number of employees), or whether you took Gemini’s employee-count estimates as given.
Re missing organizations: like I said, I think looking through Chinese research institutes is a good place to start. There’s also a bunch of “Responsible AI”-branded initiatives in the US (e.g., https://www.cmu.edu/block-center/responsible-ai) which should possibly be included, depending on your definition of “AI safety”. (I think the post would also benefit from including the guidelines you used to determine what counts as AI safety.)
Thanks for the hard work!
I used Gemini Deep Research to discover organizations and then manually visited their websites to produce the estimates.