Estimating the Current and Future Number of AI Safety Researchers

Summary

I estimate that there are about 300 full-time technical AI safety researchers, 100 full-time non-technical AI safety researchers, and 400 AI safety researchers in total today. I also show that the number of technical AI safety researchers has been increasing exponentially over the past several years and could reach 1000 by the end of the 2020s.

Introduction

Many previous posts have estimated the number of AI safety researchers, and a generally accepted order-of-magnitude estimate is 100 full-time researchers. The question of how many AI safety researchers there are is important because the marginal value of work in an area is roughly proportional to how neglected it is.

The purpose of this post is to analyze this question in detail and hopefully come up with a fairly accurate estimate. I’m going to focus mainly on estimating the number of technical researchers because non-technical AI safety research is more varied and difficult to analyze, though I’ll create estimates for both types of researchers.

I’ll first summarize some recent estimates, then come up with my own estimate, and finally compare all the estimates.

Definitions

I’ll be using some specific terms in the post which I think are important to define to avoid misunderstanding or ambiguity. First, I’ll define ‘AI safety’, also known as AI alignment, as work that is done to reduce existential risk from advanced AI. This kind of work tends to focus on the long-term impact of AI rather than short-term problems such as the safety of self-driving cars or AI bias.

My use of the word ‘researcher’ is a generic term for anyone working on AI safety and is an umbrella term for more specific roles such as research scientist, research engineer, or research analyst.

Also, I’ll only be counting full-time researchers. However, since my goal is to estimate research capacity, what I’m really counting is the number of full-time equivalent researchers. For example, two part-time researchers working 20 hours per week can be counted as one full-time researcher.

I’ll define technical AI safety research as research that directly targets AI safety, such as technical machine learning work or conceptual research (e.g. ELK). Non-technical research includes work related to AI governance, policy, and meta-level research such as this post.

Past estimates

  • 80,000 hours: estimated that there were about 300 people working on reducing existential risk from AI in 2022 with a 90% confidence interval between 100 and 1,500. The estimate used data drawn from the AI Watch database.

  • A recent post (2022) on the EA Forum estimated that there are 100-200 people working full-time on AI safety technical research.

  • Another recent post (2022) on LessWrong claims that there are about 150 people working full-time on technical AI safety.

  • In a recent Twitter thread (2022), Benjamin Todd counters the idea that AI safety is saturated and says that there are only about 10 AI safety groups, each with about 10 researchers, or about 100 researchers in total. He also states that this number has grown from about 30 in 2017 and that there are 100,000+ researchers working on AI capabilities [1].

  • This Vox article said that about 50 people in the world were working full-time on technical AI safety in 2020.

  • This presentation estimated that fewer than 100 people were working full-time on technical AI alignment in 2021.

  • There were about 2000 ‘highly engaged’ members of Effective Altruism in 2021. 450 of those people were working on AI safety.

Estimating the number of AI safety researchers

Organizational estimate

I’ll estimate the number of technical AI safety researchers and then the number of non-technical AI safety researchers. My main estimation method will be what I call an ‘organizational estimate’ which involves creating a list of organizations working on AI safety and then estimating the number of researchers working full-time in each organization to create a table similar to the one in this post. I’ll also estimate the number of independent researchers. Note that I’ll only be counting people who work full-time on AI safety.


To create the estimates in the tables below I used the following sources (ordered from most to least reliable):

  • Web pages listing all the researchers working at an organization.

  • Asking people who work at the organization.

  • Scraping publications and posts from sites including The Alignment Forum, DeepMind and OpenAI and analyzing the data [2].

  • LinkedIn insights to estimate the number of employees in an organization.

The confidence column shows how much information went into the estimate and how confident I am about the estimate.

Technical AI safety research organizations

| Name | Estimate | Lower bound (95% CI) | Upper bound (95% CI) | Overall confidence |
| --- | --- | --- | --- | --- |
| Other | 80 [3] | 15 | 150 | Medium |
| Centre for Human-Compatible AI | 25 | 5 | 50 | Medium |
| DeepMind | 20 | 5 | 60 | Medium |
| OpenAI | 20 | 5 | 50 | Medium |
| Machine Intelligence Research Institute | 10 | 5 | 20 | High |
| Center for AI Safety (CAIS) | 10 | 5 | 14 | High |
| Fund for Alignment Research (FAR) | 10 | 5 | 15 | High |
| GoodAI | 10 | 5 | 15 | High |
| Sam Bowman | 8 | 2 | 10 | Medium |
| Jacob Steinhardt | 8 | 2 | 10 | Medium |
| David Krueger | 7 | 5 | 10 | High |
| Anthropic | 15 | 5 | 40 | Low |
| Redwood Research | 12 | 10 | 20 | High |
| Future of Humanity Institute | 10 | 5 | 30 | Medium |
| Conjecture | 10 | 5 | 20 | High |
| Algorithmic Alignment Group (MIT) | 5 | 3 | 7 | High |
| Aligned AI | 4 | 2 | 5 | High |
| Apart Research | 4 | 3 | 6 | High |
| Foundations of Cooperative AI Lab (CMU) | 3 | 2 | 8 | Medium |
| Alignment of Complex Systems Research Group (Prague) | 2 | 2 | 8 | Medium |
| Alignment Research Center (ARC) | 2 | 2 | 5 | High |
| Encultured AI | 2 | 1 | 5 | High |
| Totals | 277 | 99 | 558 | Medium |

Non-technical AI safety research organizations

| Name | Estimate | Lower bound (95% CI) | Upper bound (95% CI) | Overall confidence |
| --- | --- | --- | --- | --- |
| Centre for Security and Emerging Technology (CSET) | 10 | 5 | 40 | Medium |
| Epoch AI | 4 | 2 | 10 | High |
| Centre for the Governance of AI | 10 | 5 | 15 | High |
| Leverhulme Centre for the Future of Intelligence | 4 | 3 | 10 | Medium |
| OpenAI | 10 | 1 | 20 | Low |
| DeepMind | 10 | 1 | 20 | Low |
| Center for the Study of Existential Risk (CSER) | 3 | 2 | 7 | Medium |
| Future of Life Institute | 4 | 3 | 6 | Medium |
| Center on Long-Term Risk | 5 | 5 | 10 | High |
| Open Philanthropy | 5 | 2 | 15 | Medium |
| AI Impacts | 3 | 2 | 10 | High |
| Rethink Priorities | 8 | 5 | 10 | High |
| Other [4] | 10 | 5 | 30 | Low |
| Totals | 86 | 41 | 203 | Medium |

Conclusions and notes

Summary of the results in the tables above:

  • Technical AI safety researchers:

    • Point estimate: 277

    • Range: 99-558

  • Non-technical AI safety researchers:

    • Point estimate: 86

    • Range: 41-203

  • Total AI safety researchers:

    • Point estimate: 363

    • Range: 140-761

In conclusion, there are probably around 300 technical AI safety researchers, 100 non-technical AI safety researchers and around 400 AI safety researchers in total.[5]

Comparison of estimates

The bar charts below compare my estimates with the estimates from the “Past estimates” section.

In the first chart, my estimate is higher than all the historical estimates possibly because newer estimates will tend to be higher as the number of AI safety researchers increases or because my estimate includes more organizations. My estimate is similar to the other total estimates in the second chart.

How has the number of technical AI safety researchers changed over time?

Technical AI safety research organizations

| Name | Number of researchers | Founding year |
| --- | --- | --- |
| Center for AI Safety (CAIS) | 10 | 2022 |
| Fund for Alignment Research (FAR) | 10 | 2022 |
| Conjecture | 10 | 2022 |
| Aligned AI | 4 | 2022 |
| Apart Research | 4 | 2022 |
| Encultured AI | 2 | 2022 |
| Anthropic | 15 | 2021 |
| Redwood Research | 12 | 2021 |
| Alignment Research Center (ARC) | 2 | 2021 |
| Alignment Forum | 50 | 2018 |
| Sam Bowman | 8 | 2020 [6] |
| Jacob Steinhardt | 8 | 2016 [6] |
| David Krueger | 7 | 2016 [6] |
| Center for Human-Compatible AI | 30 | 2016 |
| OpenAI | 20 | 2016 |
| DeepMind | 20 | 2012 |
| Future of Humanity Institute (FHI) | 10 | 2005 |
| Machine Intelligence Research Institute (MIRI) | 15 | 2000 |

I graphed the data in the table above to show how the total number of technical AI safety organizations has changed over time:

The blue dots are the actual number of organizations in each year and the red line is an exponential model fitting the data.

I found that the number of technical AI safety research organizations is increasing exponentially at about 14% per year which makes sense given that EA funding is increasing and AI safety seems increasingly pressing and tractable.

Then I extrapolated the same model into the future to create the following graph:

The table above includes 20 technical AI safety research organizations currently in existence and the model predicts that the number of organizations will double to 40 by 2029.
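The fit can be sketched as follows. This is my own minimal Python reconstruction using the founding years from the table above; the exact fitting procedure used in the post is an assumption (here, a least-squares fit to the log of the cumulative counts):

```python
import numpy as np

# Founding years of the technical AI safety groups in the table above.
founding_years = [2000, 2005, 2012, 2016, 2016, 2016, 2016,
                  2018, 2020, 2021, 2021, 2021,
                  2022, 2022, 2022, 2022, 2022, 2022]

years = np.arange(2000, 2023)
# Cumulative number of organizations in existence in each year.
counts = np.array([sum(f <= y for f in founding_years) for y in years])

# Fit an exponential N(t) = a * exp(b * (t - 2000)) via linear regression on log(N).
t = years - 2000
b, log_a = np.polyfit(t, np.log(counts), 1)
growth_rate = np.exp(b) - 1  # implied annual growth rate

print(f"annual growth rate: {growth_rate:.1%}")
print(f"extrapolated count in 2030: {np.exp(log_a + b * 30):.0f}")
```

With this data the fitted growth rate comes out in the low-to-mid teens of percent per year, consistent with the ~14% figure above.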

Technical AI safety researchers

I also created a model to estimate how the total number of AI safety researchers has changed over time. In the model, I assumed that the number of researchers in each organization has increased linearly from zero when each organization was founded up to the current number in 2022. The blue dots are the data points from the model and the red line is an exponential curve fitting the dots.

The model estimates that the number of technical AI safety researchers has been increasing at a rate of about 28% per year since 2000.

The next graph shows the model extrapolated into the future and predicts that the number of technical AI safety researchers will increase from about 200 in 2022 to 1000 by 2028.
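The headcount model can be sketched as follows, assuming each group grows linearly from zero at founding to its 2022 size (headcounts and founding years are taken from the table above; groups founded in 2022 are counted at full size):

```python
# (2022 headcount, founding year) pairs from the table above.
orgs = [(10, 2022), (10, 2022), (10, 2022), (4, 2022), (4, 2022), (2, 2022),
        (15, 2021), (12, 2021), (2, 2021), (50, 2018), (8, 2020),
        (8, 2016), (7, 2016), (30, 2016), (20, 2016), (20, 2012),
        (10, 2005), (15, 2000)]

def researchers_in_year(year):
    """Model: each group's headcount grows linearly from 0 at its
    founding year to its 2022 size (2022 foundings count in full)."""
    total = 0.0
    for size, founded in orgs:
        if year < founded:
            continue
        span = 2022 - founded
        frac = 1.0 if span == 0 else min((year - founded) / span, 1.0)
        total += size * frac
    return total

for y in (2010, 2016, 2022):
    print(y, round(researchers_in_year(y)))
```

Fitting an exponential to this series (as in the organization-count model above) gives the growth rate quoted in the text.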

How could productivity increase in the future?

It’s unclear how the overall productivity of the technical AI safety research community will scale as the number of researchers increases. A well-known law that describes the research productivity of a field is Lotka’s Law [7]. The formula for Lotka’s Law is:

Y = C / X^n

where Y is the number of researchers who have published X articles, C is the number of contributors in the field who have published exactly one article, and n is a constant that usually has a value of about 2. I found that a value of 2.3 best fits the data from the Alignment Forum [8]:

The graph above shows that about 80 people have published a post on the Alignment Forum in the past six months. In this case, C = 80 and n = 2.3. Then the total number of posts published can be calculated by multiplying Y by X for each value of X and adding all the values together. For example:

80 /​ 1^2.3 = ~80 researchers have posted 1 post → 80 * 1 = 80

80 /​ 2^2.3 = ~16 researchers have posted 2 posts → 16 * 2 = 32

80 /​ 3^2.3 = ~6 researchers have posted 3 posts → 6 * 3 = 18

What happens when the number of researchers is increased? In other words, what happens when the value of C is doubled?

I found that when C is doubled, the total number of articles published per year also doubles. In the chart above, the area under the red curve is exactly double the area under the blue curve. In other words, the total productivity of a research field increases linearly with the number of researchers. The reason why is that increasing the number of researchers increases the number of low-productivity and high-productivity researchers equally.
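This can be checked numerically. The sketch below (my own, assuming the fitted values C = 80 and n = 2.3 from above) computes the total number of posts implied by Lotka’s Law and confirms that doubling C exactly doubles the total:

```python
def total_posts(C, n=2.3, max_posts=100_000):
    """Total articles implied by Lotka's Law: sum over X of X * Y(X),
    where Y(X) = C / X**n is the number of authors with X articles each."""
    return sum(X * C / X**n for X in range(1, max_posts + 1))

base = total_posts(80)      # C = 80, as fitted to the Alignment Forum data
doubled = total_posts(160)  # double the number of single-article authors
print(round(base), doubled / base)
```

Because C is a constant multiplier of every term in the sum, the ratio is exactly 2 regardless of n, which is the linear-scaling result described above.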

It’s important to note that simply increasing the number of posts will not necessarily increase the overall rate of progress. More researchers will help if large problems can be broken up and parallelized so that each individual or team can work on a sub-problem. And even if research quality matters more than quantity, increasing the size of the field should also increase the number of highly talented researchers.

Conclusions

I estimated that there are about 300 full-time technical and 100 full-time non-technical AI safety researchers today which is roughly in line with previous estimates though my estimate for the number of technical researchers is significantly higher.

To be conservative, I think the correct order-of-magnitude estimate for the number of full-time AI safety researchers today is still around 100, though I expect this to increase to around 1000 within a few years.

The number of technical AI safety organizations and researchers has been increasing exponentially by about 10-30% per year, and I expect that trend to continue for several reasons:

  • Funding: EA funding has increased significantly over the past several years and will probably continue to increase in the future. Also, AI is and will increasingly be advanced enough to be commercially valuable which will enable companies such as OpenAI and DeepMind to continue funding AI safety research.

  • Interest: as AI advances and the gap between current systems and AGI narrows, it will become easier and require less imagination to believe that AGI is possible. Consequently, it might become easier to get funding for AI safety research. AI safety research will also seem increasingly urgent which will motivate more people to work on it.

  • Tractability: as time goes on, the current AI architectures will probably become increasingly similar to the architecture used in the first AGI system which will make it easier to experiment with AGI-like systems and learn useful properties about them.

By extrapolating past trends, I’ve estimated that the number of technical AI safety organizations will double from about 20 to 40 by 2030 and the number of technical AI safety researchers will increase from about 300 in 2022 to 1000 by 2030. I find it striking how many well-known organizations working on AI safety were founded very recently. This trend suggests that some of the most influential AI safety organizations will be founded in the future.

I then found that the number of posts published per year will likely increase at the same rate as the number of researchers. If the number of researchers increases by a factor of five by the end of the decade, I expect the number of posts or papers per year to also increase by that amount.

Breaking up problems into subproblems will probably help make the most of that extra productivity. As the volume of articles increases, skills or tools for summarization, curation, or distillation will probably be highly valuable for informing researchers about what is currently happening in their field.

  1. ^

    My estimate is far lower, as I would only classify researchers as ‘AI capabilities’ researchers if they push the state of the art forward. Even so, the number of AI safety researchers is almost certainly lower than the number of AI capabilities researchers.

  2. ^

    What I did:

    - Alignment Forum: scrape posts and count the number of unique authors.

    - DeepMind: scrape safety-tagged publications and count the number of unique authors.

    - OpenAI: manually classify publications as safety-related. Then count the number of unique authors.

  3. ^

    Manually curated list of people on the Alignment Forum who don’t work at any of the other organizations. Includes groups such as:

    - Independent alignment researchers (e.g. John Wentworth)

    - Researchers in programs such as SERI MATS and Refine (e.g. carado)

    - Researchers in master’s or PhD degrees studying AI safety (e.g. Marius Hobbhahn)

  4. ^

    There are about 45 research profiles on Google Scholar with the ‘AI governance’ tag. I counted about 8 researchers who weren’t at the other organizations listed.

  5. ^

    Note that the technical estimate is more accurate than the non-technical estimate because technical research is more clearly defined. I also put more research into estimating the number of technical AI safety researchers than non-technical researchers.

    Also bear in mind that since I probably failed to include some organizations or groups in the table, the true figures could be higher.

  6. ^

    These are rough guesses but the model is fairly robust to them.

  7. ^

    Edit: thank you puffymist from the LessWrong comments section for recommending Lotka’s Law over Price’s Law as it is more accurate.

  8. ^

    In case you’re wondering, the outlier on the far right of the chart is John Wentworth.