“100-200” is a serious undercount. My two-year-old, extremely incomplete count gave 300. And the field is way bigger now.
(In the footnote you hedge this as being a claim about the order of magnitude, but I think even that might be untrue quite soon. But changing the main claim to “on the order of 200 people” would silence my pedant alarm.)
Here’s one promising source.
Hmm, interesting. My first draft said “under 1,000” and I got lots of feedback that this was way too high. Looking at your count, I think many of its numbers are also too high. For example:
FHI AIS is listed at 34, but the entire FHI staff by my count is 59, and that includes lots of philosophers and biosecurity people, plus GovAI (where I work this summer, though my opinions are of course my own), which is definitely not AI safety technical research. The actual AI safety research group is 4.
MIRI is listed at 40, when their “research staff” page has 9 people.
CSET is listed at 5.8. Who at CSET does alignment technical research? CSET is a national security think-tank that focuses on AI risks, but is not explicitly longtermist, let alone a hub for technical alignment research!
CHAI is listed at 41, but their entire staff is 24, including visiting fellows and assistants.
Should I be persuaded by the Google Scholar label “AI Safety”? What percentage of their time do the listed researchers spend on alignment research, on average?
Agreed (I’m not checking your numbers, but this all sounds right [MIRI may have more than 9 but fewer than 40]). Also, AI Impacts, where I’m a research intern, currently/recently has 0–0.2 FTEs on technical AI safety, and I don’t think we’ve ever had more than 1, much less the 7 in Gavin’s count.
Gavin’s count says it includes strategy and policy people, and I think AI Impacts falls under that. He estimated these accounted for half of the field then. (But I think that 50% adjustment should have been applied when quoting his historical figure, since this post was clearly just about technical work.)
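(For concreteness, a back-of-envelope version of that adjustment, taking his ~300 total and the 50% strategy/policy share at face value:

$$300 \times 0.5 = 150,$$

i.e. roughly 150 technical people at the time, which sits inside the quoted “100-200” range.)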
Sure, good points. (But also note that AI Impacts had more like 4 than 7 FTEs in its highest-employment year, I think.)
The numbers are stale, and both FHI and MIRI have suffered a bit since. But I’ll try and reconstruct the reasoning:
FHI: I think I was counting the research scholars, expecting that programme to grow. (It didn’t.)
MIRI: Connected people told me that they had a lot more people than the Team page shows, and I did a manual count. Hubinger didn’t blink when someone suggested 20-50 to him.
CSET: There definitely were some aligned people at the time, like Ashwin.
CHAI: I counted the PhD students.
Yeah, Scholar is a source of leads, not a source of confirmed true-blue people.
I agree with others that these numbers were way too high two years ago and are still way too high.
Happy to defer, though I wish I were deferring to more than one bit of information.
Do you have a rough estimate of the current size?
Not really. I would guess 600, under a definition like “is currently working seriously on at least one alignment project”. (And I’m not counting the indirect work which I previously obsessed over.)
Great, thanks—I appreciate it. I’d love a systematic study akin to the one Seb Farquhar did years back.
https://forum.effectivealtruism.org/posts/Q83ayse5S8CksbT7K/changes-in-funding-in-the-ai-safety-field
With “100-200” I really had FTEs in mind, rather than the “working seriously on at least one alignment project” threshold (and maybe I should edit the post to reflect this). What do you think the FTE number is?
I wouldn’t want my dumb guess to stand with any authority. But: 350?