Fascinating that very few top AI Safety organizations are looking for more funding. By my count, only 4 of these 17 organizations are even publicly requesting donations this year: three independent research groups (GCRI, CLR, and AI Impacts) and an operations org (BERI). Across the board, it doesn’t seem like AI Safety is very funding constrained.
Based on this report, I think the best donation opportunity among these orgs is BERI, the Berkeley Existential Risk Initiative. Larks says that BERI “provides support to existential risk groups at top universities to facilitate activities (like hiring engineers and assistants) that would be hard within the university context.” According to BERI’s blog post requesting donations, this support includes:
$250k to hire contracted researchers and research assistants for university and independent research groups.
$170k for additional support: productivity coaches, software engineers, copy editors, graphic designers, and other specialty services.
Continuing to employ two machine learning research engineers to work alongside researchers at CHAI.
Hiring Robert Trager and Joslyn Barnhart as Visiting Senior Research Fellows with GovAI, along with a small team of supporting research personnel.
Supporting research on European AI strategy and policy in association with CSER.
Combining immediate COVID-19 assistance with long-term benefits.
BERI is also supporting new existential risk research groups at other top universities, including:
The Autonomous Learning Laboratory at UMass Amherst, led by Phil Thomas
Meir Friedenberg and Joe Halpern at Cornell
InterACT at UC Berkeley, led by Anca Dragan
The Stanford Existential Risks Initiative
Yale Effective Altruism, to support x-risk discussion groups
Baobao Zhang and Sarah Kreps at Cornell
Donating to BERI seems to me like the only way to give more money to AI Safety researchers at top universities. FHI, CHAI, and CSER aren’t publicly seeking donations, seemingly because anything you donate directly might end up either (a) replacing funding they would’ve received from their university or other donors, or (b) restricted in what they’re allowed to spend it on. If that’s true, then the only way to counterfactually increase funding at these groups is through BERI.
If you would like, click here to donate to BERI.
Depending on how you interpret this comment, the LTFF is looking for funding as well.
(Disclosure: I run EA Funds.)
Yes, looks like LTFF is also looking for funding. Edited, thanks.
[Disclosure: I work for CSER]
I completely agree that BERI is a great organisation and a good choice. However, I will also just briefly note that FHI, CHAI and CSER (like any academic groups) are always open to receiving donations:
FHI: https://www.fhi.ox.ac.uk/support-fhi/
CSER: https://www.philanthropy.cam.ac.uk/give-to-cambridge/centre-for-the-study-of-existential-risk?table=departmentprojects&id=452
CHAI: If you wanted to donate to them, here is the relevant web page. Unfortunately it is apparently broken at time of writing—they tell me any donation via credit card can be made by calling the Gift Services Department on 510-643-9789.
Thanks Hayden!
FLI is also quite funding-constrained, particularly on technical-adjacent policy research work, where in my opinion there is going to be a lot of important research and a dearth of resources to do it. For example, the charge to NIST to develop an AI risk assessment framework, just passed in the US NDAA, is likely to be extremely critical to get right. FLI will be working hard to connect technical researchers with this effort, but is very resource-constrained.
I generally find the idea that AI safety (including research) is not funding constrained to be an incorrect and potentially dangerous one — but that’s a bigger topic for discussion.