[Disclosure: I work for CSER]
I completely agree that BERI is a great organisation and a good choice. However, I will also just briefly note that FHI, CHAI and CSER (like any academic group) are always open to receiving donations:
FHI: https://www.fhi.ox.ac.uk/support-fhi/
CSER: https://www.philanthropy.cam.ac.uk/give-to-cambridge/centre-for-the-study-of-existential-risk?table=departmentprojects&id=452
CHAI: If you wanted to donate to them, here is the relevant web page. Unfortunately it is apparently broken at the time of writing; they tell me any credit-card donation can be made by calling the Gift Services Department on 510-643-9789.
Thanks Hayden!
FLI is also quite funding-constrained, particularly on technical-adjacent policy research work, where in my opinion there is going to be a lot of important research and a dearth of resources to do it. For example, the charge to NIST to develop an AI risk assessment framework, just passed in the US NDAA, will be critical to get right. FLI will be working hard to connect technical researchers with this effort, but is very resource-constrained.
I generally consider the idea that AI safety (including research) is not funding-constrained to be incorrect and potentially dangerous, but that is a bigger topic for discussion.