I think it’s probably true that teams inside major labs are better placed to work on AI lab coordination broadly, and this post was published before news of the Frontier Model Forum came out. Still, there is room for coordination between labs to promote AI safety outcomes, e.g. something that brings together open-source actors. However, this project area is probably less tractable and neglected now than when we originally shared this idea.
Hi, the General Longtermism Team at Rethink Priorities is currently looking to facilitate faster and better creation of entrepreneurial longtermist projects – that is, new organizations, infrastructure, programs, and services that we believe will cost-effectively contribute to reducing existential risk. Some of these projects are likely to be oriented around AI safety.
I’ll DM you our expression of interest form to be a founder/co-founder for one of these projects.
Thanks for the question. At the time we were generating the initial list of ideas, it wasn’t clear that AI safety was funding-constrained rather than talent-constrained (or even idea-constrained). As you’ve pointed out, it seems more plausible now that finding additional funding sources could be valuable for a couple of reasons:
Helps respond to the higher funding bar that you’ve mentioned.
Takes advantage of new entrants to AI-safety-related philanthropy, notably the mainstream foundations that have now become interested in the space.
I don’t have a strong view on whether additional funding should be used to start a new fund or whether it would be more efficient to direct it towards existing grantmakers. I’m pretty excited that recently established grantmakers like Manifund are trying out new ways of being more responsive to potential grantees. Nor do I have a strong view on whether ideas around increasing funding for AI safety are more valuable than those listed above, but I’d be pretty excited about the right person doing something around educating mainstream donors about AI safety opportunities.
Hi Matt, I think it’s right that there’s some distinction between domestic and international governance. Unless otherwise specified, our project ideas were usually developed with the US in mind. When evaluating the projects, our overall view was (and still is) that the US is probably the most important national actor for AI risk outcomes, and that international governance around AI is substantially less tractable, since effective international governance will need to involve China. I’d probably favour field-building efforts focused on the US, then the EU, then the UK, in that order, before initiatives aimed at international orgs.
In the short term, prospects for international governance of AI seem dim, given the political gridlock in the UN since the Russian invasion of Ukraine. There could still be some high-leverage international governance opportunities, e.g. making the OECD AI incidents database very good, but we haven’t looked into that much.
We used the following terms:
Germicidal light vs. Germicidal UV light
Low wavelength light vs. Far-UVC
Upper room germicidal light vs. upper room UVC
These terms and their accompanying descriptions (which were otherwise kept the same) can be found in Appendix item IV, ‘Description of GUV light systems’.