I think these are great criteria, Neel. If one or more of the funders had come to me and said, “Hey, here are some people who you’ve offended, or here are some people who say you’re sucking up their oxygen, or here’s why your policy proposals are unrealistic,” then I probably would have just accepted their judgment and trusted that the money is better spent elsewhere. Part of why I’m on the forum discussing these issues is that so far, nobody has offered me any details like that; essentially all I have is their bottom-line assessment that CAIP is less valuable than other funding opportunities.
I wish I could! Unfortunately, despite several conversations and email exchanges with the various AI safety donors, I'm still confused about why they are declining to fund CAIP. The message I've been getting is that other funding opportunities seem more valuable to them, but I don't know exactly what criteria or measurement system they're using.
At least one major donor said that they were trying to measure counterfactual impact: roughly, estimate how much good the laws you're championing would accomplish if they passed, and then ask how close they came to passing. However, I don't understand why this analysis disfavors CAIP. Compared to most other organizations in the space, the laws we're working on are less likely to pass, but they would do much more good if they did pass.
Another possible factor is that my co-founder, Thomas Larsen, left CAIP in the spring of 2024, less than a year after starting the organization. As I understand it, Thomas left for three reasons: he learned that political change is harder than he had initially thought; he was frustrated that CAIP was not powerful enough to accomplish its mission within the short time he expects we have left before superintelligence is deployed; and he did not see a good fit between the skills he wanted to use (research, longform writing, forecasting) and CAIP's day-to-day needs.
Thomas’s early departure is obviously an important piece of information that weighs against donating to CAIP, but given the context, I don’t think it’s reasonable for institutional donors to treat it as decisive. I actually agree with Thomas’s point that CAIP’s mission is very ambitious relative to our resources and that we most likely will not succeed. However, I think it’s worth trying anyway, because the stakes are so high that even a small chance of success is very valuable.
Your point about the Nucleic Acid Synthesis Act is well-taken; while writing this post, I confused the Nucleic Acid Synthesis Act with Section 4.4(b)(iii) of Biden’s 2023 Executive Order, which did have that requirement. I’ll correct the error.
We care a lot about future-proofing our legislation. Section 6 of our model legislation takes the unusual step of allowing the AI safety office to modify all of the technical definitions in the statute via regulation, because we know that the paradigms that are current today might be outdated in 2 years and irrelevant in 5. Our bill would also create a Deputy Administrator for Standards whose section’s main task would be to keep abreast of “the fast moving nature of AI” and to update the regulatory regime accordingly. If you have specific suggestions for how to make the bill even more future-proof without losing its current efficacy, we’d love to hear them.
Our model legislation does allow the executive to update the technical specifics as the technology advances.
The very first text in the section on rulemaking authority reads: "The Administrator shall have full power to promulgate rules to carry out this Act in accordance with section 553 of title 5, United States Code. This includes the power to update or modify any of the technical thresholds in Section 3(s) of this Act (including but not limited to the definitions of 'high-compute AI developer,' 'high-performance AI chip,' and 'major AI hardware cluster') to ensure that these definitions will continue to adequately protect against major security risks despite changes in the technical landscape such as improvements in algorithmic efficiency." This is on page 12 of our bill.
I’m not sure how we could make this clearer, and I think it’s unreasonable to attack the model legislation for not having this feature, because it very much does have this feature.