This contains several inaccuracies and misleading statements that I won’t fully enumerate, but here are at least two:
The Nucleic Acid Synthesis Act does not at all “require biolabs that receive federal funding to confirm the real identity of customers who are buying their synthetic DNA.” It empowers NIST to create standards and best practices for screening.
It’s not the case that “The particular bills that we edited did not pass Congress, but this is because almost nothing passed out of the 118th Congress.” Lots of bills passed in the CR and other packages, though it was a historically dysfunctional and slow year.
Personal gripe: the model legislation is overly prescriptive in a way that does not future-proof the statute enough to keep pace with the fast-moving nature of AI and the ways governance may need to shift and adapt.
Your point about the Nucleic Acid Synthesis Act is well-taken; while writing this post, I confused the Nucleic Acid Synthesis Act with Section 4.4(b)(iii) of Biden’s 2023 Executive Order, which did have that requirement. I’ll correct the error.
We care a lot about future-proofing our legislation. Section 6 of our model legislation takes the unusual step of allowing the AI safety office to modify all of the technical definitions in the statute via regulation, because we know that the paradigms that are current today might be outdated in 2 years and irrelevant in 5. Our bill would also create a Deputy Administrator for Standards, whose office’s main task would be to keep abreast of “the fast moving nature of AI” and to update the regulatory regime accordingly. If you have specific suggestions for how to make the bill even more future-proof without losing its current efficacy, we’d love to hear them.
Sure. I’m not going to be able to respond any more to this thread, but the methods of governance prescribed are not themselves future-proof, as AI governance may need to change as the tech or landscape changes, and the definitions are not future-proof either.
Then there should be future legislation? Why is it on CAIP and this legislation to foresee the entire future? That’s a prohibitively high bar for regulation.
Most legislation is written broadly enough that it won’t have to be repealed, because repealing legislation is rare. Take the act’s current definition of frontier AI model, which is extremely prescriptive and in some cases uses a 10^26-operation compute threshold. To be future-proof, such definitions are usually written broadly enough that the executive can update the technical specifics as the technology advances; regulations are the place for details like these, not the legislation itself.

I can imagine a future where models are all over 10^26 and meet the other requirements of the model act’s definition of frontier AI model. The reason to govern the frontier in the first place is that you don’t know what’s coming; it’s not as if we know that dangerous capabilities emerge at 10^26, so there’s no reason for that threshold to put models under regulatory scrutiny forever. Eventually we might also see algorithmic efficiency breakthroughs, so that the most capable (and therefore most dangerous) models no longer need as much compute and might not even qualify as frontier AI models under the Act. So I see the risk of this bill first capturing a bunch of models it doesn’t mean to cover, and then possibly not covering any models at all, because it isn’t written in a future-proof way. The bill is written more like a regulation at the executive level than legislation.
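To make the over-inclusiveness half of that worry concrete, here is a toy projection. The starting compute, growth rate, and frontier-to-trailing lag below are my own illustrative assumptions, not figures from the bill or any official source:

```python
# Toy projection: how long does a fixed 10^26-operation threshold keep
# discriminating between frontier models and trailing-edge models?
# All numbers here are illustrative assumptions, not data.

THRESHOLD_FLOP = 1e26        # the statute's fixed compute line
FRONTIER_2024_FLOP = 5e25    # assumed frontier training compute in 2024
GROWTH_PER_YEAR = 4.0        # assumed yearly growth in frontier compute
LAG_YEARS = 2                # assumed lag of trailing models behind the frontier

for year in range(2024, 2031):
    frontier = FRONTIER_2024_FLOP * GROWTH_PER_YEAR ** (year - 2024)
    trailing = FRONTIER_2024_FLOP * GROWTH_PER_YEAR ** (year - 2024 - LAG_YEARS)
    print(f"{year}: frontier {frontier:.1e} "
          f"({'over' if frontier > THRESHOLD_FLOP else 'under'} threshold), "
          f"trailing {trailing:.1e} "
          f"({'over' if trailing > THRESHOLD_FLOP else 'under'} threshold)")
```

Under these assumptions, even trailing-edge models cross 10^26 within a few years, at which point the fixed line no longer picks out the frontier at all; and if algorithmic efficiency outpaces compute growth, the opposite failure appears instead.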
Feels like your true objection here is that frontier AI development just isn’t that dangerous? Otherwise I don’t know how you could be more concerned about a few piddling “inaccuracies and misleading statements that I won’t fully enumerate” than about nobody doing CAIP’s work to get the beginnings of safeguards in place.
Our model legislation does allow the executive to update the technical specifics as the technology advances.
The very first text in the section on rulemaking authority is “The Administrator shall have full power to promulgate rules to carry out this Act in accordance with section 553 of title 5, United States Code. This includes the power to update or modify any of the technical thresholds in Section 3(s) of this Act (including but not limited to the definitions of “high-compute AI developer,” “high-performance AI chip,” and “major AI hardware cluster”) to ensure that these definitions will continue to adequately protect against major security risks despite changes in the technical landscape such as improvements in algorithmic efficiency.” This is on page 12 of our bill.
I’m not sure how we could make this clearer, and I think it’s unreasonable to attack the model legislation for not having this feature, because it very much does have this feature.