I appreciate the way this post adds a lot of clarity and detail to pause proposals. Thanks for writing it and thanks also to the debate organisers.
However, I think you’re equivocating somewhat unhelpfully between LLM development (which would presumably be affected by a pause) and special-purpose model development (e.g. the linked MegaSyn example), which would probably not be. This matters because the post claims that AI is currently an emergency and that harms are already occurring. For a pause to prevent these harms, they would have to come from cutting-edge LLMs, and I’m not aware of any compelling examples of that.
Tracking compute is required for both. These models provide sufficient reason to track compute and to ensure that other abuses are not occurring, which is why I think it’s relevant.
Thanks—this is clarifying. I think my confusion was down to not understanding the remit of the pause you’re proposing. How about we carry on the discussion in the other comment on this?
Hmm, when my friends talk about a government-enforced pause, they most often mean a limit on training compute for LLMs. (Maybe you don’t think that’s “compelling”? Seems at least as compelling as other versions of “pause” to me.)
Ah, it seems my original comment was unclear. I was objecting to the conjunction of (a) AI systems already have the potential to cause harm (as evidenced by the Weapons of Math Destruction, Nature, and MegaSyn links) and (b) a pause in frontier AI would reduce harms. The potential harms from (a) wouldn’t be mitigated at all by (b), so I think it’s either a bit confused or misleading to link them in this article. Does that clarify my objection?
In general I’m quite uncomfortable with the equivocation I see in a lot of places between “current models are actually causing concrete harm X” and “future models could have the potential to cause harm X” (as well as points in between, and interchanging special-purpose and general AI). I think these equivocations particularly harm the debate on open-sourcing, which I value and which feels especially under threat right now.
It’s the other way around – comprehensive enforcement of laws to prevent current harms also prevents “frontier models” from getting developed and deployed. See my comment.
It’s unethical to ignore the harms from uses of open-source models (see the laundering of authors’ works, or training on and generation of CSAM).
Harms there need to be prevented too, both from the perspective of not hurting people in society now and from the perspective of preventing the build-up of risk.
Also, this raises the question of whether “open-source” models are even “open-source” in the way software is: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807