Ah, it seems my original comment was unclear. I was objecting to the conjunction of (a) AI systems already having the potential to cause harm (as evidenced by the Weapons of Math Destruction, Nature, and MegaSyn links) and (b) a pause in frontier AI reducing harms. The potential harms under (a) wouldn’t be mitigated at all by (b), so I think it’s either a bit confused or misleading to link them in this article. Does that clarify my objection?
In general I’m quite uncomfortable with the equivocation I see in a lot of places between “current models are actually causing concrete harm X” and “future models could have the potential to cause harm X” (as well as points in between, and the interchanging of special-purpose and general AI). I think these equivocations particularly harm the debate on open-sourcing, which I value and which feels especially under threat right now.
It’s the other way around – comprehensive enforcement of laws to prevent current harms also prevents “frontier models” from getting developed and deployed. See my comment.
It’s unethical to ignore the harms from uses of open-source models (see the laundering of authors’ works, or the training on and generation of CSAM).
Harms there need to be prevented too, both from the perspective of not hurting people in society now and from the perspective of preventing the build-up of risk.
Also, this raises the question of whether “open-source” models are even “open-source” in the way software is: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807