This is particularly relevant given the recent letter from Anthropic on SB-1047.
I would like to see a steelman of the letter, since it appears to me to significantly undermine Anthropic’s entire raison d’être (which I understood to be: “have a seat at the table by being one of the big players—use this power to advocate for safer AI policies”). And I haven’t yet heard anyone in the AI safety community defend it.
A few DC and EU people tell me that in private, Anthropic (and others) are more unequivocally anti-regulation than their public statements would suggest.
I’ve tried to get this on the record—person X says that Anthropic said Y at meeting Z, or just Y and Z—but my sources have declined.
> This is particularly relevant given the recent letter from Anthropic on SB-1047.
> I would like to see a steelman of the letter since it appears to me to significantly undermine Anthropic’s entire raison d’être (which I understood to be: “have a seat at the table by being one of the big players—use this power to advocate for safer AI policies”). And I haven’t yet heard anyone in the AI Safety community defending it.
I believe that Anthropic’s policy advocacy is (1) bad and (2) worse in private than in public.
But Dario and Jack Clark do publicly oppose strong regulation. See https://ailabwatch.org/resources/company-advocacy/#dario-on-in-good-company-podcast and https://ailabwatch.org/resources/company-advocacy/#jack-clark. So this letter isn’t surprising or a new betrayal; the issue is the preexisting anti-regulation position, insofar as that position is unreasonable.
Can you say a bit more about the following?

> A few DC and EU people tell me that in private, Anthropic (and others) are more unequivocally antiregulation than their public statements would suggest.
> I’ve tried to get this on the record—person X says that Anthropic said Y at meeting Z, or just Y and Z—but my sources have declined.
I’ve heard similar things, as well as accounts of Anthropic throwing its weight around as a “safety” company to try to unduly influence other safety-concerned actors.