Full Tweet below:
This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.
Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.
Instead, @AnthropicAI and its CEO @DarioAmodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission—a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.
The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.
Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.
As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.
Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.
In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.
America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.
I’ve never been more proud to be part of the Effective Altruism movement
This was my first thought too. This line made my heart leap with joy!
“Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission”
Anthropic aren’t objecting to killbots as a matter of principle though, they are just saying the tech isn’t reliable yet. The stand on surveillance seems principled and I absolutely admire Amodei for risking his business to do the right thing, but let’s avoid deceiving ourselves about what his stance actually is.
Amodei is not writing a paper to be published in Philosophy and Public Affairs. He is responding to an insane government official serving as the secretary of “war” of the world’s most powerful nation. What do you expect him to say? What would you have said?
Hopefully Americans will take this as a strong signal that their administration is in complete support of mass surveillance and autonomous weapons. I could scarcely think of a clearer signal.
I don’t think this is meaningfully different from previous admins (not sure about autonomous weapons but certainly mass surveillance of Americans at home has been going on since the 2010s).
Department of War: “We don’t trade with ants”
The doublethink involved in trying to sell government strong-arming of private companies as a move for freedom and American values is wild.
I counted 13 lies. Not bad for 140 characters!
The EA movement has a PR problem with roughly half the American political spectrum, which includes the US President and also Elon Musk, whose xAI is currently the fourth most capable American AI company. Judging from the comments here, on X, and elsewhere, the plan seems to be to make it worse. Frankly the discourse isn’t “effective” at all, but virtue signaling.
There are areas where almost unbridgeable tensions between the EA and MAGA movements exist, but AI alignment really doesn’t have to be one of them.
In this case Anthropic chose to supply the DoW via a partnership with a company deeply embedded in the administration’s part of the political spectrum, and even pointedly denied any objections to being used to support the administration’s little expedition in Venezuela, and the administration decided that wasn’t enough. There are many criticisms that can be made of Anthropic’s stance on those issues; reluctance to engage with the current US administration isn’t one of them.
If declining to actively support MAGA in developing AI with the explicit purpose of serving as an autonomous killing device is “virtue signalling”, what’s left of “AI alignment” to pursue?
I appreciate the different perspectives @Aithir , and it takes guts to go against the crowd (I upvoted). What do you think EAs should do differently? Both inside and outside of Anthropic
I once had a conversation with a friend who felt that Anthropic advancing the AI frontier (despite their explicit commitment not to) was fine because they’re “leading from the front” in terms of their ethical stance.
It seems like that might not actually work? Advancing the frontier presumably encourages other labs to compete—and if those labs don’t have the same ethical strictures, then leading from the front has no effect except to have moved the frontier forward faster than it would have otherwise…
(Referencing OpenAI’s deal with the Pentagon announced shortly after the Anthropic sanctions)
Dario deserves his own memecoin ❤️