The positive media storm for Anthropic is bigger than I thought it would be.
Almost every major news network has featured them and almost all of it puts a halo on Amodei (which feels a bit icky but hey).
And every fourth post on my LinkedIn is along the lines of:
“Claude hits No. 1 on the App Store”
“the idea that no big tech has morals is dead”
“my 3-year love affair with GPT is over”
“I made the switch to Claude and I’ll never look back”
As much as refusing the govt. contract might delay their IPO and give their valuation a temporary hit, they could hardly have hoped for a better PR flood. Every user who switches not only helps them but hurts their biggest competitor. It’s also good timing for them, because right now their product is probably better than OpenAI’s, which wasn’t the case a year ago and might not be the case six months from now.
It’s still unclear whether this will prove a good business decision as well as a “moral” one, but I suspect it will.

Dean Ball’s commentary on this reframed the issue for me: https://www.hyperdimensional.co/p/clawed
The big difference, however, is that Anthropic is essentially using the contractual vehicle to impose what feel less like technical constraints and more like policy constraints on the military. Think of the difference between “this fighter jet is not certified for flight above such-and-such an altitude, and if you fly above that altitude, you’ve breached your warranty” and “you may not fly this jet above such-and-such an altitude.” It is probably the case that the military should not agree to terms like this, and that private firms should not try to set them.
The contract was not illegal, just perhaps unwise, and even that probably only in retrospect. Note that this is true even if you agree with the underlying substance of the limitations. You can support restrictions on mass domestic surveillance and lethal autonomous weapons, but disagree that a defense contract is the optimal vehicle to achieve that policy outcome. The way you achieve new policy outcomes, under the usual rules of our republic, is to pass a law...
I agree that there’s something iffy/non-democratic in theory about putting that kind of constraint around the Pentagon, and that it would have been prudent for them to decline it in the first place. An analogy I read on Substack: if an epidural manufacturer told a government hospital “you’re welcome to use our drug so long as you don’t use it in any abortions,” it would probably be prudent to decline that contract (too much overhead).
Anyway this reframing put one sentence in particular by Dario into a new light: “To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI.” In other words, because we know what the law should be and what it’s probably going to be, we should implement that policy today. I think many of us can think of examples where we’d be uncomfortable with a billionaire tech CEO saying that.
Surely you could phrase things the other way round?
“We’re pretty sure this will be made illegal in 10 years time, as the law catches up to our technology advances. However, it’s not illegal now, so feel free to buy it from us and use it!”
I’d be really uncomfortable with a billionaire tech CEO openly saying that.
I’m not sure democracy arguments work that well for military matters. The people against whom military actions will be deployed are obvious stakeholders, yet they get no input into any feasible “democratic” process that determines what the US military does, and procedural democracy is compatible with the US doing literally anything to non-citizens to advance US interests. Given that, attempting to restrain the US military in ways that are legal and non-deceptive doesn’t seem that procedurally dubious to me.
This is a great intuition pump.
That is encouraging! Scott’s post linking to various prediction markets for Anthropic’s implied valuations was also heartening.