I am still confused about what exactly OpenAI is requiring here and how (or if) it diverges substantively from Anthropic’s contract. Is this merely a symbolic victory for the DoW? Or does the language about “lawful use” leave a back door somehow?
https://x.com/austinc3301/status/2027639210874966060
I found this Peter Wildeford piece helpful. My rough understanding now is that the (implicit?) rejection of “lawful use” language, especially within classified contexts, was the contentious bit all along.
But I’m still uncertain to what extent these contracts can be renegotiated in the future as capabilities evolve, and to what extent black-swan-type future capabilities could be “lawfully” used in secret, under classification. And presumably the nature of classified uses will be kept secret from OpenAI as well?