I found this Peter Wildeford piece helpful. My rough understanding now is that it was the (implicit?) rejection of “lawful use”, especially within classified contexts, that was the contentious bit all along.
But I’m still uncertain about the extent to which these contracts can be renegotiated in the future as capabilities evolve, and the extent to which black-swan-type future capabilities could be “lawfully” used in secret, under classification. And presumably the nature of classified uses will be kept secret from OpenAI as well?
https://x.com/austinc3301/status/2027639210874966060