Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.
Would the information in this quote fall under any of the Freedom of Information Act (FOIA) exemptions, particularly those concerning national security or confidential commercial information/trade secrets? Or would there be other reasons why it wouldn’t become public knowledge through FOIA requests?
I stumbled upon this quote in this recent Economist article [archived] about OpenAI. I couldn’t find any additional source supporting the claim, so it might not be accurate. The earliest mention of it I could find is from January 17th, 2023, although that one only describes OpenAI as “proposing” the rule change.
If true, this would make the profit cap much less meaningful, especially on longer AI timelines. For example, a $1 billion investment made in 2023 would be capped at ~1,540x by 2040, rather than the original 100x.
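For concreteness, here is the arithmetic that appears to produce the ~1,540x figure. The assumption (mine, reconstructed from the number rather than stated in the quote) is that the 100x cap grows by 20% per year starting in 2025, so by 2040 it has compounded for 15 years: 100 × 1.2^15 ≈ 1,540.7.

```python
def cap_multiple(year: int, base_cap: float = 100.0,
                 growth: float = 0.20, start_year: int = 2025) -> float:
    """Assumed profit-cap multiple in a given year.

    Assumption (not confirmed by the quoted article): the cap starts at
    base_cap (100x) and compounds by `growth` (20%) each year from
    `start_year` (2025) onward.
    """
    years_of_growth = max(0, year - start_year)
    return base_cap * (1 + growth) ** years_of_growth

print(cap_multiple(2040))  # ≈ 1540.7, matching the post's "~1540 times"
```

Under this rule the cap roughly doubles every four years, which is why the restriction becomes progressively weaker on longer timelines.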