I agree that there’s a lot of evidence that people at OpenAI have thought that AI could be a major risk, and I think that these are good examples.
I said here, “concrete/specific large-scale risks of their products and the corresponding risk-mitigation efforts (outside of things like short-term malicious use by bad API actors, where they are doing better work).”
Just looking at the examples you posted, most feel pretty high-level and vague, and not closely related to their specific products.
> For example, Altman signed the CAIS AI Safety Statement, which reads...
This was a one-sentence statement. To me, it reads a lot like saying, “Someone should deal with this, but not exactly us.”
> The framework itself goes into more detail, proposing scorecards for assessing risk in each category.
I think this is a good step, but it seems pretty vague to me. There’s fairly little quantifiable content, and a lot of words like “medium risk” and “high risk”.
From what I can tell, the “teeth” in the document amount to “changes get brought up to management, and our board”, which doesn’t fill me with confidence.
Relatedly, I’d be quite surprised if they actually followed through on much of this in the next 1-3 years, but I’d be happy to be wrong!