Thanks. I would be interested in others’ thoughts on that point as well.
To clarify, I didn’t read the post’s question as “What did EA do wrong with OpenAI?” but instead as “What should we learn from Altman ‘showing his true colors’?” So it wasn’t intended as a postmortem of OpenAI per se. I strongly suspect that most EAs would have agreed with my general points in 2015 . . . but how much weight did they give them versus other considerations? I wasn’t there, so I’ll have to rely on others to say.
Looking at the posts you shared, both seem to take Altman and OpenAI mostly at their word that the mission was . . . open (as opposed to proprietary) AI. That doesn’t seem to be the mission anymore, and I’d argue OpenAI is more dangerous as a result.[1] So it seems there may have been inadequate consideration of the principles I listed when analyzing the risk of that particular failure mode. Given that OpenAI was primarily funded elsewhere, it’s not clear how much EA could have done about this risk, other than perhaps being more wary about encouraging people to work for/with OpenAI.
Whatever the downsides of publicly releasing AI information, an organization focused on that mission isn’t likely to rake in the massive amounts of $$$ needed to get where OpenAI is now on capabilities. At some point, OpenAI pivoted to acting much more like any for-profit company would, rather than like an organization serving the public interest.