Unfortunately, a significant part of the situation is that people with internal experience and a negative impression feel both constrained and conflicted (in the conflict-of-interest sense) about making public statements. This applies to me: I left OpenAI in 2019 for DeepMind (hence the conflict of interest).
Note that Eliezer Yudkowsky's argument in the opening link is that OpenAI's damage was done by fragmenting the AI safety community at its launch.
This damage is done—and I am not sure it bears much relation to what OpenAI is trying to do going forward.
(I am not sure I agree with Eliezer on this one, but I lack the details to tell whether OpenAI's launch really was net negative.)
I’m a complete outsider to all this, but I get the feeling that it may be impolitic of me to write this comment for reasons I don’t know. If so, please warn me and I can remove it.
Here are my impressions as an observer over the years. I don’t know what’s going on with OpenAI at the moment – just to preempt disappointment – but I remember what it was like in 2015 when it launched.
Maybe 2015? Elon Musk was said to have read Superintelligence. I was pleasantly surprised because I liked Superintelligence.
Late 2015:
OpenAI was announced and I freaked out. I’m a bit on the mild-mannered side, so my “freaking out” might’ve involved the phrase, “this seems bad.” That’s a very strong statement for me. Also this was probably in my inner dialogue only.
Gleb Tsipursky wrote a long open letter also making the case that this is bad. EY asked him not to publish it (further), and now I can’t find it anymore. (I think his exact words were, “Please don’t.”) I concluded that people must be trying to salvage the situation behind the scenes and that verbal hostility was not helping.
Scott Alexander wrote a similar post. I was confused as to why he published it when Gleb wouldn’t? Maybe Scott didn’t ask? In any case, I was glad to have an article to link to when people asked why I was freaking out about OpenAI.
Early 2016: Bostrom’s paper on openness was posted somewhere online or otherwise made accessible enough that I could read it. I didn’t learn anything importantly new from it, so it seemed to me that those were cached thoughts of Bostrom’s that he had specifically reframed to address openness to get it read by OpenAI people. (The academic equivalent of “freaking out”?) It didn’t seem like a strong critique to me, but perhaps that was the strategically best move to try to redirect the momentum of the organization into a less harmful direction without demanding that it change its branding? I wondered whether EY really didn’t contribute to it or whether he had asked to be removed from the acknowledgements.
I reread the paper a few years later together with some friends. One of them strongly disliked the paper for not telling her anything new or interesting. I liked the paper for being a sensible move to alleviate the risk from OpenAI. That must’ve been one of those few times when two people had completely opposite reactions to a paper without even disagreeing on anything about it.
March 30 (?), 2017: It was April 1 when I read a Facebook post to the effect that Open Phil had made a grant of $30m to OpenAI. OpenAI seemed clearly very bad to me, and $30m was way more than all previous grants, so my thoughts were almost literally, “C’mon, an April Fools’ has to be at least remotely plausible for people to fall for it!” I think I didn’t even click the link that day. Embarrassing. I actually quickly acknowledged that getting a seat on the board of the org to try to steer it into a less destructive direction had to be worth a lot (and that $30m wasn’t so much for OpenAI that it would greatly accelerate their AGI development). So after my initial shock had settled, I congratulated Open Phil on that bold move. (Internally. I don’t suppose I talk to people much.)
Later I learned that Paul Christiano and other people I trusted or who were trusted by people I trusted had joined OpenAI. That further alleviated my worry.
OpenAI went on to not publish some models they had generated, showing that they were backing away from their dangerous openness focus.
When Paul Christiano left OpenAI, I heard or read about it in some interview where he also mentioned that he’s unsure whether that’s a good decision on balance but that there are safety-minded people left at OpenAI. On the one hand I really want him to have the maximal amount of time available to pursue IDA and other ideas he might have. But on the other hand, his leaving (and mentioning that others left too) did rekindle that old worry about OpenAI.
I can only send hopes and well wishes to all safety-minded people who are still left at OpenAI!
Is Holden still on the board?
He is listed on the website:
> OpenAI is governed by the board of OpenAI Nonprofit, which consists of OpenAI LP employees Greg Brockman (Chairman & CTO), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Holden Karnofsky, Reid Hoffman, Shivon Zilis, Tasha McCauley, and Will Hurd.
It might not be up to date, though.
It can’t be up to date, since they recently announced that Helen Toner joined the board, and she’s not listed.
The website now lists Helen Toner but does not list Holden, so it seems he is no longer on the board.
That’s pretty wild, especially considering that getting Holden on the board was a major condition of Open Philanthropy’s $30,000,000 grant: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support#Details_on_Open_Philanthropy8217s_role
Though it also says the grant was for 3 years, so maybe it shouldn’t be surprising that his board seat only lasted that long.
Holden might have agreed to have Helen replace him. She used to work at Open Phil, too, so Holden probably knows her well enough. Open Phil bought a board seat, and it’s not weird for them to fill it as they see fit, without having it reserved only for a specific individual.
There is now some meta-discussion on LessWrong.
Happy to see they think this should be discussed in public! Wish there was more on questions #2 and #3.
Also very helpful to see how my question could have been presented in a less contentious way.