This could also end up having really bad consequences for the goals of EA, so it’s perhaps similar to FTX in that way (but things are still developing and it might somehow turn out well).
Or maybe you feel like the board displayed inexperience and that they were in over their heads. I can probably get behind that based on how things look right now (but there’s a chance we learn more details later that put things into a different light).
Still, I feel like inexperience is only unforgivable if it comes combined with hubris. Many commenters seem to think that there must have been hubris involved on the board’s part, and I suspect that’s why people seem so mad about this: “Why else would the board have the audacity to oust such a successful and respected CEO if they can’t point to any smoking-gun-type thing that justifies the firing to the world?”
But notice how that attitude – being risk-averse and inclined to just let the experienced tech CEO do his thing without pushback (and possibly amass leverage over the rest of the company and the board by starting or investing in compute startups or the like, as some of the rumors seem to indicate) – is also dangerous and potentially “irresponsible.” After forming concerns about his suitability, it’s not the by-default safe option to let Sam Altman continue to cash in on the reputational benefits of running OpenAI with the seal of approval from the public-good, non-profit, beneficial-mission-focused board structure that OpenAI has installed. From the very start, this board structure has served as a kind of seal of approval, guaranteeing a significant amount of goodwill from people who would otherwise look at OpenAI skeptically and think “these tech people put the world at risk to attain power/money/the top spot in history.” EAs were arguably quite crucial (via getting Elon Musk to think about AI risk, as well as through some other pathways) in setting up OpenAI with such a board structure and the reputational protection it confers against scrutiny from a concerned public (especially now that AI risk is gaining traction after ChatGPT spooked a bunch of people). So, I mainly want to point out that it’s not obviously “the responsible choice” to refrain from stepping in when Sam Altman would otherwise de facto keep benefiting from this board structure’s seal of approval, especially if the board that was put in place no longer feels comfortable with his leadership.
To be clear, even if I’m right about the above, this isn’t to say that there wouldn’t have been better ways to handle this. Also, I want to flag that I don’t know what the board members were actually thinking – maybe they thought of this more as a coup and less as “if we put on our board-member hats and try to serve our role as well as possible, what should we do?” In that case, I would disapprove. I don’t know which one applies.