Yeah, that’s reasonable; as of 5:36pm PST, November 18, 2023, it still seems like a good bet. I’m definitely worried about either Sam Altman and Greg Brockman starting a new, less safety-focused lab, or Sam+Greg somehow returning to OpenAI and removing the safety-focused people from the board. Even so, it seems pretty good to have safety-focused people with some influence over OpenAI. I’m a bit confused about situations like: “Yes, it was good to get influence, but it turned out you made a bad tactical mistake and ended up making things worse.”