This post seems to be making basic errors (in the opening quote, Eliezer Yudkowsky, a rationalist-associated public figure involved in AI safety, is complaining about the dynamics of Musk at the creation of OpenAI, not recent events or increasing salience). It is hard to tell if the OP has a model of AI safety or insight into what the recent org dynamics mean, both of which are critical to his post having meaning.
There’s somewhat more discussion here on LessWrong.
Also relevant to OpenAI and safety (differential progress?) is the discussion in the AMA by Paul Christiano, formerly of OpenAI. This gives one worldview/model for why increasing salience and openness is useful.
Some content from the AMA, copied and pasted below for convenience:
> dynamics of Musk at the creation of OpenAI, not recent events or increasing salience
Thanks, this is a good clarification.
> It is hard to tell if the OP has a model of AI safety or insight into what the recent org dynamics mean, both of which are critical to his post having meaning.
You’re right that I lack insight into what the recent org dynamics mean; this is precisely why I’m asking if anyone has more information. As I write at the end:
> To be clear, I’m not advocating any of this. I’m asking why you aren’t. I’m seriously curious and want to understand which part of my mental model of the situation is broken.
The quotes from Paul are helpful; I don’t read LW much and must have missed the interview, so thanks for adding them. Having said that, if you see u/irving’s comment below, I think it’s pretty clear that there are good reasons for researchers not to speak up too loudly and shit-talk their former employer.