So it may be that we just have some different object-level views here. I don’t think I could stand behind the first paragraph of what you’ve written there. Here’s a rewrite that would be palatable to me:
OpenAI is a frontier AI company, aiming to develop artificial general intelligence (AGI). We consider poor navigation of the development of AGI to be among the biggest risks to humanity’s future. It is complicated to know how best to respond to this. Many thoughtful people think it would be good to pause AI development; others think that it is good to accelerate progress in the US. We think both of these positions are probably mistaken, although we wouldn’t be shocked to be wrong. Overall we think that if we were able to slow down across the board that would probably be good, and that steps which improve our understanding of the technology relative to absolute progress with the technology are probably good. In contrast to most of the jobs on our job board, therefore, it is not obviously good to help OpenAI with its mission. It may be more appropriate to think of working at OpenAI as similar to working at a large tobacco company, hoping to reduce the harm that the tobacco company causes, or leveraging this specific tobacco company’s expertise with tobacco to produce more competitive and less harmful variations of tobacco products.
I want to emphasise that this difference is mostly not driven by a desire to be politically acceptable (although the inclusion/wording of the “many thoughtful people …” clauses is partly there for reasons of courtesy), but rather by a desire not to give bad advice, nor to be overconfident on things.
… That paragraph doesn’t distinguish at all between OpenAI and, say, Anthropic. Surely you want to include some details specific to the OpenAI situation? (Or do your object-level views really not distinguish between them?)
I was just disagreeing with Habryka’s first paragraph. I’d definitely want to keep content along the lines of his third paragraph (which is pretty similar to what I initially drafted).
Yeah, this paragraph seems reasonable (I disagree, but like, that’s fine, it seems like a defensible position).
Yeah, same (although this focuses entirely on their harm as an AI organization, and not on their manipulative practices).
I think it leaves open the question of “what actually is the above-the-fold summary” (which’d be some kind of short tag).