What’s Going on With OpenAI’s Messaging?
This is a quickly written opinion piece on what I understand about OpenAI. I first posted it to Facebook, where it generated some discussion.
Some arguments that OpenAI is making, simultaneously:
OpenAI will likely reach and own transformative AI (useful for attracting talent to work there).
OpenAI cares a lot about safety (good for public PR and government regulations).
OpenAI isn’t making anything dangerous and is unlikely to do so in the future (good for public PR and government regulations).
OpenAI doesn’t need to spend significant resources on safety, and implementing safe AI won’t put it at any competitive disadvantage (important for the investors who own most of the company).
Transformative AI will be incredibly valuable for all of humanity in the long term (for public PR and developers).
People at OpenAI have thought long and hard about what will happen, and it will be fine.
We can’t predict concretely what transformative AI will look like or what will happen after (Note: Any specific scenario they propose would upset a lot of people. Vague hand-waving upsets fewer people).
OpenAI can be held accountable to the public because it has a capable board of directors overseeing Sam Altman (he said this explicitly in an interview).
The previous board scuffle was a one-time random event that was a very minor deal.
OpenAI has a nonprofit structure that provides an unusual focus on public welfare.
The nonprofit structure of OpenAI won’t inconvenience its business prospects or shareholders in any way.
The name “OpenAI,” which clearly comes from the early days when the mission was actually to make open-source AI, is an equally good name for where the company is now. (I don’t actually care about this, but I find it telling that the company doubles down on arguing that the name still applies.)
So they need to simultaneously say:
“We’re making something that will dominate the global economy and outperform humans at all capabilities, including military capabilities, but is not a threat.”
“Our experimental work is highly safe, but in a way that won’t actually cost us anything.”
“We’re sure that the long-term future of transformative change will be beneficial, even though none of us can know or outline specific details of what that might actually look like.”
“We have a great board of directors that provides accountability. Sure, a few months ago, the board tried to fire Sam, and Sam was able to overpower them within two weeks, but next time will be different.”
“We have all of the benefits of being a nonprofit, but we don’t have any of the costs of being a nonprofit.”
Meta’s messaging is clearer.
“AI development won’t get us to transformative AI, we don’t think that AI safety will make a difference, we’re just going to optimize for profitability.”
Anthropic’s messaging is a bit clearer.
“We think that AI development is a huge deal and correspondingly scary, and we’re taking a costlier approach accordingly, though not too costly such that we’d be irrelevant.”
This still requires a strange and narrow worldview to make sense, but it’s at least more coherent.
But OpenAI’s messaging has turned into a particularly tangled mess of conflicting promises. It’s the kind of political strategy that can work for a while, especially if you can have most of your conversations in private, but is really hard to pull off when you’re highly public and facing multiple strong competitive pressures.
If I were a journalist interviewing Sam Altman, I’d try to spend as much of the interview as possible pinning him down on these conflicting promises the company is making. Some questions I’d like him to answer include:
“Please lay out a specific, year-by-year story of one scenario you can imagine playing out over the next 20 years.”
“You say that you care deeply about long-term AI safety. What percentage of your workforce is solely dedicated to long-term AI safety?”
“You say that globally safe AGI deployment requires international coordination to go well. That coordination is happening slowly. Do your plans still work if international coordination fails? Explain what those plans would be.”
“What do the current prediction markets and top academics say will happen as a result of OpenAI’s work? Which clusters of these agree with your expectations?”
“Can you lay out any story at all for why we should now expect the board to do a decent job overseeing you?”
In interviews, Sam, like many public figures, likes to shift specific questions into vague generalities and value statements. A great journalist would fight this, force him to say nothing but specifics, and then just have the interview end.
I think reasonable readers should learn, and are quickly learning, to just stop listening to this messaging. Most organizational messaging is dishonest to some degree, but at least it isn’t self-contradictory. Sam’s been unusually good at seeming genuine, but at this point, the set of incoherent promises seems too baffling to take literally.
Instead, I think the thing to do is to ignore the noise and look only at the actions actually taken. Those actions seem pretty straightforward to me. OpenAI is taking the actions you’d expect from any conventional high-growth tech startup. From its actions, it comes across a lot like:
“We think AI is a high-growth area that’s not actually that scary. It’s transformative in the way Google was, not in the way the Industrial Revolution was. We need to focus solely on building a large moat (i.e., a monopoly) in a competitive ecosystem, like other startups do.”
To me, OpenAI now seems almost exactly like a traditional high-growth tech startup. The main unusual things about it are that:
It’s in an area that some people (though not OpenAI’s management) think is unusually high-risk,
Its messaging is unusually lofty and conflicting, even for a Silicon Valley startup, and
It started out under an unusual nonprofit setup, which now barely seems relevant.