I wonder how the recent turn for the worse at OpenAI should make us feel about e.g. Anthropic and Conjecture and other organizations with a similar structure, or whether we should change our behaviour towards those orgs.
How much do we think that OpenAI's problems are idiosyncratic vs. structural? If e.g. Sam Altman is the problem, we can still feel good about peer organizations. If instead the root of the problem is the tension between investor concerns and safety concerns, we should be worried about whether peer organizations are going to be pushed down the same path sooner or later.
Are there any concerns we have with OpenAI that we should be taking this opportunity to put to its peers as well? For example, have peers been publicly asked whether they use non-disparagement agreements? I can imagine a situation where another org has really just never thought to use them, and we can use this occasion to encourage them to turn that into a public commitment.
On (1), these issues seem to be structural in nature, but exploited by idiosyncrasies. In theory, both OpenAI's non-profit board and Anthropic's LTBT should perform roughly the same oversight function. In reality, a combination of Sam's rebellion, Microsoft's financial domination, and the collective power of the workers turned the decision into one about whether OpenAI would continue independently with a new board or re-form under Microsoft. Anthropic is just as susceptible to this kind of coup (led by Amazon), but only if their leadership and their workers collectively want it, which, in all fairness, I think they are a lot less likely to.
But in some sense, no corporate structure can protect against all of the key employees organising to direct their productivity somewhere else. Only a state-backed legal structure really has that power. If you're worried about some bad outcome, I think you either have to trust that the Anthropic people have good intentions and won't sell themselves to Amazon, or advocate for legal restrictions on AI work.
If the problem is an employee rebellion, the obvious alternative would be to organize the company in a jurisdiction that allows noncompete agreements?
That's not as obvious, because the employees probably wouldn't work in that jurisdiction to begin with, or they'd just move to a competitor in a different jurisdiction. And even in jurisdictions that allow noncompetes, they're not as binding as you'd hope!
An industry norm around gardening leave, however, could catch on and play well (companies are already concerned about losing their trade secrets). I think it would apply some pressure against such a situation, but it would still be possible to engineer something similar if everyone wanted out of the LTBT (e.g. by just skipping the gardening leave and having the new org foot the legal bill).
Say more about Conjecture's structure?
By that I meant it's an org doing AI safety which also takes VC capital / has profit-making goals / produces AI products.