Zach is on Anthropic’s Long-Term Benefit Trust. It’s not super clear what this means, particularly in light of recent events with the OpenAI board, but I am a bit concerned about the way that EA views Anthropic, and that the CEO of CEA being affiliated with Anthropic could make it more difficult for people within EA to speak out against Anthropic.
This is a very interesting point given that it seems that Helen’s milquetoast criticism of OpenAI was going to be used as leverage to kick her off the OpenAI board, and that historically EV has aggressively censored its staff on important topics.
What are some instances of this: “historically EV has aggressively censored its staff on important topics”?
I’m not sure that I would use the word censoring, but there were strict policies around what kinds of communications various EV orgs could put out around FTX for quite a long time (though I don’t think they were particularly unusual for an organisation of EV’s size in a similar legal situation).
EV was fine with me publishing this. My experience was that it was kind of annoying to publish FTX stuff because you had to get it reviewed first, but I can’t recall an instance of being prevented from saying something.
“Aggressively censored its staff” doesn’t reflect my experience, but maybe reflects others’, not sure.
In fairness, I was prevented from posting a bunch of stuff and spent a long time (like tens of hours) workshopping text until legal counsel were happy with it. In at least one case I didn’t end up posting the thing, because after the various edits it didn’t feel useful, and by then a long time had passed since the event the post was about.
I think in hindsight the response (with the information I think the board had) was probably reasonable, but if EV were to take similar actions over a post about Anthropic I’d be pretty upset about that. I wouldn’t use the word censoring in the real FTX case, but I don’t know, in the hypothetical Anthropic case I might?
Reasonable because you were all the same org, or reasonable even if EA Funds was its own org?
I think reasonable even if EA Funds was its own org.
I think it’s worth not entangling the word ‘censorship’ with whether it is justified. During the Second World War the UK engaged in a lot of censorship, to maintain domestic morale and to prevent the enemy from getting access to information, but this seems to me to have been quite justified, because the moral imperative for defeating Germany was so great.
Similarly, it seems quite possible to me that in the future CEA might be quite justified in instituting AI-related censorship, preventing people from publishing writing that disagrees with the house line. It seems possible to me that the FTX and EV related censorship was justified, though it is hard to tell, given that EV have never really explained their reasons, and I think the policy certainly had very significant costs. In the wake of FTX’s collapse there was a lot of soul-searching and thinking about how to continue in the EA community, and we were deprived of input from many of the best-informed and most thoughtful people. My guess is this censorship was especially onerous on more junior employees, for whom it was harder to justify the attorney review time, leading to a default answer of ‘no’.
So my reason for mentioning it wasn’t that censorship is always a bad choice, or that, conditional on censorship being imposed, it is likely to be a mistake given the situation. The argument is that who your leader is changes the nature of the situation, and so changes whether censorship is required and what form it takes. As an analogy, if Helen had known what was to come, I imagine she might have written that report quite differently, with good reason. A hypothetical alternative CSET with a different leader would not have faced such pressures.
“It seems possible to me that the FTX and EV related censorship was justified, though it is hard to tell, given that EV have never really explained their reasons, and I think the policy certainly had very significant costs.”
I think it is highly likely that imposing a preclearance requirement on employees was justified. It would be extremely difficult for an attorney to envision everything that an employee might conceivably write and determine, without even seeing it, whether it would cause problems. Even if the attorney could, they would have to update their view of the universe of possible writings every time the situation materially changed. I just don’t think a system without a preclearance requirement would have been workable.
It’s more likely that some of the responses to proposed writings were more censorious than they should have been. That is really hard to determine, as we’ll likely never know the attorney’s reasoning (which is protected by privilege).
The wording of what Larks said makes it seem like, over a number of years, staff were prevented from expressing their true opinions on central EA topics.