I’m excited that Zach is stepping into this role. Zach seems substantially better than my expectations for the new CEA CEO, and I expect the CEO hiring committee + Ben + the EV board had a lot to do with that (and probably lots of other people at CEA that I don’t know about)!
Most CEA users and EA community members probably don’t know Zach, so I thought it would be helpful to share some of my thoughts on him and this position (though I don’t know Zach especially well, and these are just quick subjective takes). Thanks to @Ben_West for the nudge to do this.
Quick takes on my impression of Zach and his fit for this role
Zach seems very strong on typical management consultant skills, e.g. communication skills, professionalism, creative problem-solving in typical professional environments, and having difficult conversations.
One aptitude that I would bet Zach is strong on, and that I think is very neglected in EA organisations, is developing and mentoring mid-level staff. Many EA orgs have bright, agentic, competent young people in fairly high-responsibility roles. While you can learn a lot from these roles that you might not learn in a traditional organisation, I worry (particularly for myself) that people in them may be missing out on a lot of the gains of a more typical management/organisational structure. I’m pretty excited for people like Jessica McCurdy, who runs the EA groups team, to have a solid manager like Zach (Edit: I didn’t mean to imply here that leadership at the time of writing weren’t providing good management; just that, relative to other people I thought might end up in this role, I expect Zach to be quite strong at managing mid-level people). I’d guess that CEA will become a substantially more attractive place to work (for senior people) because of Zach.
While I don’t have much insight into Zach’s vision for CEA, I remember thinking that Zach seemed sharp, thoughtful, and reasonable in conversations about EA and CEA. I also got the sense that he has thought about some of EA’s more intellectual/philosophical parts—it’s imo fairly rare to find people who can do both the philosophy part and the execution part of EA, but both parts seem important for this role.
I do have some reservations about Zach entering this role, related to the professional relationships and responsibilities that he holds.[1]
Zach previously worked at Open Phil; this relationship seems particularly important for the future of CEA, as Open Phil is where CEA gets most of its funding. I think it’s reasonable for people to be increasingly concerned about the epistemic influence of Open Phil on EA, and having a former senior Open Phil employee, who is probably still good friends with Open Phil leadership, meaningfully reduces the epistemic independence of CEA. It could also make it hard for CEA or the EV board to push Zach out if he turns out to be a bad fit for CEA—and given CEA’s history, I think this is worth bearing in mind (though I’d guess overall that Zach is net positive for CEA governance).
Zach is on Anthropic’s Long-Term Benefit Trust. It’s not super clear what this means, particularly in light of recent events with the OpenAI board, but I am a bit concerned about the way that EA views Anthropic, and that the CEO of CEA being affiliated with Anthropic could make it more difficult for people within EA to speak out against Anthropic. Managing social/professional relationships within EA is challenging,[2] and I’d guess that overall this cost is worth paying to have a CEO of Zach’s calibre—but I think it’s a meaningful cost that people should be tracking.
In an ideal world, I would prefer that Zach[3] were less strongly connected to Open Phil (weak confidence) and also less strongly connected to Anthropic (medium/high confidence).
CEA’s future strategy
I don’t have many thoughts at this time on what changes to CEA’s strategy I’d like to see. To me it seems like CEA is at a bit of a crossroads in both a strategic and organisational sense:
Organisational—various teams seem to want to spin out and be their own thing where the project leads get more of a say over the direction of their project, and they have less baggage from being part of CEA. I’d be excited about Zach working out whether this is the right move for individual projects and increasing the value that projects get from being part of CEA.
Strategic—many CEA staff seem to have become most concerned about risk from AI and naturally want their work to focus on this topic. At the same time, it seems like relatively little money is available to the non-AI parts of EA for meta work.
I am not sure what (if anything) should change on the funding side, but on the CEA side I’d be excited about:
Zach/CEA figuring out a coherent vision for CEA that is transparent about its motivations and outcomes, that CEA staff are excited about, and that doesn’t leave various parts of the EA community feeling isolated.
Zach figuring out how to increase the value that CEA’s projects get from being part of CEA, or helping them spin out if that’s what they want to do.
Zach/CEA figuring out how to leverage and improve the CEA brand so that it doesn’t restrict the actions of various projects (and ideally is an asset) and doesn’t create negative externalities for organisations outside of CEA.
FYI, I didn’t run this by Zach, but as it’s not really a criticism that could affect his reputation and is mostly just pointing at publicly available information, it didn’t seem warranted to me.
For example, I live with two people who work at Anthropic, and in general, living with people probably has substantive epistemic effects.
Edit 2024-Apr-16: I meant the CEO of EV, as opposed to Zach specifically.
This is a very interesting point given that it seems that Helen’s milquetoast criticism of OpenAI was going to be used as leverage to kick her off the OpenAI board, and that historically EV has aggressively censored its staff on important topics.
What are some instances of this: “historically EV has aggressively censored its staff on important topics”?
I’m not sure that I would use the word censoring, but there were strict policies around what kinds of communications various EV orgs could do around FTX for quite a long time (though I don’t think they were particularly unusual for an organisation of EV’s size in a similar legal situation).
EV was fine with me publishing this. My experience was that it was kind of annoying to publish FTX stuff because you had to get review first, but I can’t recall an instance of being prevented from saying something.
“Aggressively censored its staff” doesn’t reflect my experience, but maybe reflects others’, not sure.
In fairness, I was prevented from posting a bunch of stuff and spent a long time (like tens of hours) workshopping text until legal counsel were happy with it. At least in one case I didn’t end up posting the thing because it didn’t feel useful after the various edits, and by then it had been a long time since the event the post was about.
I think in hindsight the response (with the information I think the board had) was probably reasonable—but if similar actions were to be taken by EV when writing a post about Anthropic, I’d be pretty upset about that. I wouldn’t use the word censoring in the real FTX case, but in the hypothetical Anthropic case I might.
Reasonable because you were all the same org, or reasonable even if EA Funds was its own org?
I think reasonable even if EA Funds was its own org.
I think it’s worth not entangling the word ‘censorship’ with whether it is justified. During the Second World War the UK engaged in a lot of censorship, to maintain domestic morale and to prevent the enemy from getting access to information, but this seems to me to have been quite justified, because the moral imperative for defeating Germany was so great.
Similarly, it seems quite possible to me that in the future CEA might be quite justified in instituting AI-related censorship, preventing people from publishing writing that disagrees with the house line. It seems possible to me that the FTX- and EV-related censorship was justified, though it is hard to tell, given that EV have never really explained their reasons, and I think the policy certainly had very significant costs. In the wake of FTX’s collapse there was a lot of soul-searching and thinking about how to continue in the EA community, and we were deprived of input from many of the best-informed and most thoughtful people. My guess is this censorship was especially onerous on more junior employees, for whom it was harder to justify the attorney review time, leading to a default answer of ‘no’.
So the reason I mentioned it wasn’t that censorship is always a bad choice, or that, conditional on censorship being imposed, it is likely to be a mistake given the situation. The argument is that who your leader is changes the nature of the situation, changing whether or not censorship is required, and the nature of that censorship. As an analogy, if Helen had known what was going to come, I imagine she might have written that report quite differently—with good reason. A hypothetical alternative CSET with a different leader would not have faced such pressures.
I think it is highly likely that imposing a preclearance requirement on employees was justified. It would be extremely difficult for an attorney to envision everything that an employee might conceivably write and determine, without even seeing it, whether it would cause problems. Even if the attorney could, they would have to update their view of the universe of possible writings every time the situation materially changed. I just don’t think a system without a preclearance requirement would have been workable.
It’s more likely that some of the responses to proposed writings were more censorious than they should have been. That is really hard to determine, as we’ll likely never know the attorney’s reasoning (which is protected by privilege).
The wording of what Larks said makes it seem like, over a number of years, staff were prevented from expressing their true opinions on central EA topics.
Caleb—thanks for this helpful introduction to Zach’s talents, qualifications, and background—very useful for those of us who don’t know him!
I agree that EA organizations should try very hard to avoid entanglements with AI companies such as Anthropic—however well-intentioned they seem. We need to be able to raise genuine concerns about AI risks without feeling beholden to AI corporate interests.