This reminds me of another related tension I’ve noticed. I think that OP really tries to not take much responsibility for EA organizations, and I believe that this has led to something of a vacuum of leadership.
I think that OP functionally has great power over EA.
In many professional situations, power comes with corresponding duties and responsibilities.
CEOs have a lot of authority, but they are also expected to be agentic, to keep on the lookout for threats, to be in charge of strategy, to provide guidance, and to make sure many other things are carried out.
A president clearly has a lot of power, and that goes hand-in-hand with great expectations and duties.
There’s a version of EA funding where the top funders take on both leadership and corresponding responsibilities. These people ultimately have the most power, so arguably they’re best positioned to take on leadership duties and responsibilities.
But I think nonprofit funders often try not to take on much responsibility, and I don’t think OP is an exception. I’d also flag that EA Funds and SFF seem to be in a similar boat, though they are smaller.
My impression is that OP explicitly tries not to claim any responsibility for the EA ecosystem, and correspondingly argues that it’s not particularly accountable to EA community members. Their role, as I understand it, is often meant to be narrow. This varies by OP team, but I think it’s true of the “GCR Capacity Building” team, which is closest to many “EA” orgs. I think this team mainly thinks of itself as a group responsible for making good decisions on the specific applications that hit their desk.
Again, this is a far narrower mandate than any conventional CEO would have.
If we had a “CEO or President” that were both responsible for and accountable to these communities, I’d expect things like:
1. A great deal of communication with these communities.
2. Clear and open leadership structures and roles.
3. A good deal of high-level strategizing.
4. Agentic behavior, like taking significant action to “make sure specific key projects happen.”
5. When there are failures, acknowledgement of said failures, as well as plans to fix or change.
I think we basically don’t have this, and none of the funders would claim to fill this role.
So here’s a question: “Is there anyone in the EA community who’s responsible for these sorts of things?”
I think the first answer I’d give is “no.” The second answer is something like, “Well, CEA is sort of responsible for some parts of this. But CEA effectively reports to OP, given its funding. CEA has very limited power of its own, it has repeatedly tried to express the limits of its power, and it’s gone through lots of management transitions.”
In a well-run bureaucracy, I imagine that key duties would be clearly delegated to specific people or groups, and that those groups would have the corresponding powers necessary to actually carry them out well.
The ecosystem of EA organizations is not a well-organized bureaucracy, but that doesn’t mean there aren’t a lot of important duties to be performed. In my opinion, the fact that EA consists of a highly fragmented set of small organizations was functionally a decision by the funders (at the very least, they had a great deal of influence on it), so I’d hope that they would have thoughts on how to make sure the key duties get done somehow.
This might seem pretty abstract, so I’ll try coming up with some more specific examples:
1. Say a tiny and poorly-resourced org gets funded. They put together a board of their friends (the only people available), then proceed to significantly emotionally abuse their staff. Who is ultimately responsible here? I’d expect the funders would not at all want to take responsibility for this.
2. Before the FTX Future Fund blew up, I assumed that EA leaders had vetted it. Later I found out that OP purposefully tried to keep its distance and not get involved (in this case meaning that they didn’t investigate or warn anyone), in part because they didn’t see it as their responsibility, and in part because they claimed that, since the FTX Future Fund was a “competitor”, it wasn’t right for them to get involved. From what I can tell now, it was no one’s responsibility to vet the FTX Future Fund team or the FTX organization. You might have assumed CEA would do this, but CEA was funded by FTX and previously even had SBF as a board member; it was clearly not powerful and independent enough for the job.
3. There are many people in the EA scene who invest large amounts of time and resources preparing for careers that only exist under the OP umbrella. Many or all of their future jobs will be under it. At the same time, it’s easy to imagine that they have almost no idea what the power structures at the top of this umbrella are like. It could change leadership or direction at any time, with very little warning.
4. There were multiple “EAs” on the board of OpenAI during that board dispute. That event seemed like a mess, and it negatively affected a bunch of other EA organizations. Was that anyone’s responsibility? Can we have any assurance that community members will do a better job next time (if there is a next time)?
5. I’m not sure that many people in positions of power are spending much time thinking about long-term strategic issues for EA. It seems very easy for me to imagine large failures, and opportunities we’re missing out on. The same is true for the nonprofit EA AI safety landscape: many of the specific organizations are too small and spread out to be very agentic, especially when it comes to dealing with diverse and private information. I’ve heard good things recently about Zach Robinson at CEA, but I’d also note that CEA has historically been highly focused on a few long-running projects (EAG, the EA Forum, Community Health), with fairly limited strategic or agentic capacity, plus heavy reliance on OP.
6. Say OP decides to shut down the GCR Capacity Building team one day, giving two years’ notice. I’d expect this to be a major mess. Few people outside OP understand how OP’s internal decisions get made, so it’s hard for other EA members to see something like this coming or gauge how likely it is. My guess is that they wouldn’t do this, but I have limited confidence. As such, it’s hard for me to suggest that people make long-term plans (3+ years) in this area.
7. We know that OP generally tries to maximize expected value. What happens when narrow EV optimization conflicts with honesty and other cooperative values? Would OP make the same choices that other EAs would want? I believe that FTX justified its bad actions using utilitarianism, for instance, and lots of businesses and nonprofits carry out highly Machiavellian and dishonest actions to advance their interests. Is it possible that EAs working under the OP umbrella are unknowingly supporting actions they wouldn’t condone? It’s hard to know without much transparency and evaluation.
On the plus side, I think OP and CEA have improved a fair bit on this sort of thing in the last few years. OP seems to be working to ensure that grantees meet certain basic managerial criteria. New hires and operational improvements have come in, which seems to have helped.
I’ve previously discussed my thinking on the potential limitations that come from having many small orgs here. Also, I remember that Oliver Habryka has repeatedly mentioned the lack of leadership in this scene; I think this topic is part of what he was referring to.
Ultimately, my guess is that OP has certain goals it wants to achieve, and it’s unlikely that it or the other funders will want to take on many of the responsibilities I suggest here.
Given that, I think it would be useful for people in the EA ecosystem to understand this and respond accordingly. Our funding situation really needs diversification, and funders willing to be more agentic in crucial areas where leadership is currently lacking could do a lot of good. I expect that when it comes to “senior leadership”, there are significant gains to be made, if the right people and resources can come together.