I think it would be really useful for there to be more public clarification on the relationship between effective altruism and Open Philanthropy.
My impression is that:
1. OP is the large majority funder of most EA activity.
2. Many EAs assume that OP is a highly EA organization, including at the top.
3. OP explicitly tries not to take responsibility for EA, and does not claim to be highly EA itself.
4. EAs somewhat assume that OP leaders are partially accountable to the EA community, but OP leaders would mostly disagree.
5. From the point of view of many EAs, EA represents something like a community of people with similar goals and motivations. There are some expectations that people will look out for each other.
6. From the point of view of OP, EA is useful insofar as it provides valuable resources (talent, sometimes ideas and money).
My impression is that OP basically treats the OP-EA relationship as a set of transactions, each with positive expected value. Like, they would provide a $20k grant to a certain community if they expect said community to translate into over $20k of value via certain members who would soon take on jobs at certain companies. Perhaps in part because there are some overlapping friendships, I think that OP staff often explicitly try to only fund EAs in ways that make the clearest practical sense for specific OP goals, like hiring AI safety researchers.
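To make the transactional framing concrete, here's a minimal, purely illustrative expected-value sketch of the sort of calculation I have in mind. All probabilities, outcome categories, and dollar figures are hypothetical placeholders of mine, not anything OP has published:

```python
# Purely illustrative expected-value sketch of a "transactional" grant decision.
# All figures below are hypothetical; this is not based on any actual OP model.

grant_cost = 20_000  # hypothetical grant to a local EA community

# Hypothetical outcomes the funder might care about, with guessed probabilities
# and guessed dollar-equivalent values relative to the funder's own goals.
outcomes = [
    {"label": "member takes an AI safety research job", "p": 0.15, "value": 150_000},
    {"label": "member makes a useful career pivot", "p": 0.30, "value": 20_000},
    {"label": "no measurable effect", "p": 0.55, "value": 0},
]

expected_value = sum(o["p"] * o["value"] for o in outcomes)
print(f"Expected value: ${expected_value:,.0f} vs. cost: ${grant_cost:,}")
# With these made-up numbers the grant clears the bar (EV of $28,500 > $20,000).
```

The point isn't the specific numbers; it's that anything the funder can't translate into a term in that sum (like diffuse, long-term community health) tends to fall out of the decision.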
In comparison, I think a lot of EAs think of EA as some kind of holy-ish venture tied to an extended community of people who will care about each other. To them, EA itself is an incredibly valuable idea and community that has the potential to greatly change the world. (I myself am more in this latter camp.)
So on one side, we have a group that often views EA through reductive lenses, like as a specific recruiting arm. And on the other side, we have a group for whom EA is more of a crucial cultural movement.
I think it’s very possible for both sides to live in unison, but I think at the moment there’s a lot of confusion about this. I think a lot of EAs assume that OP shares a lot of the same beliefs they do.
If it is the case that OP is fairly narrow in its views and goals regarding EA, I would hope that other EAs realize there might be a gap in [leaders+funders who care about EA for the reasons that EAs care about EA]. It's weird and awkward to have your large-majority funder be a group that treats you very reductively.
As a simple example, if one thinks of EA as something akin to a key social movement/community, one might care a lot about:
- The long-term health and growth of EA
- The personal health of specific EAs, not just the very most productive ones
- EA being an institution known for being highly honest and trustworthy
But if one thinks of EA through a reductive lens, I'd expect them to care more about:
- Potential short-term hires or funding
- Potential liabilities
- Community members not being too annoying with criticisms and stuff
I've met a few people who felt very betrayed by EA—I suspect that the above confusion is one reason why. I think that a lot of EA recruiters argue that EA represents a healthy community/movement. This seems like the most viral message, so it's not a surprise that people doing recruiting and promotion would lean on this idea. But if much of EA funding is really functionally treating it as a recruiting network, then that would be a disconnect.
Relatedly, I've thought that "EA Global has major community and social movement vibes, but has the financial incentives in line with a recruiting fair."
Both perspectives can coexist, but greater clarity seems very useful, and might expose some important gaps.
Ozzie my apologies for not addressing all of your many points here, but I do want to link you to two places where I’ve expressed a lot of my takes on the broad topic:
On Medium, I talk about how I see the community at some length. tldr: aligned around the question of how to do the most good vs the answer, heterogeneous, and often specifically alien to me and my values.
On the forum, I talk about the theoretical and practical difficulties of OP being "accountable to the community" (and here I also address an endowment idea specifically in a way that people found compelling). Similarly from my POV it's pretty dang hard to have the community be accountable to OP, in spite of everything people say they believe about that. Yes we can withhold funding, after the fact, and at great community reputational cost. But I can't e.g. get SBF to not do podcasts nor stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID. (On Bsky, they blame EAs for the whole endeavor)
Minor points, from your comment:
I believe most EAs would agree these examples should never have been in OP’s proverbial sphere of responsibility.
There are other examples we could discuss regarding OP’s role (as makes sense, no organization is perfect), but that might distract from the main topic: clarity on the OP-EA relationship and the mutual expectations between parties.
It seems obvious that such Bsky threads contain significant inaccuracies. The question is how much weight to give such criticisms.
My impression is that many EAs wouldn’t consider these threads important enough to drive major decisions like funding allocations. However, the fact you mention it suggests it’s significant to you, which I respect.
About the OP-EA relationship—if factors like "avoiding criticism from certain groups" are important for OP's decisions, saying so clearly is the kind of thing that seems useful. I don't want to get into arguments about whether it should[1]; the first thing is just to understand that that's where a line is.
More specifically, I think these discussions could be useful—but I’m afraid they will get in the way of the discussions of how OP will act, which I think is more important.
Ozzie I’m not planning to discuss it any further and don’t plan to participate on the forum anymore.
Please come back. I can’t say that we agree on very much, but you are often a voice of reason and your voice will be missed.
This is probably off-topic, but I was very surprised to read this, given how much he supported the Harris campaign, how much he gives to reduce global poverty, and how similar your views are on e.g. platforming controversial people.
Presumably https://reflectivealtruism.com/category/billionaire-philanthropy/?
Just flagging that the EA Forum upvoting system is awkward here. This comment says:
1. “I can’t say that we agree on very much”
2. “you are often a voice of reason”
3. “your voice will be missed”
As such, I’m not sure what the Agree / Disagree reacts are referring to, and I imagine similar for others reading this.
This isn’t a point against David, just a challenge with us trying to use this specific system.
This seems like quite a stretch.
Thanks for the response here! I was not expecting that.
This is a topic that can become frustratingly combative if not handled gracefully, especially in public forums. To clarify, my main point isn’t disagreement with OP’s position, but rather I was trying to help build clarity on the OP-EA relationship.
Some points:
1. The relationship between the "EA Community" and OP is both important (given the resources involved) and complex[1].
2. In such relationships, there are often unspoken expectations between parties. Clarity might be awkward initially but leads to better understanding and coordination long-term.
3. I understand you’re uncomfortable with OP being considered responsible for much of EA or accountable to EA. This aligns with the hypotheses in my original comment. I’m not sure we’re disagreeing on anything here.
4. I appreciate your comments, though I think many people might reasonably still find the situation confusing. This issue is critical to many people’s long-term plans. The links you shared are helpful but leave some uncertainty—I’ll review them more carefully.
5. At this point, we might be more bottlenecked by EAs analyzing the situation than by additional writing from OP (though both are useful). EAs likely need to better recognize the limitations of the OP-EA relationship and consider what that means for the community.
6. When I asked for clarification, I imagined that EA community members working at the OP-EA intersection would be well positioned to provide insight. One challenge is that many people feel uncomfortable discussing this relationship openly due to the power imbalance.[2] Beyond the funding issue (OP funds EA), there's also the fact that OP has better ways of communicating privately[3]. (This is also one reason why I'm unusually careful and long-winded in these discussions; sorry if it comes across as harder to read.) That said, comment interactions and assurances from OP do help build trust.
there’s a fair bit of nuance involved—I’m sure that you have noticed confusion on the side of EAs at least
For example, say an EA community member writes something that upsets someone at OP. Then that person holds a silent grudge, decides they don't like that person, and doesn't fund them later. This is very human, and there's a clear information asymmetry: the EA community member would never know if this happens, so it would make sense for them to be extra cautious.
People at OP can confidentially discuss with each other how to best handle their side of the OP-EA relationship. But in comparison, EA community members mainly have the public EA Forum, so there’s an inherent disadvantage.
I’m interested in hearing from those who provided downvotes. I could imagine a bunch of reasons why one might have done so (there were a lot of points included here).
(Upon reflection, I don’t think my previous comment was very good. I tried to balance being concise, defensive, and comprehensive, but ended up with something confusing. I’d be happy to clarify my stance on this more at any time if asked, though it might well be too late now for that to be useful. Apologies!)
I’m out of the loop, who’s this allegedly EA person who works at DOGE?
Many people claim that Elon Musk is an EA person; @Cole Killian has an EA Forum account and mentioned effective altruism on his (now deleted) website; and Luke Farritor won the Vesuvius Challenge mentioned in this post (he also allegedly wrote or reposted a tweet mentioning effective altruism, but I can't find any proof and people are skeptical).
This reminds me of another related tension I’ve noticed. I think that OP really tries to not take much responsibility for EA organizations, and I believe that this has led to something of a vacuum of leadership.
I think that OP functionally has great power over EA.
In many professional situations, power comes with corresponding duties and responsibilities.
CEOs have a lot of authority, but they are also expected to be agentic, to keep on the lookout for threats, to be in charge of strategy, to provide guidance, and to make sure many other things are carried out.
The President clearly has a lot of powers, and that goes hand-in-hand with great expectations and duties.
There’s a version of EA funding where the top funders take on both leadership and corresponding responsibilities. These people ultimately have the most power, so arguably they’re best positioned to take on leadership duties and responsibilities.
But I think nonprofit funders often try not to take much in terms of responsibilities, and I don’t think OP is an exception. I’d also flag that I think EA Funds and SFF are in a similar boat, though these are smaller.
My impression is that OP explicitly tries not to claim any responsibility for the EA ecosystem / environment, and correspondingly argues it's not particularly accountable to EA community members. Their role, as I understand it, is often meant to be narrow. This varies by OP team, but I think it's true for the "GCR Capacity Building" team, which is closest to many "EA" orgs. I think this team mainly thinks of itself as a group responsible for making good decisions on a bunch of specific applications that hit their desk.
Again, this is a far narrower mandate than any conventional CEO would have.
If we had a “CEO or President” that were both responsible for and accountable to these communities, I’d expect things like:
1. A great deal of communication with these communities.
2. Clear and open leadership structures and roles.
3. A good deal of high-level strategizing.
4. Agentic behavior, like taking significant action to “make sure specific key projects happen.”
5. When there are failures, acknowledgement of said failures, as well as plans to fix or change.
I think we basically don’t have this, and none of the funders would claim to be this.
So here’s a question: “Is there anyone in the EA community who’s responsible for these sorts of things?”
I think the first answer I'd give is "no." The second answer is something like, "Well, CEA is sort of responsible for some parts of this. But CEA really reports to OP given their funding. CEA has very limited power of its own. And CEA has repeatedly tried to express the limits of its power, plus it's gone through lots of management transitions."
In a well-run bureaucracy, I imagine that key duties would be clearly delegated to specific people or groups, and that groups would have the corresponding powers necessary to actually do a good job at them. You want key duties to be delegated to agents with the power to carry them out.
The ecosystem of EA organizations is not a well-organized bureaucracy. But that doesn’t mean there aren’t a lot of important duties to be performed. In my opinion, the fact that EA represents a highly-fragmented set of small organizations was functionally a decision by the funders (at least, they had a great deal of influence on this), so I’d hope that they would have thoughts on how to make sure the key duties get done somehow.
This might seem pretty abstract, so I’ll try coming up with some more specific examples:
1. Say a tiny and poorly-resourced org gets funded. They put together a board of their friends (the only people available), then proceed to significantly emotionally abuse their staff. Who is ultimately responsible here? I'd expect the funders would not at all want to take responsibility for this.
2. Before the FTX Future Fund blew up, I assumed that EA leaders had vetted it. Later I found out that OP purposefully tried to keep its distance and not get involved (in this case meaning that they didn't investigate or warn anyone), in part because they didn't see it as their responsibility, and claimed that because FTX Future Fund was a "competitor", it wasn't right for them to get involved. From what I can tell now, it was no one's responsibility to vet the FTX Future Fund team or FTX organization. You might have assumed CEA, but CEA was funded by FTX and previously even had SBF as a board member—they were clearly not powerful and independent enough for this.
3. There are many people in the EA scene who invest large amounts of time and resources preparing for careers that only exist under the OP umbrella. Many or all of their future jobs will be under this umbrella. At the same time, it’s easy to imagine that they have almost no idea what the power structures at the top of this umbrella are like. This umbrella could change leadership or direction at any time, with very little warning.
4. There were multiple “EAs” on the board of OpenAI during that board member spat. That event seemed like a mess, and it negatively influenced a bunch of other EA organizations. Was that anyone’s responsibility? Can we have any assurances that community members will do a better job next time? (if there is a next time)
5. I’m not sure if many people at all, in positions of power, are spending much time thinking about long-term strategic issues for EA. It seems very easy for me to imagine large failures and opportunities we’re missing out on. This also is true for the nonprofit EA AI Safety Landscape—many of the specific organizations are too small and spread out to be very agentic, especially in cases of dealing with diverse and private information. I’ve heard good things recently about Zach Robinson at CEA, but also would note that CEA has historically been highly focused on some long-running projects (EAG, the EA Forum, Community Health), with fairly limited strategic or agentic capacity, plus being heavily reliant on OP.
6. Say OP decides to shut down the GCR Capacity Building team one day, and gives two years' notice. I'd expect this to be a major mess. Few people outside OP understand how OP decisions get made internally, so it's hard for other EA members to see this coming or gauge how likely it is. My guess is that they wouldn't do this, but I have limited confidence. As such, it's hard for me to suggest that people make long-term plans (3+ years) in this area.
7. We know that OP generally maximizes expected value. What happens when narrow EV optimization conflicts with honesty and other cooperative values? Would their choices match the ones that other EAs might want? I believe that FTX justified their bad actions using utilitarianism, for instance, and lots of businesses and nonprofits carry out highly Machiavellian and dishonest actions to advance their interests. Is it possible that EAs working under the OP umbrella are unknowingly supporting actions they might not condone? It's hard to know without much transparency and evaluation.
On the plus side, I think OP and CEA have improved a fair bit on this sort of thing in the last few years. OP seems to be working to ensure that grantees follow certain basic managerial criteria. New hires and operations capacity have come in, which seems to have helped.
I’ve previously discussed my thinking on the potential limitations we’re getting from having small orgs here. Also, I remember that Oliver Habryka has repeatedly mentioned the lack of leadership around this scene—I think that this topic is one thing he was sort-of referring to.
Ultimately, my guess is that OP has certain goals they want to achieve, and it’s unlikely they or the other funders will want to take many of the responsibilities that I suggest here.
Given that, I think it would be useful for people in the EA ecosystem to understand this and respond accordingly. I think that our funding situation really needs diversification, and I think that funders willing to be more agentic in crucial areas that are currently lacking could do a lot of good. I expect that when it comes to “senior leadership”, there are some significant gains to be made, if the right people and resources can come together.
Thanks for writing this Ozzie! :) I think lots of things about the EA community are confusing for people, especially relationships between organizations. As we are currently redesigning EA.org it might be helpful for us to add some explanation on that site. (I would be interested to hear if anyone has specific suggestions!)
From my own limited perspective (I work at CEA but don’t personally interact much with OP directly), your impression sounds about right. I guess my own view of OP is that it’s better to think of them as a funder rather than a collaborator (though as I said I don’t personally interact with them much so haven’t given this much thought, and I wouldn’t be surprised if others at CEA disagree). They have their own goals as an organization, and it’s not necessarily bad if those goals are not exactly aligned with the overall EA community. My understanding is that it’s very standard for projects to adapt their pitches for funders that do not have the same goals/values as them. For example, I’m not running the Forum in a way that would maximize career changes[1] (TBH I don’t think OP would want me to do this anyway), but it’s helpful to include data we have about how the Forum affects career changes when writing a funding proposal[2]. In fact, no one at OP has ever asked me to maximize career changes as a requirement before or after receiving funding, nor do I recall anyone at OP ever asking me to make any changes to the Forum (OP staff do provide feedback but I personally weigh those mostly relative to how much I think they understand the Forum — for example, I’d probably weigh Lizka’s feedback higher than anyone at OP).
I acknowledge that this is complicated by the fact that CEA likely has a unique relationship with OP (due to our large size relative to other community building orgs, long history working in this space, and the fact that our current CEO used to work at OP), so I expect that my own experience with OP does not necessarily generalize to other fundees. Also OP is the overwhelmingly largest funder for EA community building, and so the extent to which they are not aligned with the overall EA community does matter, as money straightforwardly gives them power and influence, though I don’t personally have a good picture of the practical effects.
I think that having these discussions in a public community space is valuable, so I appreciate you sharing this here!
For the sake of this comment, I’m assuming that Ozzie’s description accurately describes OP’s view, though I have never talked with anyone at OP about this so I don’t actually know if it’s accurate.
Note that I care about improving the world, and I think that getting people to do high-impact jobs is in fact a good way to make the world better.
This seems directionally correct, but I would add more nuance.
While OP, as a grantmaker, has a goal it wants to achieve with its grants (and they wouldn’t be EA aligned if they didn’t), this doesn’t necessarily mean they are very short term. The Open Phil EA/LT Survey seems to me to show best what they care about in outcomes (talent working in impactful areas) but also how hard it is to pinpoint the actions and inputs needed. This leads me to believe that OP instrumentally cares about the community/ecosystem/network as it needs multiple touchpoints and interactions to get most people from being interested in EA ideas to working on impactful things.
On the other side, we use the term community in confusing ways. I was on a Community Builder Grant by CEA for two years when working at EA Germany, which many call national community building. What we were actually doing was working on the talent development pipeline, trying to find promising target groups, developing them and trying to estimate the talent outcomes.
Working on EA as a social movement/community while being paid is challenging. On one hand, I assume OP would find it instrumentally useful (see above) but still desire to track short-term outcomes as a grantmaker. As a grant recipient, I felt I couldn’t justify any actions that lacked a clear connection between outcomes and impact. Hosting closed events for engaged individuals in my local community, mentoring, having one-on-ones with less experienced people, or renting a local space for coworking and group events appeared harder to measure. I also believe in the norm of doing this out of care, wanting to give back to the community, and ensuring the community is a place where people don’t need to be compensated to participate.
Thanks for the details here!
> would add more nuance
I think this is a complex issue. I imagine it would be incredibly hard to give it a really robust write-up, and definitely don’t mean for my post to be definitive.
I think this is downstream of a lot of confusion about what 'Effective Altruism' really means, and I realise I don't have a good definition any more. In fact, that all of the below can be criticised sort of explains why EA gets seemingly infinite criticism from all directions.
Is it explicit self-identification?
Is it explicit membership in a community?
Is it implicit membership in a community?
Is it if you get funded by OpenPhilanthropy?
Is it if you are interested or working in some particular field that is deemed “effective”?
Is it if you believe in totalising utilitarianism with no limits?
Is it if you always justify your actions with quantitative cost-effectiveness analyses where your chosen course of action is the top-ranked one?
Is it if you behave a certain way?
Because in many ways I don’t count as EA based off the above. I certainly feel less like one than I have in a long time.
For example:
> I think a lot of EAs assume that OP shares a lot of the same beliefs they do.
I don't know if this refers to some gestalt 'belief' that OP might have, or Dustin's beliefs, or some kind of 'intentional stance' regarding OP's actions. While many EAs share some beliefs (I guess), there's also a whole range of variance within EA itself, and the fundamental issue is that I don't know if there's something which can bind it all together.
I guess I think the question should be less “public clarification on the relationship between effective altruism and Open Philanthropy” and more “what does ‘Effective Altruism’ mean in 2025?”
I had Claude rewrite this, in case the terminology is confusing. I think its edit is decent.
---
The EA-Open Philanthropy Relationship: Clarifying Expectations
The relationship between Effective Altruism (EA) and Open Philanthropy (OP) might suffer from misaligned expectations. My observations:
- OP funds the majority of EA activity
- Many EAs view OP as fundamentally aligned with EA principles
- OP deliberately maintains distance from EA and doesn't claim to be an "EA organization"
- EAs often assume OP leadership is somewhat accountable to the EA community, while OP leadership likely disagrees
- Many EAs see their community as a unified movement with shared goals and mutual support
- OP appears to view EA more transactionally—as a valuable resource pool for talent, ideas, and occasionally money
This creates a fundamental tension. OP approaches the relationship through a cost-benefit lens, funding EA initiatives when they directly advance specific OP goals (like AI safety research). Meanwhile, many EAs view EA as a transformative cultural movement with intrinsic value beyond any specific cause area.
These different perspectives manifest in competing priorities:
EA community-oriented view prioritizes:
- Long-term community health and growth
- Individual wellbeing of community members
- Building EA's reputation for honesty and trustworthiness
Transactional view prioritizes:
- Short-term talent pipeline and funding opportunities
- Risk management (not wanting EA activities to wind up reflecting poorly on OP)
- Minimizing EA criticism of OP and OP activities (this is both annoying to deal with, and could hurt their specific activities)
This disconnect explains why some people might feel betrayed by EA. Recruiters often promote EA as a supportive community/movement (which resonates better), but if the funding reality treats EA more as a talent network, there’s a fundamental misalignment.
Another thought I’ve had: “EA Global has major community and social movement vibes, but has the financial incentives in line with a recruiting fair.”
Both perspectives can coexist, but greater clarity could be really useful here.