Fund Causes Open Phil Underfunds (Instead of Your Most Preferred Causes)
Key Takeaways
Optimizing your giving's effect on "EA's portfolio" implies you should fund the causes your value system thinks are most underfunded by EA's largest allocators (e.g. Open Phil and SFF).
These causes aren't necessarily your value system's most preferred causes. ("Preferred" = the ones you'd allocate the plurality of EA's resources to.)
For the typical EA, this would likely imply donating more to animal welfare, which is currently heavily underfunded under the typical EA's value system.
Opportunities Open Phil is exiting from, including invertebrates, digital minds, and wild animals, may be especially impactful.
Alice's Investing Dilemma: A Thought Experiment
Alice is a conservative investor who prefers the risk-adjusted return of a portfolio of 70% stocks and 30% bonds. Along with 9 others, Alice has been allocated $1M to split between stocks and bonds however she sees fit. The combined $10M portfolio will be held for 10 years, and its profits or losses will be split equally among the 10 portfolio managers. The other 9 portfolio managers tell Alice they're planning to go with 100% stocks.
Alice's preferred asset is stocks (in the sense that if she could control the whole combined portfolio, she'd allocate the majority to stocks). However, the underallocated asset (by Alice's risk-adjusted return preference) is bonds. In this case, Alice best realizes her preferences by allocating her entire $1M to bonds! This holds even though Alice prefers stocks to bonds.
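To make the arithmetic explicit, here's a minimal sketch in Python using only the figures from the thought experiment: Alice controls $1M of the $10M pool, so the combined bond share can only range from 0% to 10%. Going all-in on bonds pushes it to 10%, the closest achievable point to her preferred 30%.

```python
# Minimal sketch of Alice's dilemma: she controls $1M of a $10M pool,
# the other 9 managers hold 100% stocks, and she prefers 70% stocks / 30% bonds.
target_bond_share = 0.30
others_bonds = 0.0          # $M of bonds held by the other 9 managers
pool_size = 10.0            # $M in the combined portfolio

def combined_bond_share(alice_bond_fraction: float) -> float:
    """Bond share of the combined portfolio if Alice puts this fraction of her $1M in bonds."""
    return (others_bonds + 1.0 * alice_bond_fraction) / pool_size

# Alice picks whichever allocation gets the combined portfolio closest to her target.
best = min([0.0, 0.3, 0.7, 1.0],
           key=lambda f: abs(combined_bond_share(f) - target_bond_share))
print(best, combined_bond_share(best))  # -> 1.0, 0.1: all-in on bonds, combined share 10%
```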
In Charity, We Should Optimize The Portfolio of Everyone's Actions
In Alice's investing dilemma, the premise that's doing the work is that Alice wants to optimize the combined portfolio instead of her particular $1M share.
In the case of effective giving, we typically focus on our giving's direct impact, but not on how it fits into the portfolio of the net effect of everyone's actions. But optimizing the portfolio of everyone's actions seems to directly follow from EA principles:
The recipient of charity doesn't care who's giving it, so it would seem like a bias to be focused on the part of "the portfolio of everyone's actions" that is your actions rather than the whole (or any other particular part).
Reducing funging to ensure counterfactual impact is already one way of reasoning about the effect of your giving on the portfolio of everyone's actions. This proposal simply extends that idea to also optimize for your value system's objectives.
There are many legitimate reasons not to overemphasize optimizing the portfolio of everyone's actions, such as many people's concerns about personally making a difference. However, I think we should put more thought into optimizing the portfolio of everyone's actions than we currently do.
Theoretical Implications
You should prefer funding the causes your value system thinks are the world's most underallocated.
These causes are not necessarily your value system's most preferred causes! ("Preferred" = the ones you'd allocate the plurality of the world's resources to.)
"The Portfolio of Everyone's Actions" vs "EA's Portfolio"
In theory, this post argues that you should be optimizing the portfolio of the net effect of anything anyone will ever do (under your value system). But that's obviously intractable!
One simplifying assumption is that non-EA-aligned actions have negligible net effect relative to EA-aligned actions. In that case, "optimizing the portfolio of everyone's actions" reduces to optimizing the portfolio of EA's resource allocation. If you think this assumption is generally accurate, some practical recommendations follow.
Practical Recommendations
Many EAs split their personal donations between cause areas including global health, animal welfare, and longtermism. If you're optimizing EA's portfolio, you probably shouldn't do this. Instead, you should identify which cause area your value system says EA most underfunds, and only donate there.
I personally believe longtermist interventions have the highest expected value, and I would allocate the plurality of EA resources to them if I could. But due to risk aversion, I think a substantial portion of our resources should go towards reducing near-term suffering, which animal welfare interventions do most cost-effectively. Since my value system says animal welfare is more underfunded than longtermism, when optimizing EA's portfolio, it seems best for me to donate only to animal welfare. This holds even though longtermism is my preferred cause area.
For other value systems, the implications could be completely different! If Bob doesn't care about animals, but would want to split EA's resources between 30% global health and 70% longtermism, optimizing EA's portfolio by Bob's value system means he should donate only to longtermist interventions.
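As a minimal sketch of this decision rule: compare your ideal split of EA's resources to the current split, and donate to whichever cause falls furthest short. The "ideal" numbers below are Bob's from the example; the "current" numbers are placeholders for illustration, not actual EA figures.

```python
# Sketch: pick the cause your value system says EA most underfunds.
# "ideal" is the donor's preferred split of EA's resources (Bob's, from the example);
# "current" is a placeholder for EA's actual current allocation.
ideal = {"global health": 0.30, "animal welfare": 0.00, "longtermism": 0.70}
current = {"global health": 0.70, "animal welfare": 0.06, "longtermism": 0.16}

# Shortfall = how far each cause's current share falls below the donor's ideal share.
shortfall = {cause: ideal[cause] - current.get(cause, 0.0) for cause in ideal}
most_underfunded = max(shortfall, key=shortfall.get)
print(most_underfunded)  # -> "longtermism" under Bob's values and these placeholder shares
```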
EA's Current Resource Allocations
Knowing EA's current resource allocations would be helpful if you think this post's recommendations have merit. The most complete and up-to-date reference I know of is Tyler Maule's from November 2023:
| Global health | Animal welfare | Longtermism | Meta |
| --- | --- | --- | --- |
| 70.4% | 5.5% | 16.2% | 7.9% |
The consensus of EA leaders and the EA community is that global health is overfunded. If global health is excluded, Tyler's aggregation gives:
| Animal welfare | Longtermism | Meta |
| --- | --- | --- |
| 18.7% | 54.5% | 26.8% |
Some other potentially helpful aggregations:
Open Phil grants by cause area by Hamish McDoodles (updated daily)
Resource allocations by cause area by Ben Todd (as of 2019)
If anyone is interested in maintaining a more complete and up-to-date aggregation, that could be impactful. The EA community could use that as a canonical resource to better target EA's most underallocated causes.
While I think this piece is right in some sense, seeing it written out clearly, it feels like there is something uncooperative and possibly destructive about it. To take the portfolio management case:
1. Why do the other fund managers prefer 100% stocks? Is this a thoughtful decision you are unthinkingly countering?
2. Each fund manager gets better outcomes if they keep their allocation secret from others.
I think I'm most worried about (2): it would be bad if OP made their grants secret or individuals lied about their funding allocation in EA surveys.
Tweaking the fund manager scenario to be a bit more stark:
- There are 100 fund managers.
- 50 of them prefer fully stocks; 50 prefer an even split between stocks and bonds.
- If they each decide individually, you'd get an overall allocation of 75% stocks and 25% bonds.
- If instead they all fully follow the lessons of this post, the ones that prefer bonds go 100% bonds, and the overall allocation is 50% stocks and 50% bonds.
It feels to me that the 75-25 outcome is essentially the right one, if the two groups are equally likely to be correct. On the other hand, the adversarial 50-50 outcome is one group getting everything they want.
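A quick check of the arithmetic behind those two outcomes (a minimal sketch; the manager counts and preferred splits are the ones stated above):

```python
# 100 managers with equal shares: 50 prefer 100% stocks, 50 prefer a 50/50 stock/bond split.
n_all_stocks, n_split = 50, 50

# Each manager allocates their own share according to their own preference.
individual_stock_share = (n_all_stocks * 1.0 + n_split * 0.5) / 100
print(individual_stock_share)  # -> 0.75 (75% stocks, 25% bonds)

# Each manager instead targets the combined portfolio: the 50/50 group goes all bonds.
adversarial_stock_share = (n_all_stocks * 1.0 + n_split * 0.0) / 100
print(adversarial_stock_share)  # -> 0.50 (50% stocks, 50% bonds)
```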
Note that I don't think this is an issue with other groups covering the gaps left by the recent OP shift away from some areas. It's not that OP thought those areas should receive less funding, but that GV wanted to pick their battles. In that case, it seems fine and good for external groups that do accept the case for funding to respond by supporting work in these areas. Which Moskovitz confirms: "I'm explicitly pro-funding by others" and "I'd much prefer to just see someone who actually feels strongly about that take the wheel."
(This also reminds me of the perpetual debate about whether you should vote things on the Forum up/down directionally vs based on how close the vote total currently is to where you think it should be.)
I think these unsavory implications you enumerate are just a consequence of applying game theory to donations, rather than following specifically from my post's arguments.
For example, if Bob is all-in on avoiding funging and doesn't care about norms like collaboration and transparency, his incentives are exactly as you describe: give zero information about his value system, and make donations secretly after other funders have shown their hands.
I think you're completely right that those are awful norms, and we shouldn't go all-in on applying game theory to donations. This goes both for avoiding funging and for my post's argument about optimizing "EA's portfolio".
However, just as we can learn important lessons from the concept of funging while discouraging the bad, I still think this post is valuable and includes some nontrivial practical recommendations.
I initially found myself nodding in agreement but then I realised a confusion I have:
Why should a donor/grantmaker limit their consideration of what is most underfunded to the EA community?
After all, the EA community is a nebulous community with porous boundaries. E.g. we count Open Phil, but what about The Navigation Fund? Bill and Melinda Gates Foundation? And even if we can define the boundaries, what do we actually gain by focusing on this specific subset of donors?
If you instead focus on "what is most underfunded at the global level", then the question returns to the same broad question of cause prioritisation ("your value system's most preferred causes").
I think that's a great point! Theoretically, we should count all of those foundations and more, since they're all parts of "the portfolio of everyone's actions". (Though this would simply further cement the takeaway that global health is overfunded.)
Some reasons for focusing our optimization on "EA's portfolio" specifically:
- Believing that non-EA-aligned actions have negligible effect compared to EA-aligned actions.
- Since we wouldn't have planned to donate to ineffective interventions/cause areas anyway, it's unclear what effect including those in the portfolio would have on our decisionmaking, which is one reason why they may be safely ignorable.
- It's far more tractable to derive EA's portfolio than the portfolio of everyone's actions, or even the portfolio of everyone's charitable giving.
But I agree that these reasons aren't necessarily decisive. I just think there are enough reasons to do so, and this assumption has enough simplifying power, that for me it's worth making.
Yeah it might be more tractable.
Focusing solely on EAs has a bunch of weird effects though.
E.g. I've been thinking about some "safeguarding democracy" type interventions for longtermist reasons. If I looked at EA funding, I'd presumably conclude that the area was massively underfunded: almost no one is working on this. Whereas looking in a global sense, the initial impression is that it's a very large, well-funded area. (Maybe it's still a useful heuristic though, because explicitly longtermist funding and effort might focus on quite different subcomponents of the broad topic?)
And another one is just that how liberal you are in your definitions of what's EA or not can make quite a big difference. E.g. plausibly by a factor of 2 in the case of animal advocacy.
(No need to reply, I'm just musing.)
Thanks!
Another consideration I just encountered in a grantmaking decision:
Other decision-makers in EA might be those whose views we are most inclined to defer to or cooperate with. So upon noticing that an opportunity is underfunded in EA specifically but not in the world at large, arguably I should update away from wanting to fund it when considering Open Phil and EA donations specifically, as opposed to donations in the world more broadly. Whereas I think the thrust of your post implies the opposite.
(@Ariel Simnegar, although again no need to reply. Possibly I'm getting into an unnecessary tangle by considering this "EA spending vs world spending" lens.)
I think the presentation of this argument here misses some important considerations:
1. The way that you want us to act with respect to OP is already the way that OP is trying to act with respect to the rest of the world.
2. The same considerations that lead OP to choose not to allocate all their funds to the highest expected value cause should also be relevant for individual donors, and could legitimately mean that they should diversify as well. There seems to be an inconsistency in saying these considerations are valid for OP but not for individuals.
3. Even if you are a pure marginal EV maximizer (you think these considerations are invalid for OP and for individuals), OP's donations won't always be relevant to your donation decisions, and if they are, it is the absolute amounts donated, rather than the percentages, that are relevant.
The way that you want us to act with respect to OP is already the way that OP is trying to act with respect to the rest of the world
EAs don't fund causes based purely on their scale (otherwise tonnes of things EAs ignore would score highly, e.g. vaccination programs in rich countries). A core part of EA is looking for causes which are neglected. We look for the areas that are receiving the least funding relative to what they would receive in our ideal world, because these are likely to be the areas where our donations will have the highest marginal impact.
This is the reply to people who argue "oh, you want local charities to disappear and to send all the money to malaria nets". The reply is: "No! In my ideal world, malaria nets would quickly attract all the funding they need. Then there would still be plenty of money left over for other things. But I think I should look at the world I actually live in, recognize that malaria nets are outrageously underfunded, and give all my resources there."
So in a sense, the argument you are making here isn't anything new. You are just saying we should try to act towards other EAs in a similar way to how EAs as a group act towards the rest of the world. And I don't disagree with this. But I think we should go all the way. I think we should treat other EAs in the same way that we treat the rest of the world. If I understand your argument correctly, you are trying to draw a distinction between the EA community and everyone else.
The same considerations that lead OP to choose not to allocate all their funds to the highest expected value cause should also be relevant for individual donors
OP do not allocate all of their funding to the "best" cause. Even if OP were a pure EV maximizer, they might have valid reasons not to do this, because they have such a big budget. It may be that diminishing marginal returns mean that the "best" cause stops being the best once OP have given a certain level of funds to it, at which point they should switch to funding another cause instead.
But my impression is that this is not OP's reason for donating to multiple causes (or at least not their only reason). They are not purely trying to maximize expected value, or at least not in a naive first-order way. One reason to diversify might be donor risk aversion, as you mention (e.g. you want to maximize EV while bounding the risk that you have no positive impact at all), and there are plenty of other considerations that might come into it too, e.g. a sense of duty to a certain cause, reputation, or a belief in unquantifiable uncertainty and the impossibility of making certain cause comparisons.
But if these considerations are valid for OP, then they should also be relevant for individual donors. For example, if an individual donor wants to bound the risk that they have no impact, then that might well mean not donating everything to the cause they think is most underfunded by OP. It would only make sense to do this if they had a weird type of risk aversion where they want to bound the risk that the EA community as a whole has no positive impact, but are unconcerned about their own donations' risk. This seems very arbitrary! Either they should care about the risk for their own donations, and should diversify, or they should be concerned with all of humanity's donations, in which case OP should not be diversifying either!
Pure EV maximizers don't care about percentages anyway
You could bite the bullet and say that neither OP nor individual donors should be diversifying their donations (except when faced with diminishing marginal utility). These individual donors should be donating everything to one cause (and probably one charity, unless they have a lot to give!). But even for these donors, it's not which causes OP underfunds that really matters; it's which causes all of humanity underfunds. So it is not the percentages of OP's funding allocation that matter, it's the absolute value.
If OP are a relatively small player in a cause area (global health..?) then their donation decisions are unlikely to be especially relevant to the individual donor. If they thought global health was the top cause before OP donations were taken into account, it probably still will be afterwards. But if OP are a relatively big player (animal welfare..?) then their donations are more relevant, due to diminishing marginal utility. But it is the absolute amount of funding they are moving, not the percentages, which will determine this.
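To spell out that last point with a minimal sketch (the logarithmic returns curve and dollar figures below are illustrative assumptions, not real cost-effectiveness estimates): the marginal value of an extra dollar depends on the absolute funding a cause already receives, so two funders moving the same percentages at very different scales face very different marginal returns.

```python
# Illustrative diminishing-returns curve: total impact = k * ln(1 + dollars),
# so the marginal value of one extra dollar is k / (1 + dollars).
def marginal_value(current_funding: float, k: float = 1.0) -> float:
    """Approximate value of one extra dollar given current absolute funding."""
    return k / (1.0 + current_funding)

# Same percentage split, very different absolute levels:
small_total, big_total = 10e6, 1e9      # a funder moving $10M vs one moving $1B
share = 0.16                            # the cause gets 16% either way
print(marginal_value(share * small_total))  # marginal dollar is worth far more here
print(marginal_value(share * big_total))    # than here, despite identical percentages
```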
Note, from this post:
I suppose this wouldn't include new orgs in wild animal welfare and invertebrate welfare, though.
I'm curious how many people actually split their individual giving across cause areas. It seems like a strange decision, for all the reasons you outline.
Anecdotally, most people I know who I've asked do that!
I strongly agree: the comparative underfunding of these areas always felt off to me, given their very large numbers of individuals and low-hanging fruits.
However, it feels like more and more people are recognizing the need for more funding for animal welfare, given the results of the recent debate.
I think this is a really compelling addition to EA portfolio theory. Two half-formed thoughts:
Does portfolio theory apply better at the individual level than the community level? I think something like treating your own contributions (giving + career) as a portfolio makes a lot of sense, if you're explicitly trying to hedge personal epistemic risk. I think this is a slightly different angle on one of Jeff's points: is this "k-level 2" aggregate portfolio a "better" aggregation of everyone's information than the "k-level 1" of whatever portfolio emerges from everyone individually optimising their own portfolios? You could probably look at this analytically… might put that on the to-do list.
At some point what matters is specific projects...? Like when I think about "underfunded", I'm normally thinking there are good projects with high expected ROI that aren't being done, relative to some other cause area where the marginal project has a lower ROI. Maybe my point is something like: underfunding, and accounting for it, should be done at a different stage of the donation process, rather than by looking at what the overall % breakdown of the portfolio is. Maybe we're more private equity than index fund.
I think the individual level applies if you have risk aversion on a personal level. For example, I care about having personally made a difference, which biases me towards certain individually less risky ideas.
I think it's a tough situation because k=2 includes these unsavory implications Jeff and I discuss. But as I wrote, I think k=2 is just what happens when people think about everyone's donations game-theoretically. If everyone else is thinking in k=2 mode but you're thinking in k=1 mode, you're going to get funged such that your value system's expression in the portfolio could end up being much less than what is "fair". It's a bit like how the Nash equilibrium in the Prisoner's Dilemma is "defect-defect".
I agree with this. My post frames the discussion in terms of cause areas for simplicity and since the lessons generalize to more people, but I think your point is correct.
This is an understandable point to leave out, but one issue with the portfolio analogy is that, as far as I can tell, it assumes all "EA" money is basically the same. However, big donors might have advantages in certain areas, for instance if a project is hard to evaluate without extensive consultation with experts, or if a project can only be successful if it has a large and guaranteed funding stream. As such, I'm not sure it holds that, if somebody thinks Open Phil is underinvesting in longtermism compared to the ideal allocation, then they should give to longtermist charities: the opportunities available to Open Phil might be significantly stronger than the ones available to donors, especially ones who don't have a technical background in the area.
"Topping up" OP grants does reasonably well in this scenario, no?
I agree with the overall conclusion of this post, but not completely with the reasoning. In particular, there is an important difference between allocating investments and allocating charitable donations: for investments it makes sense to be (at least somewhat) risk averse, while for donations a simple strategy of maximizing expected benefits makes perfect sense.
Even a risk-neutral approach to charitable donations will have to spread its investments, however, because there is only so much money that the most effective charity can absorb before its funding gap is filled, which makes the next best charity the new most effective one.
For a big organization such as OP, this can become a real problem in a cause area where there are many charities with high effectiveness but (relatively) low funding gaps. This might be part of the explanation of why OP gives more to global health, where there are very large organizations that can effectively absorb a lot of funding, than to animal welfare.
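A minimal sketch of that dynamic (the effectiveness figures and funding gaps below are made up for illustration): a risk-neutral funder fills the most effective charity's remaining funding gap first, then moves to the next, so a large enough budget naturally ends up spread across several organizations.

```python
# Greedy risk-neutral allocation: fund the most effective remaining charity
# until its funding gap is filled, then move on. All numbers are made up.
charities = [
    {"name": "A", "value_per_dollar": 3.0, "gap": 2e6},
    {"name": "B", "value_per_dollar": 2.5, "gap": 1e6},
    {"name": "C", "value_per_dollar": 1.0, "gap": 50e6},
]

def allocate(budget: float) -> dict:
    grants = {}
    for c in sorted(charities, key=lambda c: c["value_per_dollar"], reverse=True):
        grant = min(budget, c["gap"])
        if grant > 0:
            grants[c["name"]] = grant
            budget -= grant
    return grants

print(allocate(10e6))  # a $10M budget spills past A and B into the much larger C
```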
For small individual donors, this means that there are likely opportunities to make very effective donations to organizations that might be too new or too small to be picked up by the big donors. You might even help them grow to the size where they can effectively absorb much larger donations.
So to reiterate, I think it makes sense to prefer donating to smaller charities and cause areas as an individual donor, but the reason is that they might be overlooked by the big donors, not to "balance out" some imaginary overall EA portfolio.
I don't think most people take it as a given that maximizing expected value makes perfect sense for donations. In the theoretical limit, many people balk at conclusions like accepting a gamble with a 51% chance of doubling the universe's value and a 49% chance of destroying it. (Especially so at the implication of continuing to accept that gamble until the universe is almost surely destroyed.) In practice, people have all sorts of risk aversion, including difference-making risk aversion, avoiding worst-case scenarios, and reducing ambiguity.
I argue here against the view that animal welfare's diminishing marginal returns would be sufficient for global health to win out against it at OP levels of funding, even if one is risk neutral.
So long as small orgs apply to large grantmakers like OP, and so long as one is locally confident that OP is trying to maximize expected value, I'd actually expect that OP's full-time staff would generally be much better positioned to make these kinds of judgments than you or I. Under your value system, I'd echo Jeff's suggestion that you should "top up" OP's grants.
My main reason for trying to be mostly risk-neutral in my donations is that my donations are very small relative to the total size of the problem, while this is not the case for my personal investments. I would donate differently (more risk-averse) if I had control over a significant part of all charitable donations in a given area. In particular, I do not endorse double-or-nothing gambling on the fate of the universe.
You make a good point that OP is more likely to make judgements regarding small donation opportunities, so I'll have to revise my position that small donors should specifically seek out smaller organizations to donate to. But the same argument for "topping up" OP donations could equally be made to support simply donating to an EA fund (which I expect will also take into account how their donations funge with OP).