Worldview Diversification

In principle, we try to find the best giving opportunities by comparing many possibilities. However, many of the comparisons we’d like to make hinge on very debatable, uncertain questions.
For example:
Some people think that animals such as chickens have essentially no moral significance compared to that of humans; others think that they should be considered comparably important, or at least 1-10% as important. If you accept the latter view, farm animal welfare looks like an extraordinarily outstanding cause, potentially to the point of dominating other options: billions of chickens are treated incredibly cruelly each year on factory farms, and we estimate that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. But if you accept the former view, this work is arguably a poor use of money.
Some have argued that the majority of our impact will come via effects on the long-term future. If true, this could be an argument that reducing global catastrophic risks has overwhelming importance, or that accelerating scientific research does, or that improving the overall functioning of society via policy does. Given how difficult it is to make predictions about the long-term future, it’s very hard to compare work in any of these categories to evidence-backed interventions serving the global poor.

We have additional uncertainty over how we should resolve these sorts of uncertainty. We could try to quantify our uncertainties using probabilities (e.g. “There’s a 10% chance that I should value chickens 10% as much as humans”), and arrive at a kind of expected value calculation for each of many broad approaches to giving. But most of the parameters in such a calculation would be very poorly grounded and non-robust, and it’s unclear how to weigh calculations with that property. In addition, such a calculation would run into challenges around normative uncertainty (uncertainty about morality), and it’s quite unclear how to handle such challenges.
In this post, I’ll use “worldview” to refer to a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving. One worldview might imply that evidence-backed charities serving the global poor are far more worthwhile than either of the types of giving discussed above; another might imply that farm animal welfare is; another might imply that global catastrophic risk reduction is. A given worldview represents a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty, as laid out in the third bullet point above). Some slightly more detailed descriptions of example worldviews are in a footnote.[1]
A challenge we face is that we consider multiple different worldviews plausible. We’re drawn to multiple giving opportunities that some would consider outstanding and others would consider relatively low-value. We have to decide how to weigh different worldviews, as we try to do as much good as possible with limited resources.
When deciding between worldviews, there is a case to be made for simply taking our best guess[2] and sticking with it. If we did this, we would focus exclusively on animal welfare, or on global catastrophic risks, or on global health and development, or on another category of giving, with no attention to the others. However, that’s not the approach we’re currently taking.
Instead, we’re practicing worldview diversification: putting significant resources behind each worldview that we find highly plausible. We think it’s possible for us to be a transformative funder in each of a number of different causes, and we don’t—as of today—want to pass up that opportunity to focus exclusively on one and get rapidly diminishing returns.
This post outlines the reasons we practice worldview diversification. In a nutshell:
I will first discuss the case against worldview diversification. When seeking to maximize expected positive impact, without being worried about the “risk” of doing no good, there is a case that we should simply put all available resources behind the worldview that our best-guess thinking favors.
I will then list several reasons for practicing worldview diversification, in situations where (a) we have high uncertainty and find multiple worldviews highly plausible; (b) there would be strongly diminishing returns if we put all our resources behind any one worldview.
First, under a set of basic assumptions including (a) and (b) above, worldview diversification can maximize expected value.
Second, if we imagined that different worldviews represented different fundamental values (not just different opinions, such that one would ultimately be “the right one” if we had perfect information), and that the people holding different values were trying to reach agreement on common principles behind a veil of ignorance (explained more below), it seems likely that they would agree to some form of worldview diversification as a desirable practice for anyone who ends up with outsized resources.
Practicing worldview diversification means developing staff capacity to work in many causes. This provides option value (the ability to adjust if our best-guess worldview changes over time). It also increases our long-run odds of having large effects on the general dialogue around philanthropy, since we can provide tangibly useful information to a larger set of donors.
There are a number of other practical benefits to working in a broad variety of causes, including the opportunity to use lessons learned in one area to improve our work in another; presenting an accurate public-facing picture of our values; and increasing the degree to which, over the long run, our expected impact matches our actual impact. (The latter could be beneficial for our own, and others’, ability to evaluate how we’re doing.)
Finally, I’ll briefly discuss the key conditions under which worldview diversification seems like a good idea, and give some rough notes on how we currently implement it in practice.
Note that worldview diversification is simply a broad term for putting significant resources behind multiple worldviews—it does not mean anything as specific as “divide resources evenly between worldviews.” This post discusses benefits of worldview diversification, without saying exactly how (or to what degree) one should allocate resources between worldviews. In the future, we hope to put more effort into reflecting on—and discussing—which specific worldviews we find most compelling and how we weigh them against each other.
Also note that this post focuses on deciding how to allocate resources between different plausible already-identified causes, not on the process for identifying promising causes.
The case against worldview diversification
It seems likely that if we had perfect information and perfect insight into our own values, we’d see that some worldviews are much better guides to giving than others. For a relatively clear example, consider GiveWell’s top charities vs. our work so far on farm animal welfare:
GiveWell estimates that its top charity (Against Malaria Foundation) can prevent the loss of one year of life for every $100 or so.
We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent.
If you value chicken life-years equally to human life-years, this implies that corporate campaigns do about 10,000x as much good per dollar as top charities. If you believe that chickens do not suffer in a morally relevant way, this implies that corporate campaigns do no good.[3]
One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x). If one values humans astronomically more, this instead implies that top charities are a far better use of funds. It seems unlikely that the ratio would be in the precise, narrow range needed for these two uses of funds to have similar cost-effectiveness.
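As a concreteness check, here is a minimal sketch of the arithmetic above in Python. The figures are the rough, highly uncertain estimates quoted in the text, and the loop simply varies how much more one weights a human life-year than a chicken life-year:

```python
# Rough cost-effectiveness comparison using the post's illustrative figures.
dollars_per_human_life_year = 100      # GiveWell's rough estimate for its top charity
hens_spared_per_dollar = 200           # corporate cage-free campaign estimate
years_of_improved_life_per_hen = 2     # assumption from the text
welfare_gain_fraction = 0.25           # "25%-improved life"

human_life_years_per_dollar = 1 / dollars_per_human_life_year                  # 0.01
hen_life_years_per_dollar = (hens_spared_per_dollar
                             * years_of_improved_life_per_hen
                             * welfare_gain_fraction)                          # 100.0

for human_weight in [1, 10, 100, 10_000]:  # value of a human life-year relative to a hen's
    ratio = (hen_life_years_per_dollar / human_weight) / human_life_years_per_dollar
    print(f"humans weighted {human_weight:>6}x: campaigns look {ratio:>8,.1f}x as good per dollar")
# 1x -> 10,000x; 100x -> 100x; only near a ~10,000x weighting do the two options break even.
```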
I think similar considerations broadly apply to other comparisons, such as reducing global catastrophic risks vs. improving policy, though quantifying such causes is much more fraught.

One might therefore imagine that there is some “best worldview” (if we had perfect information) that can guide us to do far more good than any of the others. And if that’s right, one might argue that we should focus exclusively on a “best guess worldview”[4] in order to maximize how much good we do in expected value terms. For example, if we think that one worldview seems 10,000x better than the others, even a 1-10% chance of being right would still imply that we can do much more good by focusing on that worldview.
This argument presumes that we are “risk neutral”: that our goal is only to maximize the expected value of how much good we do. That is, it assumes we are comfortable with the “risk” that we make the wrong call, put all of our resources into a misguided worldview, and ultimately accomplish very little. Being risk neutral to such a degree often seems strange to people who are used to investing metaphors: investors rarely feel that the possibility of doubling one’s money fully compensates for the possibility of losing it all, and they generally use diversification to reduce the variance of their returns (they aren’t just focused on expected returns). However, we don’t have the same reasons to fear failure that for-profit investors do. There are no special outsized consequences for “failing to do any good,” as there are for going bankrupt, so it’s a risk we’re happy to take as long as it’s balanced by the possibility of doing a great deal of good. The Open Philanthropy Project aims to be risk neutral in the way laid out here, though there are some other reasons (discussed below) that putting all our eggs in one basket could be problematic.
The case for worldview diversification
I think the case for worldview diversification largely hinges on a couple of key factors:
Strong uncertainty about which worldviews are most reasonable. We recognize that any given worldview might turn out to look misguided if we had perfect information—but even beyond that, we believe that any given worldview might turn out to look misguided if we reflected more rationally on the information that is available. In other words, we feel there are multiple worldviews that each might qualify for “what we should consider the best worldview to be basing our giving on, and the worldview that conceptually maximizes our expected value, if we thought more intelligently about the matter.” We could imagine someday finding any of these worldviews to be the best-seeming one. We feel this way partly because we see intelligent, reasonable people who are aware of the arguments for each worldview and still reject it.
Some people recognize that their best-guess worldview might be wrong, but still think that it is clearly the best to bet on in expected-value terms. For example, some argue that focusing on the far future is best even if there is a >99% chance that the arguments in favor of doing so are misguided, because the value of focusing on the far future is so great if the arguments turn out to be valid. In effect, these people seem to be leaving open no realistic possibility of changing their minds on this front. We have a different kind of uncertainty, one that I find difficult to model formally but that probably amounts to something along the lines of cluster thinking. All things considered—including things like our uncertainty about our fundamental way of modeling expected value—I tend to think of the different plausible worldviews as being in the same ballpark of expected value.
Diminishing returns to putting resources behind any given worldview. When looking at a focus area such as farm animal welfare or potential risks from advanced AI, it seems to me that giving in the range of tens of millions of dollars per year (over the next decade or so) can likely fund the best opportunities, help relevant fields and disciplines grow, and greatly improve the chances that the cause pulls in other sources of funding (both private donors and governments). Giving much more than this would hit strongly diminishing returns. For causes like these, I might roughly quantify my intuition by saying that (at the relevant margin) giving 10x as much would only accomplish about 2x as much. (There are other causes where this dynamic does not apply nearly as much; for example, we don’t see much in the way of diminishing returns when it comes to supporting cash transfers to the global poor.)
With these two factors in mind, there are a number of arguments for worldview diversification.
Expected value
When accounting for strong uncertainty and diminishing returns, worldview diversification can maximize expected value even when one worldview looks “better” than the others in expectation. One way of putting this is that if we were choosing between 10 worldviews, and one were 5x as good as the other nine, investing all our resources in that one would—at the relevant margin, due to the “diminishing returns” point—be worse than spreading across the ten.[5]
I think this dynamic is enhanced by the fact that there is so much we don’t know, and any given worldview could turn out to be much better or much worse than it appears for subtle and unanticipated reasons, including those related to flow-through effects.[6]
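The arithmetic behind this claim is spelled out in footnotes [5] and [6]; here is a minimal sketch that reproduces it. The power-law returns curve is just one convenient way of encoding “spending 10x as much accomplishes only about 2x as much,” and is an illustrative assumption for this sketch rather than a functional form the argument commits to:

```python
import math

ALPHA = math.log10(2)  # chosen so that a 10x spend yields exactly a 2x multiple on good done

def good(value_at_Y, multiples_of_Y):
    """Good accomplished by spending multiples_of_Y * $Y on a worldview worth value_at_Y per $Y."""
    return value_at_Y * multiples_of_Y ** ALPHA

# Footnote [5]: ten worldviews; the best-guess one is worth 5X at a $Y budget, the other nine are worth X.
diversified = sum(good(v, 1) for v in [5] + [1] * 9)   # $Y into each of the ten  -> 14X
concentrated = good(5, 10)                             # $10Y into the best guess -> ~10X
print(diversified, round(concentrated, 1))             # 14  10.0

# Footnote [6]: unknown to us, one of the ten is really worth 1000X and another really worth 0X.
# Diversifying captures the 1000X worldview for sure (at least 1008X in total);
# concentrating on the best guess only captures it 10% of the time.
concentrated_ev = 0.1 * good(1000, 10) + 0.1 * good(0, 10) + 0.8 * good(5, 10)
print(round(concentrated_ev))                          # ~208
```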
It isn’t clear to me how much sense it makes to think in these terms. Part of our uncertainty about worldviews is our uncertainty about moral values: to a significant degree, different worldviews might be incommensurate, in that there is no meaningful way to compare “good accomplished” between them. Some explicit frameworks have been proposed for dealing with uncertainty between incommensurate moral systems,[7] but we have significant uncertainty about how useful these frameworks are and how to use them.
Note that the argument in this section only holds for worldviews with reasonably similar overall expected value. If one believes that a particular worldview points to giving opportunities that are orders of magnitude better than others’, this likely outweighs the issue of diminishing returns.
The ethics of the “veil of ignorance”
Another case for worldview diversification derives from, in some sense, the opposite approach. Rather than thinking of different worldviews as different “guesses” at how to do the most good, such that each has an expected value and they are ultimately compared in the same terms, presume that different worldviews represent the perspectives of different people[8] with different, incommensurable values and frameworks. For example, it may be the case that some people care as deeply about animals as they do about people, while others don’t value animal welfare at all, and that no amount of learning or reflection would change any of this. When choosing between worldviews, we’re choosing which sorts of people we most identify and sympathize with, and we have strong uncertainty on the matter.
One way of thinking about the ethics of how people with different values should interact with each other is to consider a kind of veil of ignorance: imagine the agreements such people would come to about how they should use resources, if they were negotiating before knowing how many resources each of them would individually have available.[9] One such agreement might be: “If one of us ends up with access to vastly more resources than the others, that person should put some resources into the causes most important to each of us—up to some point of diminishing returns—rather than putting all the resources into that person’s own favorite cause.” Each person might accept (based on the diminishing returns model above) that if they end up with vastly more resources than the others, this agreement will end up making them worse off, but only by 50%; whereas if someone else ends up with vastly more resources, this agreement will end up making them far better off.
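Under the same illustrative returns curve used in the sketch above, the “worse off, but only by 50%” figure can be checked directly (again, this is a toy encoding of the diminishing-returns assumption, not a calculation the agreement itself depends on):

```python
import math

def good(multiples_of_Y):
    # Same toy curve as before: spending 10x as much accomplishes ~2x as much.
    return multiples_of_Y ** math.log10(2)

all_in_own_cause = good(10)   # keep the whole $10Y for your own favorite cause -> ~2.0
under_agreement = good(1)     # the agreement leaves $Y for it                  -> 1.0
print(under_agreement / all_in_own_cause)   # ~0.5: half as good by your own lights
```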
This is only a rough outline of what an appealing principle might look like. Additional details might be added, such as “The person with outsized resources should invest more in areas where they can be more transformative, e.g. in more neglected areas.”
We see multiple appealing worldviews that seem to have relatively few resources behind them, and we have the opportunity to have a transformative impact according to multiple such worldviews. Taking this opportunity is the ethical thing to do in the sense that it reflects an agreement we would have made under a “veil of ignorance,” and it means that we can improve the world greatly according to multiple different value sets that we feel uncertain between. I think that considering and putting weight on “veil of ignorance” based ethical concerns such as this one is a generally good heuristic for consequentialists and non-consequentialists alike, especially when one does not have a solid framework for comparing “expected good accomplished” across different options.
Capacity building and option value

Last year, we described our process of capacity building:

“Our goals, and our efforts, have revolved around (a) selecting focus areas; (b) hiring people to lead our work in these areas (see our most recent update); (c) most recently, working intensively with new hires and trial hires on their early proposed grant recommendations.
Collectively, we think of these activities as capacity building. If we succeed, the end result will be an expanded team of people who are (a) working on well-chosen focus areas; (b) invested (justifiably) with a great deal of trust and autonomy; (c) capable of finding many great giving opportunities in the areas they’re working on.”
In addition to building internal capacity (staff), we are hoping to support the growth of the fields we work in, and to gain knowledge over time that makes us more effective at working in each cause. Collectively, all of this is “capacity building” in the sense that it will, in the long run, improve our ability to give effectively at scale. There are a number of benefits to building capacity in a variety of causes that are appealing according to different worldviews (e.g., to building capacity in criminal justice reform, farm animal welfare, biosecurity and pandemic preparedness, and more).
One benefit is option value. Over time, we expect that our thinking on which worldviews are most appealing will evolve. For example, I recently discussed three key issues I’ve changed my mind about over the last several years, with major implications for how promising I find different causes. It’s very possible that ten years from now, some particular worldview (and its associated causes) will look much stronger to us than the others—and that it won’t match our current best guess. If this happens, we’ll be glad to have invested in years of capacity building so we can quickly and significantly ramp up our support.
Another long-term benefit is that we can be useful to donors with diverse worldviews. If we worked exclusively in causes matching our “best guess” worldview, we’d primarily be useful to donors with the same best guess; if we do work corresponding to all of the worldviews we find highly compelling, we’ll be useful to any donor whose values and approach are broadly similar to ours. That’s a big difference: I believe there are many people with fundamentally similar values to ours, but different best guesses on some highly uncertain but fundamental questions—for example, how to value reducing global catastrophic risks vs. accelerating scientific research vs. improving policy.
With worldview diversification, we can hope to appeal to—and be referred to—any donor looking to maximize the positive impact of their giving. Over the long run, I think this means we have good prospects for making many connections via word-of-mouth, helping many donors give more effectively, and affecting the general dialogue around philanthropy.
Other benefits to worldview diversification
Worldview diversification means working on a variety of causes that differ noticeably from each other. There are a number of practical benefits to this.
We can use lessons learned in one area to improve our work in another. For example:
Some of the causes we work in are very neglected and “thin,” in the sense that there are few organizations working on them. Others were chosen for reasons other than neglectedness, and have many organizations working on them. Understanding the latter can give us a sense for what kinds of activities we might hope to eventually support in the former.
Some of the causes we work on involve very long-term goals with little in the way of intermediate feedback (this tends to be true of efforts to reduce global catastrophic risks). In other causes, we can more readily expect to see progress and learn from our results (for example, criminal justice reform, which we selected largely for its tractability).
Different causes have different cultures, and by working in a number of disparate ones, we work with a number of Program Officers whose different styles and approaches can inform each other.
It is easier for casual observers (such as the press) to understand our values and motivations. Some of the areas we work in are quite unconventional for philanthropy, and we’ve sometimes come across people who question our motivations. By working in a broad variety of causes, some of which are easier to see the case for than others, we make it easier for casual observers to discern the pattern behind our choices and get an accurate read on our core values. Since media coverage affects many people’s preconceptions, this benefit could make a substantial long-term difference to our brand and credibility.
Over the long run, our actual impact will better approximate our expected impact. Our hits-based giving approach means that in many cases, we’ll put substantial resources into a cause even though we think it’s more likely than not that we’ll fail to have any impact. (Potential risks from artificial intelligence is one such cause.) If we put all our resources behind our best-guess worldview, we might never have any successful grants even if we make intelligent, high-expected-value grants. Conversely, we might “get lucky” and appear far more reliably correct and successful than we actually are. In either case, our ability to realistically assess our own track record, and learn from it, is severely limited. Others’ ability to assess our work, in order to decide how much weight they should put on our views, is as well.
Worldview diversification lessens this problem, to a degree. If we eventually put substantial resources into ten very different causes, then we can reasonably hope to get one or more “hits” even if each cause is a long shot. If we get no “hits,” we have some evidence that we’re doing something wrong, and if we get one or more, this is likely to help our credibility.
We’re still ultimately making a relatively small number of “bets,” and there are common elements to the reasoning and approach we bring to each, so the benefit we get on this front is limited.
Morale and recruiting. Working in a variety of causes makes our organization a more interesting place to work. It means that our work remains exciting and motivating even as our views and our “best guesses” shift, and even when there is little progress on a particular cause for a long time. It means that our work resonates with more people, broadening the community we can engage with positively. This point wouldn’t be enough by itself to make the case for worldview diversification, but it is a factor in my mind, and I’d be remiss not to mention it.
When and how should one practice worldview diversification?
As discussed above, the case for worldview diversification relies heavily on two factors: (a) we have high uncertainty and find multiple worldviews highly plausible; (b) there would be strongly diminishing returns if we put all our resources behind any one worldview. Some of the secondary benefits discussed in the previous section are also specific to a public-facing organization with multiple staff. I think worldview diversification makes sense for relatively large funders, especially those with the opportunity to have a transformative impact according to multiple different highly appealing worldviews. I do not think it makes sense for an individual giving $100 or even $100,000 per year. I also do not think it makes sense for someone who is highly confident that one cause is far better than the rest.
We haven’t worked out much detail regarding the “how” of worldview diversification. In theory, one might be able to develop a formal approach that accounts for both the direct benefits of each potential grant and the myriad benefits of worldview diversification in order to arrive at conclusions about how much to allocate to each cause. One might also incorporate considerations like “I’m not sure whether worldviews A and B are commensurate or not; there’s an X% chance they are, in which case we should allocate one way, and a Y% chance they aren’t, in which case we should allocate another way.” But while we’ve discussed these sorts of issues, we haven’t yet come up with a detailed framework along these lines. Nor have we thoroughly reflected on, and explicitly noted, which specific worldviews we find most compelling and how we weigh them against each other.
We will likely put in more effort on this front in the coming year, though it won’t necessarily lead to a complete or satisfying account of our views and framework. For now, some very brief notes on our practices to date:
Currently, we tend to invest resources in each cause up to the point where it seems like there are strongly diminishing returns, or the point where it seems the returns are clearly worse than what we could achieve by reallocating the resources—whichever comes first. A bit more specifically:
In terms of staff capacity, so far it seems to me that there is a huge benefit to having one full-time staffer working on a given cause, supported by 1-3 other staff who spend enough time on the cause to provide informed feedback. Allocating additional staff beyond this seems generally likely to have rapidly diminishing returns, though we are taking a case-by-case approach and allocating additional staff to a cause when it seems like this could substantially improve our grantmaking.
In terms of money, so far we have tried to roughly benchmark potential grants against direct cash transfers; when it isn’t possible to make a comparison, we’ve often used heuristics such as “Does this grant seem reasonably likely to substantially strengthen an important aspect of the community of people/organizations working on this cause?” as a way to very roughly and intuitively locate the point of strongly diminishing returns. We tend to move forward with any grant that we understand the case for reasonably well and that seems—intuitively, heuristically—strong by the standards of its cause/associated worldview (and appears at least reasonably likely, given our high uncertainty, to be competitive with grants in other causes/worldviews, including cash transfers). For causes that seem particularly promising, and/or neglected (such that we can be particularly transformative in them), we use the lower bar of funding “reasonably strong” opportunities; for other causes, we tend more to look for “very strong” opportunities. This approach is far from ideal, but has the advantage that it is fairly easy to execute in practice, given that we currently have enough resources to move forward with all grants fitting these descriptions.
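Purely as an illustration of the heuristic just described, the logic might be caricatured as follows; the bar labels and boolean inputs are stand-in simplifications for what is, in practice, an intuitive, case-by-case judgment rather than a mechanical rule:

```python
def clears_bar(case_understood: bool, strength: str, especially_promising_or_neglected: bool) -> bool:
    """Caricature of the funding heuristic described above (not a rule applied mechanically).

    strength is an intuitive rating of the grant by the standards of its own cause/worldview:
    "weak", "reasonably strong", or "very strong".
    """
    if not case_understood:
        return False
    required = "reasonably strong" if especially_promising_or_neglected else "very strong"
    scale = ["weak", "reasonably strong", "very strong"]
    return scale.index(strength) >= scale.index(required)

print(clears_bar(True, "reasonably strong", True))    # True: lower bar in especially promising/neglected causes
print(clears_bar(True, "reasonably strong", False))   # False: elsewhere we look for "very strong" opportunities
```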
As noted above, we hope to put more thought into these issues in the coming year. Ideas for more principled, systematic ways of practicing worldview diversification would be very interesting to us.
Footnotes

[1] One might fully accept total utilitarianism, plus the argument in Astronomical Waste, as well as some other premises, and believe that work on global catastrophic risks has far higher expected value than work on other causes.

One might accept total utilitarianism and the idea that the moral value of the far future overwhelms other considerations—but also believe that our impact on the far future is prohibitively hard to understand and predict, and that the right way to handle radical uncertainty about our impact is to instead focus on improving the world in measurable, robustly good ways. This view could be consistent with a number of different opinions about which causes are most worth working on.

One might put some credence in total utilitarianism and some credence in the idea that we have special duties to persons who live in today’s society, suffer unjustly, and can benefit tangibly and observably from our actions. Depending on how one handles the “normative uncertainty” between the two, this could lead to a variety of different conclusions about which causes to prioritize.

Any of the above could constitute a “worldview” as I’ve defined it. Views about the moral weight of animals vs. humans could additionally complicate the points above.

[2] Specifically, our best guess about which worldview or combination of worldviews is most worth operating on in order to accomplish as much good as possible. This isn’t necessarily the same as which worldview is most likely to represent a set of maximally correct beliefs, values and approaches; it could be that a particular worldview is only 20% likely to represent a set of maximally correct beliefs, values, and approaches, but that if it does, following it would lead to >100x the positive impact of following any other worldview. If such a thing were true (and knowable), then this would be the best worldview to operate on.

[3] (Bayesian adjustments should attenuate this difference to some degree, though it’s unclear how much, if you believe—as I do—that both estimates are fairly informed and reasonable though far from precise or reliable. I will put this consideration aside here.)

[4] Specifically, our best guess about which worldview is most worth operating on in order to accomplish as much good as possible. This isn’t necessarily the same as which worldview is most likely to represent a set of maximally correct beliefs, values and approaches; it could be that a particular worldview is only 20% likely to represent a set of maximally correct beliefs, values, and approaches, but that if it does, following it would lead to >100x the positive impact of following any other worldview. If such a thing were true (and knowable), then this would be the best worldview to operate on.
[5] Specifically, say X is the amount of good we could accomplish by investing $Y in any of the nine worldviews other than the “best” one, and imagine that $Y is around the point of diminishing returns where investing 10x as much only accomplishes 2x as much good. This would then imply that putting $Y into each of the ten worldviews would have good accomplished equal to 14X (5X for the “best” one, X for each of the other nine), while putting $10Y into the “best” worldview would have good accomplished equal to 10X. So the diversified approach is about 1.4x as good by these assumptions.

[6] For example, say we return to the above hypothetical (see previous footnote) but also imagine that our estimates of the worldviews’ value include some mistakes, such that an unknown one of the ten actually has 1000X value and another unknown one actually has 0 value at the relevant margin. (The diminishing returns continue to work the same way.) Then putting $Y into each of the ten worldviews would have good accomplished equal to at least 1008X while putting $10Y into the “best” worldview would have good accomplished equal to about 208X (the latter is 2 × (10%×1000X + 10%×0X + 80%×5X)). While in the previous case the diversified approach looked about 1.4x as good, here it looks nearly 5x as good.
[8] I’m using the term “people” for simplicity, though in theory I could imagine extending the analysis in this section to the value systems of animals etc.

[9] I recognize that this setup has some differences with the well-known “veil of ignorance” proposed by Rawls, but still think it is useful for conveying intuitions in this case.
Worldview Diversification
Link post
In principle, we try to find the best giving opportunities by comparing many possibilities. However, many of the comparisons we’d like to make hinge on very debatable, uncertain questions.
For example:
Some people think that animals such as chickens have essentially no moral significance compared to that of humans; others think that they should be considered comparably important, or at least 1-10% as important. If you accept the latter view, farm animal welfare looks like an extraordinarily outstanding cause, potentially to the point of dominating other options: billions of chickens are treated incredibly cruelly each year on factory farms, and we estimate that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. But if you accept the former view, this work is arguably a poor use of money.
Some have argued that the majority of our impact will come via its effect on the long-term future. If true, this could be an argument that reducing global catastrophic risks has overwhelming importance, or that accelerating scientific research does, or that improving the overall functioning of society via policy does. Given how difficult it is to make predictions about the long-term future, it’s very hard to compare work in any of these categories to evidence-backed interventions serving the global poor.
We have additional uncertainty over how we should resolve these sorts of uncertainty. We could try to quantify our uncertainties using probabilities (e.g. “There’s a 10% chance that I should value chickens 10% as much as humans”), and arrive at a kind of expected value calculation for each of many broad approaches to giving. But most of the parameters in such a calculation would be very poorly grounded and non-robust, and it’s unclear how to weigh calculations with that property. In addition, such a calculation would run into challenges around normative uncertainty (uncertainty about morality), and it’s quite unclear how to handle such challenges.
In this post, I’ll use “worldview” to refer to a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving. One worldview might imply that evidence-backed charities serving the global poor are far more worthwhile than either of the types of giving discussed above; another might imply that farm animal welfare is; another might imply that global catastrophic risk reduction is. A given worldview represents a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty, as laid out in the third bullet point above). Some slightly more detailed descriptions of example worldviews are in a footnote.[1]
A challenge we face is that we consider multiple different worldviews plausible. We’re drawn to multiple giving opportunities that some would consider outstanding and others would consider relatively low-value. We have to decide how to weigh different worldviews, as we try to do as much good as possible with limited resources.
When deciding between worldviews, there is a case to be made for simply taking our best guess[2] and sticking with it. If we did this, we would focus exclusively on animal welfare, or on global catastrophic risks, or global health and development, or on another category of giving, with no attention to the others. However, that’s not the approach we’re currently taking.
Instead, we’re practicing worldview diversification: putting significant resources behind each worldview that we find highly plausible. We think it’s possible for us to be a transformative funder in each of a number of different causes, and we don’t—as of today—want to pass up that opportunity to focus exclusively on one and get rapidly diminishing returns.
This post outlines the reasons we practice worldview diversification. In a nutshell:
I will first discuss the case against worldview diversification. When seeking to maximize expected positive impact, without being worried about the “risk” of doing no good, there is a case that we should simply put all available resources behind the worldview that our best-guess thinking favors.
I will then list several reasons for practicing worldview diversification, in situations where (a) we have high uncertainty and find multiple worldviews highly plausible; (b) there would be strongly diminishing returns if we put all our resources behind any one worldview.
First, under a set of basic assumptions including (a) and (b) above, worldview diversification can maximize expected value.
Second, if we imagined that different worldviews represented different fundamental values (not just different opinions, such that one would ultimately be “the right one” if we had perfect information), and that the people holding different values were trying to reach agreement on common principles behind a veil of ignorance (explained more below), it seems likely that they would agree to some form of worldview diversification as a desirable practice for anyone who ends up with outsized resources.
Practicing worldview diversification means developing staff capacity to work in many causes. This provides option value (the ability to adjust if our best-guess worldview changes over time). It also increases our long-run odds of having large effects on the general dialogue around philanthropy, since we can provide tangibly useful information to a larger set of donors.
There are a number of other practical benefits to working in a broad variety of causes, including the opportunity to use lessons learned in one area to improve our work in another; presenting an accurate public-facing picture of our values; and increasing the degree to which, over the long run, our expected impact matches our actual impact. (The latter could be beneficial for our own, and others’, ability to evaluate how we’re doing.)
Finally, I’ll briefly discuss the key conditions under which worldview diversification seems like a good idea, and give some rough notes on how we currently implement it in practice.
Note that worldview diversification is simply a broad term for putting significant resources behind multiple worldviews—it does not mean anything as specific as “divide resources evenly between worldviews.” This post discusses benefits of worldview diversification, without saying exactly how (or to what degree) one should allocate resources between worldviews. In the future, we hope to put more effort into reflecting on—and discussing—which specific worldviews we find most compelling and how we weigh them against each other.
Also note that this post focuses on deciding how to allocate resources between different plausible already-identified causes, not on the process for identifying promising causes.
The case against worldview diversification
It seems likely that if we had perfect information and perfect insight into our own values, we’d see that some worldviews are much better guides to giving than others. For a relatively clear example, consider GiveWell’s top charities vs. our work so far on farm animal welfare:
GiveWell estimates that its top charity (Against Malaria Foundation) can prevent the loss of one year of life for every $100 or so.
We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent.
If you value chicken life-years equally to human life-years, this implies that corporate campaigns do about 10,000x as much good per dollar as top charities. If you believe that chickens do not suffer in a morally relevant way, this implies that corporate campaigns do no good.[3]
One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x). If one values humans astronomically more, this still implies that top charities are a far better use of funds. It seems unlikely that the ratio would be in the precise, narrow range needed for these two uses of funds to have similar cost-effectiveness.
I think similar considerations broadly apply to other comparisons, such as reducing global catastrophic risks vs. improving policy, though quantifying such causes is much more fraught.
One might therefore imagine that there is some “best worldview” (if we had perfect information) that can guide us to do far more good than any of the others. And if that’s right, one might argue that we should focus exclusively on a “best guess worldview”[4] in order to maximize how much good we do in expected value terms. For example, if we think that one worldview seems 10,000x better than the others, even a 1-10% chance of being right would still imply that we can do much more good by focusing on that worldview.
This argument presumes that we are “risk neutral”: that our goal is only to maximize the expected value of how much good we do. That is, it assumes we are comfortable with the “risk” that we make the wrong call, put all of our resources into a misguided worldview, and ultimately accomplish very little. Being risk neutral to such a degree often seems strange to people who are used to investing metaphors: investors rarely feel that the possibility of doubling one’s money fully compensates for the possibility of losing it all, and they generally use diversification to reduce the variance of their returns (they aren’t just focused on expected returns). However, we don’t have the same reasons to fear failure that for-profit investors do. There are no special outsized consequences for “failing to do any good,” as there are for going bankrupt, so it’s a risk we’re happy to take as long as it’s balanced by the possibility of doing a great deal of good. The Open Philanthropy Project aims to be risk neutral in the way laid out here, though there are some other reasons (discussed below) that putting all our eggs in one basket could be problematic.
The case for worldview diversification
I think the case for worldview diversification largely hinges on a couple of key factors:
Strong uncertainty about which worldviews are most reasonable. We recognize that any given worldview might turn out to look misguided if we had perfect information—but even beyond that, we believe that any given worldview might turn out to look misguided if we reflected more rationally on the information that is available. In other words, we feel there are multiple worldviews that each might qualify for “what we should consider the best worldview to be basing our giving on, and the worldview that conceptually maximizes our expected value, if we thought more intelligently about the matter.” We could imagine someday finding any of these worldviews to be the best-seeming one. We feel this way partly because we see intelligent, reasonable people who are aware of the arguments for each worldview and still reject it.
Some people recognize that their best-guess worldview might be wrong, but still think that it is clearly the best to bet on in expected-value terms. For example, some argue that focusing on the far future is best even if there is a >99% chance that the arguments in favor of doing so are misguided, because the value of focusing on the far future is so great if the arguments turn out to be valid. In effect, these people seem to be leaving open no realistic possibility of changing their minds on this front. We have a different kind of uncertainty, that I find difficult to model formally, but that is probably something along the lines of cluster thinking. All things considered—including things like our uncertainty about our fundamental way of modeling expected value—I tend to think of the different plausible worldviews as being in the same ballpark of expected value.
Diminishing returns to putting resources behind any given worldview. When looking at a focus area such as farm animal welfare or potential risks from advanced AI, it seems to me that giving in the range of tens of millions of dollars per year (over the next decade or so) can likely fund the best opportunities, help relevant fields and disciplines grow, and greatly improve the chances that the cause pulls in other sources of funding (both private donors and governments). Giving much more than this would hit strongly diminishing returns. For causes like these, I might roughly quantify my intuition by saying that (at the relevant margin) giving 10x as much would only accomplish about 2x as much. (There are other causes where this dynamic does not apply nearly as much; for example, we don’t see much in the way of diminishing returns when it comes to supporting cash transfers to the global poor.)
With these two factors in mind, there are a number of arguments for worldview diversification.
Expected value
When accounting for strong uncertainty and diminishing returns, worldview diversification can maximize expected value even when one worldview looks “better” than the others in expectation. One way of putting this is that if we were choosing between 10 worldviews, and one were 5x as good as the other nine, investing all our resources in that one would—at the relevant margin, due to the “diminishing returns” point—be worse than spreading across the ten.[5]
I think this dynamic is enhanced by the fact that there is so much we don’t know, and any given worldview could turn out to be much better or much worse than it appears for subtle and unanticipated reasons, including those related to flow-through effects.[6]
It isn’t clear to me how much sense it makes to think in these terms. Part of our uncertainty about worldviews is our uncertainty about moral values: to a significant degree, different worldviews might be incommensurate, in that there is no meaningful way to compare “good accomplished” between them. Some explicit frameworks have been proposed for dealing with uncertainty between incommensurate moral systems,[7] but we have significant uncertainty about how useful these frameworks these are and how to use them.
Note that the argument in this section only holds for worldviews with reasonably similar overall expected value. If one believes that a particular worldview points to giving opportunities that are orders of magnitude better than others’, this likely outweighs the issue of diminishing returns.
The ethics of the “veil of ignorance”
Another case for worldview diversification derives from, in some sense, the opposite approach. Rather than thinking of different worldviews as different “guesses” at how to do the most good, such that each has an expected value and they are ultimately compared in the same terms, presume that different worldviews represent the perspectives of different people[8] with different, incommensurable values and frameworks. For example, it may be the case that some people care as deeply about animals as they do about people, while others don’t value animal welfare at all, and that no amount of learning or reflection would change any of this. When choosing between worldviews, we’re choosing which sorts of people we most identify and sympathize with, and we have strong uncertainty on the matter.
One way of thinking about the ethics of how people with different values should interact with each other is to consider a kind of veil of ignorance: imagine the agreements such people would come to about how they should use resources, if they were negotiating before knowing how much resources each of them would individually have available.[9] One such agreement might be: “If one of us ends up with access to vastly more resources than the others, that person should put some resources into the causes most important to each of us—up to some point of diminishing returns—rather than putting all the resources into that person’s own favorite cause.” Each person might accept (based on the diminishing returns model above) that if they end up with vastly more resources than the others, this agreement will end up making them worse off, but only by 50%; whereas if someone else ends up with vastly more resources, this agreement will end up making them far better off.
This is only a rough outline of what an appealing principle might look like. Additional details might be added, such as “The person with outsized resources should invest more in areas where they can be more transformative, e.g. in more neglected areas.”
We see multiple appealing worldviews that seem to have relatively few resources behind them, and we have the opportunity to have a transformative impact according to multiple such worldviews. Taking this opportunity is the ethical thing to do in the sense that it reflects an agreement we would have made under a “veil of ignorance,” and it means that we can improve the world greatly according to multiple different value sets that we feel uncertain between. I think that considering and putting weight on “veil of ignorance” based ethical concerns such as this one is a generally good heuristic for consequentialists and non-consequentialists alike, especially when one does not have a solid framework for comparing “expected good accomplished” across different options.
Capacity building and option value
Last year, we described our process of capacity building:
“Our goals, and our efforts, have revolved around (a) selecting focus areas; (b) hiring people to lead our work in these areas (see our most recent update); (c) most recently, working intensively with new hires and trial hires on their early proposed grant recommendations.
Collectively, we think of these activities as capacity building. If we succeed, the end result will be an expanded team of people who are (a) working on well-chosen focus areas; (b) invested (justifiably) with a great deal of trust and autonomy; (c) capable of finding many great giving opportunities in the areas they’re working on.”
In addition to building internal capacity (staff), we are hoping to support the growth of the fields we work in, and to gain knowledge over time that makes us more effective at working in each cause. Collectively, all of this is “capacity building” in the sense that it will, in the long run, improve our ability to give effectively at scale. There are a number of benefits to building capacity in a variety of causes that are appealing according to different worldviews (i.e., to building capacity in criminal justice reform, farm animal welfare, biosecurity and pandemic preparedness and more).
One benefit is option value. Over time, we expect that our thinking on which worldviews are most appealing will evolve. For example, I recently discussed three key issues I’ve changed my mind about over the last several years, with major implications for how promising I find different causes. It’s very possible that ten years from now, some particular worldview (and its associated causes) will look much stronger to us than the others—and that it won’t match our current best guess. If this happens, we’ll be glad to have invested in years of capacity building so we can quickly and significantly ramp up our support.
Another long-term benefit is that we can be useful to donors with diverse worldviews. If we worked exclusively in causes matching our “best guess” worldview, we’d primarily be useful to donors with the same best guess; if we do work corresponding to all of the worldviews we find highly compelling, we’ll be useful to any donor whose values and approach are broadly similar to ours. That’s a big difference: I believe there are many people with fundamentally similar values to ours, but different best guesses on some highly uncertain but fundamental questions—for example, how to value reducing global catastrophic risks vs. accelerating scientific research vs. improving policy.
With worldview diversification, we can hope to appeal to—and be referred to—any donor looking to maximize the positive impact of their giving. Over the long run, I think this means we have good prospects for making many connections via word-of-mouth, helping many donors give more effectively, and affecting the general dialogue around philanthropy.
Other benefits to worldview diversification
Worldview diversification means working on a variety of causes that differ noticeably from each other. There are a number of practical benefits to this.
We can use lessons learned in one area to improve our work in another. For example:
Some of the causes we work in are very neglected and “thin,” in the sense that there are few organizations working on them. Others were chosen for reasons other than neglectedness, and have many organizations working on them. Understanding the latter can give us a sense for what kinds of activities we might hope to eventually support in the former.
Some of the causes we work on involve very long-term goals with little in the way of intermediate feedback (this tends to be true of efforts to reduce global catastrophic risks). In other causes, we can more expect to see progress and learn from our results (for example, criminal justice reform, which we selected largely for its tractability).
Different causes have different cultures, and by working in a number of disparate ones, we work with a number of Program Officers whose different styles and approaches can inform each other.
It is easier for casual observers (such as the press) to understand our values and motivations. Some of the areas we work in are quite unconventional for philanthropy, and we’ve sometimes come across people who question our motivations. By working in a broad variety of causes, some of which are easier to see the case for than others, we make it easier for casual observers to discern the pattern behind our choices and get an accurate read on our core values. Since media coverage affects many people’s preconceptions, this benefit could make a substantial long-term difference to our brand and credibility.
Over the long run, our actual impact will better approximate our expected impact. Our hits-based giving approach means that in many cases, we’ll put substantial resources into a cause even though we think it’s more likely than not that we’ll fail to have any impact. (Potential risks from artificial intelligence is one such cause.) If we put all our resources behind our best-guess worldview, we might never have any successful grants even if we make intelligent, high-expected-value grants. Conversely, we might “get lucky” and appear far more reliably correct and successful than we actually are. In either case, our ability to realistically assess our own track record, and learn from it, is severely limited. Others’ ability to assess our work, in order to decide how much weight they should put on our views, is as well.
Worldview diversification lessens this problem, to a degree. If we eventually put substantial resources into ten very different causes, then we can reasonably hope to get one or more “hits” even if each cause is a long shot. If we get no “hits,” we have some evidence that we’re doing something wrong, and if we get one or more, this is likely to help our credibility.
We’re still ultimately making a relatively small number of “bets,” and there are common elements to the reasoning and approach we bring to each, so the benefit we get on this front is limited.
Morale and recruiting. Working in a variety of causes makes our organization a more interesting place to work. It means that our work remains exciting and motivating even as our views and our “best guesses” shift, and even when there is little progress on a particular cause for a long time. It means that our work resonates with more people, broadening the community we can engage with positively. This point wouldn’t be enough by itself to make the case for worldview diversification, but it is a factor in my mind, and I’d be remiss not to mention it.
When and how should one practice worldview diversification?
As discussed above, the case for worldview diversification relies heavily on two factors: (a) we have high uncertainty and find multiple worldviews highly plausible; (b) there would be strongly diminishing returns if we put all our resources behind any one worldview. Some of the secondary benefits discussed in the previous section are also specific to a public-facing organization with multiple staff. I think worldview diversification makes sense for relatively large funders, especially those with the opportunity to have a transformative impact according to multiple different highly appealing worldviews. I do not think it makes sense for an individual giving $100 or even $100,000 per year. I also do not think it makes sense for someone who is highly confident that one cause is far better than the rest.
We haven’t worked out much detail regarding the “how” of worldview diversification. In theory, one might be able to develop a formal approach that accounts for both the direct benefits of each potential grant and the myriad benefits of worldview diversification in order to arrive at conclusions about how much to allocate to each cause. One might also incorporate considerations like “I’m not sure whether worldviews A and B are commensurate or not; there’s an X% chance they are, in which case we should allocate one way, and a Y% chance they aren’t, in which case we should allocate another way.” But while we’ve discussed these sorts of issues, we haven’t yet come up with a detailed framework along these lines. Nor have we thoroughly reflected on, and explicitly noted, which specific worldviews we find most compelling and how we weigh them against each other.
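To make the preceding paragraph slightly more concrete, here is a minimal sketch of what such a formal approach might look like. Everything in it is invented for illustration: the worldview names, the per-dollar value figures, the probability of commensurability, and the two allocation rules. It ignores diminishing returns, blends the two cases in one simple (and debatable) way, and is not a framework we actually use.

```python
# Purely illustrative sketch of the kind of formal approach described above.
# All numbers, names, and allocation rules are hypothetical.

budget = 100.0  # total resources to allocate (arbitrary units)

# Hypothetical expected good per unit of money, if the worldviews share a common scale.
value_per_dollar = {"worldview_A": 5.0, "worldview_B": 1.0}

p_commensurate = 0.6  # invented probability that A and B can be compared on one scale

def allocation_if_commensurate(values, budget):
    # If the worldviews share a scale, one simple rule is to back the
    # higher-value worldview entirely (diminishing returns ignored here).
    best = max(values, key=values.get)
    return {w: (budget if w == best else 0.0) for w in values}

def allocation_if_incommensurate(values, budget):
    # If they can't be compared, one simple rule is an even split.
    share = budget / len(values)
    return {w: share for w in values}

# One simple way to combine the two cases: blend the allocations by probability.
alloc_c = allocation_if_commensurate(value_per_dollar, budget)
alloc_i = allocation_if_incommensurate(value_per_dollar, budget)
blended = {w: p_commensurate * alloc_c[w] + (1 - p_commensurate) * alloc_i[w]
           for w in value_per_dollar}

print(blended)  # {'worldview_A': 80.0, 'worldview_B': 20.0}
```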
We will likely put in more effort on this front in the coming year, though it won’t necessarily lead to a complete or satisfying account of our views and framework. For now, some very brief notes on our practices to date:
Currently, we tend to invest resources in each cause up to the point where it seems like there are strongly diminishing returns, or the point where it seems the returns are clearly worse than what we could achieve by reallocating the resources—whichever comes first. A bit more specifically:
In terms of staff capacity, so far it seems to me that there is a huge benefit to having one full-time staffer working on a given cause, supported by 1-3 other staff who spend enough time on the cause to provide informed feedback. Allocating additional staff beyond this seems generally likely to have rapidly diminishing returns, though we are taking a case-by-case approach and allocating additional staff to a cause when it seems like this could substantially improve our grantmaking.
In terms of money, so far we have tried to roughly benchmark potential grants against direct cash transfers; when a comparison isn’t possible, we’ve often used heuristics such as “Does this grant seem reasonably likely to substantially strengthen an important aspect of the community of people/organizations working on this cause?” as a way to very roughly and intuitively locate the point of strongly diminishing returns. We tend to move forward with any grant that we understand the case for reasonably well and that seems, intuitively and heuristically, strong by the standards of its cause/associated worldview (and appears at least reasonably likely, given our high uncertainty, to be competitive with grants in other causes/worldviews, including cash transfers). For causes that seem particularly promising and/or neglected (such that we can be particularly transformative in them), we use the lower bar of funding “reasonably strong” opportunities; for other causes, we hold out for “very strong” opportunities. This approach is far from ideal, but it has the advantage of being fairly easy to execute in practice, given that we currently have enough resources to move forward with all grants fitting these descriptions. (A rough sketch of this screening heuristic appears below.)
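Here is that sketch. The inputs, ratings, and thresholds are entirely hypothetical; it captures only the rough shape of the heuristic described above, not our actual decision process.

```python
# Illustrative sketch of the screening heuristic described above.
# The ratings, inputs, and bar levels are all hypothetical.

def should_fund(grant_strength: str,
                cause_particularly_promising_or_neglected: bool,
                plausibly_competitive_with_cash_transfers: bool) -> bool:
    """grant_strength is a rough intuitive rating: 'weak', 'reasonably strong', or 'very strong'."""
    if not plausibly_competitive_with_cash_transfers:
        # Fails the rough benchmark against direct cash transfers.
        return False
    if cause_particularly_promising_or_neglected:
        # Lower bar: fund "reasonably strong" (or better) opportunities.
        return grant_strength in ("reasonably strong", "very strong")
    # Higher bar: hold out for "very strong" opportunities.
    return grant_strength == "very strong"

print(should_fund("reasonably strong", True, True))   # True
print(should_fund("reasonably strong", False, True))  # False
```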
As noted above, we hope to put more thought into these issues in the coming year. Ideas for more principled, systematic ways of practicing worldview diversification would be very interesting to us.
One might fully accept total utilitarianism, plus the argument in Astronomical Waste, as well as some other premises, and believe that work on global catastrophic risks has far higher expected value than work on other causes.
One might accept total utilitarianism and the idea that the moral value of the far future overwhelms other considerations—but also believe that our impact on the far future is prohibitively hard to understand and predict, and that the right way to handle radical uncertainty about our impact is to instead focus on improving the world in measurable, robustly good ways. This view could be consistent with a number of different opinions about which causes are most worth working on.
One might put some credence in total utilitarianism and some credence in the idea that we have special duties to persons who live in today’s society, suffer unjustly, and can benefit tangibly and observably from our actions. Depending on how one handles the “normative uncertainty” between the two, this could lead to a variety of different conclusions about which causes to prioritize.
Any of the above could constitute a “worldview” as I’ve defined it. Views about the moral weight of animals vs. humans could additionally complicate the points above.
Specifically, our best guess about which worldview or combination of worldviews is most worth operating on in order to accomplish as much good as possible. This isn’t necessarily the same as which worldview is most likely to represent a set of maximally correct beliefs, values and approaches; it could be that a particular worldview is only 20% likely to represent a set of maximally correct beliefs, values, and approaches, but that if it does, following it would lead to >100x the positive impact of following any other worldview. If such a thing were true (and knowable), then this would be the best worldview to operate on.
(Bayesian adjustments should attenuate this difference to some degree, though it’s unclear how much, if you believe—as I do—that both estimates are fairly informed and reasonable though far from precise or reliable. I will put this consideration aside here.)
Specifically, our best guess about which worldview is most worth operating on in order to accomplish as much good as possible. This isn’t necessarily the same as which worldview is most likely to represent a set of maximally correct beliefs, values and approaches; it could be that a particular worldview is only 20% likely to represent a set of maximally correct beliefs, values, and approaches, but that if it does, following it would lead to >100x the positive impact of following any other worldview. If such a thing were true (and knowable), then this would be the best worldview to operate on.
Specifically, say X is the amount of good we could accomplish by investing $Y in any of the nine worldviews other than the “best” one, and 5X is the amount of good we could accomplish by investing $Y in the “best” one. Imagine further that $Y is around the point of strongly diminishing returns, such that investing 10x as much only accomplishes 2x as much good. This would then imply that putting $Y into each of the ten worldviews would accomplish good equal to 14X (5X for the “best” one, X for each of the other nine), while putting $10Y into the “best” worldview would accomplish good equal to 10X. So the diversified approach is about 1.4x as good under these assumptions.
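The arithmetic in this footnote, rendered as a short calculation using the same hypothetical figures:

```python
# Reproduces the hypothetical calculation in this footnote.
X = 1.0                   # good accomplished per $Y in each of the nine other worldviews
best_at_Y = 5 * X         # good accomplished per $Y in the "best" worldview
diminish = 2              # investing 10x as much accomplishes only 2x as much good

diversified = best_at_Y + 9 * X      # $Y into each of the ten worldviews -> 14X
concentrated = diminish * best_at_Y  # $10Y into the "best" worldview     -> 10X
print(diversified, concentrated, diversified / concentrated)  # 14.0 10.0 1.4
```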
For example, say we return to the above hypothetical (see previous footnote) but also imagine that our estimates of the worldviews’ value include some mistakes, such that an unknown one of the ten actually has 1000X value and another unknown one actually has 0 value at the relevant margin. (The diminishing returns continue to work the same way.) Then putting $Y into each of the ten worldviews would accomplish good equal to at least 1008X, while putting $10Y into the “best” worldview would accomplish expected good equal to about 208X (the latter is 2*(10%*1000X + 10%*0X + 80%*5X)). While in the previous case the diversified approach looked about 1.4x as good, here it looks nearly 5x as good.
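Likewise, the arithmetic for this footnote, again using the hypothetical figures given above:

```python
# Reproduces the hypothetical calculation in this footnote: one unknown worldview
# is actually worth 1000X per $Y and another is actually worth 0 at the margin.
X = 1.0
diminish = 2  # investing 10x as much accomplishes only 2x as much good

# Diversified: $Y into each of the ten worldviews. The worst case is when the
# estimated-"best" worldview is itself one of the two mis-estimated ones.
diversified_worst_case = 1000 * X + 0 * X + 8 * X
print(diversified_worst_case)  # 1008.0

# Concentrated: $10Y into the estimated-"best" worldview, which (by assumption)
# has a 10% chance of truly being worth 1000X per $Y, a 10% chance of being
# worth 0, and an 80% chance of being worth 5X per $Y.
concentrated_expected = diminish * (0.1 * 1000 * X + 0.1 * 0 * X + 0.8 * 5 * X)
print(concentrated_expected)                           # 208.0
print(diversified_worst_case / concentrated_expected)  # ~4.85 -> nearly 5x as good
```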
For example, see MacAskill 2014.
I’m using the term “people” for simplicity, though in theory I could imagine extending the analysis in this section to the value systems of animals etc.
I recognize that this setup has some differences with the well-known “veil of ignorance” proposed by Rawls, but still think it is useful for conveying intuitions in this case.