I was thinking about this recently too, and vaguely remember it being discussed somewhere and would appreciate a link myself.
To answer the question, here’s a rationale for diversification, illustrated in the picture below (which I just whipped up).
Imagine you have two causes where you believe their cost-effectiveness trajectories cross at some point. Cause A does more good per unit resources than cause B at the start but hits diminishing marginal returns faster than B. Suppose you have enough resources to get to the crossover point. What do you do? Well, you fund A up to that point, then switch to B. Hey presto, you’re doing the most good by diversifying.
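To make the toy example concrete, here’s a minimal numerical sketch; the curves and numbers are entirely made up for illustration:

```python
import numpy as np

# Hypothetical marginal cost-effectiveness curves (made up): good done per
# extra unit of resources, as a function of resources already spent.
# A starts higher than B but hits diminishing returns faster.
def marginal_returns_A(spent):
    return 10 * np.exp(-spent / 20)

def marginal_returns_B(spent):
    return 6 * np.exp(-spent / 100)

# The crossover: the first point where an extra unit to B (still unfunded,
# so at its initial margin) does more good than another unit to A.
spend = np.linspace(0, 200, 2001)
crossover = spend[np.argmax(marginal_returns_A(spend) < marginal_returns_B(0))]
print(f"Fund A up to ~{crossover:.1f} units of resources, then switch to B")
```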
This scenario seems somewhat plausible in reality. Notice it’s a justification for diversification that doesn’t rely on appeals to uncertainty, either epistemic or moral. Adding empirical uncertainty doesn’t change the picture: empirical uncertainty basically means you should draw fuzzy lines instead of precise ones, and it’ll be less clear when you hit the crossover.
What’s confusing for me about the worldview diversification post is that it seems to run together two justifications for, in practice, diversifying (i.e. supporting more than one thing) that are very different in nature.
One justification for diversification is based on this view about ‘crossovers’ illustrated above: basically, Open Phil has so much money, they can fund stuff in one area to the point of crossover, then start funding something else. Here, you diversify because you can compare different causes in common units and you so happen to hit crossovers. Call this “single worldview diversification” (SWD).
The other seems to rely on the idea that there are different “worldviews” (some combination of beliefs about morality and the facts) which are, in some important way, incommensurable: you can’t stick things into the same units. You might think Utilitarianism and Kantianism are incommensurable in this way: they just don’t talk in the same ethical terms. Apples ‘n’ oranges. In the EA case, one might think the “worldviews” needed to e.g. compare the near-term to the long-term are, in some relevant sense, incommensurable—I won’t try to explain that here, but may have a stab at it in another post. Here, you might think you can’t (sensibly) compare different causes in common units. What should you do? Well, maybe you give each of them some of your total resources, rather than giving it all to one. How much do you give each? This is a bit fishy, but one might do it on the basis of how likely you think each cause is really the best (leaving aside the awkward fact you’ve already said you don’t think you can compare them). So if you’re totally unsure, each gets 50%. Call this “multiple worldview diversification” (MWD).*
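For concreteness, here’s a minimal sketch of that MWD allocation rule with made-up credences; this is just the “split by credence” idea above, not anyone’s actual method:

```python
# Made-up credences that each worldview is 'really the best'. Under the
# rule above, each worldview's budget share simply equals its credence,
# so total uncertainty between two worldviews gives a 50/50 split.
credences = {"near-term": 0.5, "long-term": 0.5}
budget = 1_000_000

allocation = {view: p * budget for view, p in credences.items()}
print(allocation)  # {'near-term': 500000.0, 'long-term': 500000.0}
```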
Spot the difference: the first justification for diversification comes because you can compare causes, the second because you can’t. I’m not sure if anyone has pointed this out before.
*I think MWD is best understood as an approach to dealing with moral and/or empirical uncertainty. Depending on the type of uncertainty at hand, there are extant responses to the problem that I won’t go into here. One quick example: for moral uncertainty, you might opt for ‘my favourite theory’ and give everything to the theory in which you have the most credence; see Bykvist (2017) for a good summary article on moral uncertainty.
> Imagine you have two causes where you believe their cost-effectiveness trajectories cross at some point. Cause A does more good per unit resources than cause B at the start but hits diminishing marginal returns faster than B. Suppose you have enough resources to get to the crossover point. What do you do? Well, you fund A up to that point, then switch to B. Hey presto, you’re doing the most good by diversifying.
It might be helpful to draw a dashed horizontal line at B’s maximum (initial) marginal value, since you would fund A at least until that line intersects A’s curve, and start funding B from there (though possibly switching thereafter, maybe even back and forth). Basically, you want to start funding B once the marginal returns from A are lower than the marginal returns from B. Whether you fund B at all doesn’t actually depend on B hitting diminishing marginal returns more slowly, only on A’s marginal returns eventually falling below B’s initial marginal returns before you exhaust your budget.
If you’re including more than just A and B, and A’s marginal expected returns eventually fall below B’s initial marginal expected returns before you would exhaust your budget on A, then we can still at least say it wouldn’t be optimal to exhaust your budget on A (though possibly you would exhaust it on B, if B also started with better marginal returns, or on some completely different option(s)).
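A toy greedy allocator makes the rule explicit: at each small increment, fund whichever option currently has the highest marginal return, and any switching (including back and forth) falls out automatically. The curves are invented for illustration:

```python
import numpy as np

# Invented diminishing marginal-return curves: return per extra dollar,
# given the amount already allocated to that option.
options = {
    "A": lambda spent: 10 * np.exp(-spent / 20),
    "B": lambda spent: 6 * np.exp(-spent / 100),
    "C": lambda spent: 4 * np.exp(-spent / 300),
}

def greedy_allocate(budget, step=1.0):
    """Spend in small steps, always on the currently best margin."""
    spent = {name: 0.0 for name in options}
    remaining = budget
    while remaining > 0:
        best = max(options, key=lambda name: options[name](spent[name]))
        spent[best] += step
        remaining -= step
    return spent

print(greedy_allocate(budget=300))  # e.g. {'A': ..., 'B': ..., 'C': ...}
```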
I’m not sure if you’re disagreeing with my toy examples, or elaborating on the details—I think the latter.
It isn’t clear what you meant by crossover point. I assumed it was where the curves intersect, but if you did mean where A’s curve reaches the maximum value of B’s, then it’s fine.
> Adding empirical uncertainty doesn’t change the picture: empirical uncertainty basically means you should draw fuzzy lines instead of precise ones, and it’ll be less clear when you hit the crossover.
If the uncertainty is precisely quantified (no imprecise probabilities), and the expected returns of each option depend only on how much you fund that option (and not on how much you fund others), then you can just use the expected value functions.
Right. You’d have a fuzzy line to represent the confidence interval of ex post value, but you would still have a precise line that represented the expected value.
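To illustrate, with a made-up model and distribution: drawing the uncertain parameter many times gives a fuzzy band of ex post value, but averaging over the draws still yields one precise expected-value curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up model: total value of spending x, with an uncertain effectiveness
# scale k. Ex post value varies with k; the expectation over k is still a
# single precise curve.
def value(x, k):
    return k * (1 - np.exp(-x / 50))  # diminishing total returns

x = np.linspace(0, 200, 201)
k_draws = rng.normal(loc=10, scale=3, size=10_000)  # empirical uncertainty

samples = value(x[None, :], k_draws[:, None])    # ex post value, per draw
expected = samples.mean(axis=0)                  # the precise EV line
band_lo, band_hi = np.percentile(samples, [5, 95], axis=0)  # the fuzzy band

print(f"EV at x=100: {expected[100]:.1f} "
      f"(90% interval: {band_lo[100]:.1f} to {band_hi[100]:.1f})")
```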