# Even Allocation Strategy under High Model Ambiguity

Summary of summary: This is a summary of *The 1/​N investment strategy is optimal under high model ambiguity* by Georg Ch. Pflug, Alois Pichler, and David Wozabal (pdfs here and here), with some additional implications for EAs. In short: when model uncertainty/​ambiguity is similarly high for all options in a set under consideration, whether that’s investments or donations, allocating evenly between the options is approximately robustly optimal, in the sense of maximizing the minimum expected value.

Furthermore, “doing nothing”/​”not investing” is just one option among our multiple options, and if it’s equally ambiguous, it will only make up 1/​Nth of the optimal portfolio. This is an argument against paralysis, i.e. doing nothing, when faced with complex cluelessness.

### Their result

Take a set of stocks (or other possible investments or classes of investments), each with a reference probability distribution for possible returns/​losses (or, allowing for dependence, random variables on the same measure space): your first guesses. Suppose you’re not actually confident in these first guesses, so you also entertain all distributions that are within a distance ε of their joint distribution; ε is a measure of ambiguity. Their result is basically that as ε approaches infinity, allocating your money uniformly across investments, i.e. 1/​N of your money to each of N stocks, becomes optimal for maximizing the minimum expected return (maxmin expected utility), a rule for decision-making under deep uncertainty for those who are ambiguity-averse.
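As a toy illustration of the maxmin rule (not from the paper: the candidate models below are made-up stand-ins for a finite sample from the ambiguity ball), an even split can have a better worst case than any concentrated bet:

```python
import numpy as np

# Hypothetical candidate models: each row gives one model's expected
# returns for N = 3 options. Purely illustrative numbers.
candidate_means = np.array([
    [0.05, 0.02, 0.03],
    [0.01, 0.06, 0.02],
    [0.02, 0.03, 0.05],
])

def worst_case_return(weights, means):
    """Minimum expected portfolio return over all candidate models."""
    return (means @ weights).min()

even = np.full(3, 1 / 3)
concentrated = np.array([1.0, 0.0, 0.0])

print(worst_case_return(even, candidate_means))          # 0.03
print(worst_case_return(concentrated, candidate_means))  # 0.01
```

Here each option looks best under some model and bad under another, so any concentrated allocation has a poor worst case, while the even split hedges across models.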

(Actually, rather than maximizing the expected return, they minimized risk measures, one example being expected loss + standard deviation, E[loss] + σ(loss). I think this means some risk-aversion.)
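To make that risk measure concrete, here is a hypothetical sketch (my own illustrative numbers, with the standard-deviation coefficient set to 1): evaluated on simulated losses, diversifying across uncorrelated options lowers the risk by shrinking the standard-deviation term.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_std_risk(losses, c=1.0):
    # Risk measure of the form E[loss] + c * std(loss); lower is better.
    return losses.mean() + c * losses.std()

# Simulated uncorrelated losses for two options (illustrative numbers only).
losses = rng.normal(loc=1.0, scale=0.5, size=(100_000, 2))

single = losses[:, 0]       # all money in one option
even = losses.mean(axis=1)  # half in each option

print(mean_std_risk(single))  # ~ 1.0 + 0.5 = 1.5
print(mean_std_risk(even))    # ~ 1.0 + 0.5/sqrt(2), about 1.35
```

The expected-loss term is the same either way; only the standard deviation rewards spreading out, which is one way to see why minimizing such a measure builds in risk-aversion.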

(EDIT: I’m not sure if this is a good explanation, and thanks to Flodorner for bringing this up.) One intuitive way to think about this might be to consider circles of radius ε centered around fixed points in the plane, representing your first guesses for your options. As ε becomes very large, the intersection of the interiors of these circles approaches 100% of their interiors, and the distance between the centres becomes small relative to their radii. Basically, you can’t tell the options apart anymore for huge ε, but also in these neighbourhoods, you’ll find options which are less correlated than their centres, if not already uncorrelated. Allocating evenly between identically distributed and uncorrelated options minimizes the variance. The real explanation is the proof, though, which I don’t find that intuitive, since it involves taking duals.
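The variance claim at the end can be made precise. For N identically distributed, uncorrelated options with common variance σ² and weights summing to 1, a quick sketch:

```latex
\operatorname{Var}\!\Big(\sum_{i=1}^{N} w_i X_i\Big)
  = \sigma^2 \sum_{i=1}^{N} w_i^2,
\qquad \text{subject to } \sum_{i=1}^{N} w_i = 1,
```

and by Cauchy–Schwarz, \(\sum_i w_i^2 \ge 1/N\) with equality exactly at \(w_i = 1/N\), so the even allocation uniquely attains the minimum variance \(\sigma^2/N\).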

It would be interesting to see what would happen for degrees of model uncertainty that are all high but differ across options, since similar degrees of model uncertainty is a pretty strong assumption. My first guess would be an allocation that’s inversely proportional to the model uncertainty, similar to risk parity, but I suspect that won’t follow, since there are multiple ways to measure the distance between probability distributions.

### Implications for doing good

I expect the same to be true for charitable donations, assuming

1. constant or decreasing marginal returns, and

2. donating to one charity doesn’t affect the returns of the other charities (their returns need not be statistically independent, though).

If you’re very undecided between N charities or causes (and possibly keeping the money) because of high model uncertainty for each of them, you should consider splitting your donations approximately evenly between them (and possibly keeping an even portion for yourself). Specifically, you must be similarly deeply uncertain about the value of each of these N charities/​causes, not just some of them. If there’s one charity that looks robustly good and for which there’s relatively little ambiguity, then you might just pick that one.

There’s an important conclusion hidden in the parentheses of the last paragraph: under the given assumptions, keeping the money would only be a small part of your optimal charity portfolio. This is an argument against mostly doing nothing.

GiveWell’s life-saving charities are all estimated to save a life for \$3,000–\$5,000 in expectation (although their distributions might differ), at least according to this page, so an even allocation across them up until their room for funding is filled might make sense, or perhaps an allocation proportional to room for funding.
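One way to operationalize “even up to room for funding” (a hypothetical helper, not anything GiveWell publishes): split evenly, cap each charity at its room for funding, and redistribute any excess among the charities that still have room.

```python
def capped_even_split(total, room):
    """Split `total` as evenly as possible across charities, never
    exceeding any charity's room for funding. Hypothetical helper."""
    grants = [0.0] * len(room)
    open_idx = [i for i, r in enumerate(room) if r > 0]
    remaining = total
    while remaining > 1e-9 and open_idx:
        share = remaining / len(open_idx)  # even share among those with room
        still_open = []
        for i in open_idx:
            give = min(share, room[i] - grants[i])
            grants[i] += give
            remaining -= give
            if room[i] - grants[i] > 1e-9:
                still_open.append(i)
        open_idx = still_open  # redistribute leftover to uncapped charities
    return grants

# $9,000 across charities with $2,000 / $10,000 / $10,000 of room:
print(capped_even_split(9000, [2000, 10000, 10000]))
# -> [2000.0, 3500.0, 3500.0]
```

The first charity is filled, and its unused share of the even split flows to the other two, keeping the allocation as even as the caps allow.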

Donor matches might also be situations with relatively high ambiguity, if you have little information about the counterfactual use of those matching funds if unmatched.

If you have so much deep uncertainty that none of them, alone or together, looks unambiguously better than doing nothing, I think you should probably just do or fund research instead, or save and invest your money until you find something that looks better than nothing. This could involve patient philanthropy, but need not.

• So for the maximin we are minimizing over all joint distributions that are ε-close to our initial guess?

“One intuitive way to think about this might be considering circles of radius ε centered around fixed points, representing your first guesses for your options, in the plane. As ε becomes very large, the intersection of the interiors of these circles will approach 100% of their interiors. The distance between the centres becomes small relative to their radii. Basically, you can’t tell the options apart anymore for huge ε. (I might edit this post with a picture...)”

If I can’t tell the options apart any more, how is the 1/​n strategy better than just investing everything into a random option? Is it just about variance reduction? Or is the distance metric designed such that shifting the distributions into “bad territories” for more than one of the options requires more movement?

• “So for the maximin we are minimizing over all joint distributions that are ε-close to our initial guess?”

Yes. That’s more accurate than what I said (originally), since you use a single joint distribution for all of the options, basically a distribution over ℝ^N for N options, and you look at distributions ε-close to that joint distribution.

“If I can’t tell the options apart any more, how is the 1/​n strategy better than just investing everything into a random option? Is it just about variance reduction? Or is the distance metric designed such that shifting the distributions into ‘bad territories’ for more than one of the options requires more movement?”

Hmm, good point. I was just thinking about this, too. It’s worth noting that in Proposition 3, they aren’t just saying that the 1/​N allocation is optimal, but actually that in the limit as ε → ∞, it’s the only allocation that’s optimal.

I think it might be variance reduction, and it might require risk-aversion, since they require the risk functionals/​measures to be convex (I assume strictly), and one of the two examples of risk measures they use explicitly penalizes the variance of the allocation (and I think it’s the case for the other, too). When you increase ε, the radius of the neighbourhood around the joint distribution, you can end up with options which are less correlated or even inversely correlated with one another, and diversification is more useful in those cases. They also allow negative allocations, so because the optimal allocation is positive for each option, I expect that it’s primarily because of variance reduction from diversification across (roughly) uncorrelated options. I made some edits.

For donations, maybe decreasing marginal returns could replace risk-aversion for those who aren’t actually risk-averse with respect to states of the world, but I don’t think it will follow from their result, which assumes constant marginal returns.