Even Allocation Strategy under High Model Ambiguity
Summary: This is a summary of The 1/N investment strategy is optimal under high model ambiguity by Georg Ch. Pflug, Alois Pichler, and David Wozabal (pdfs here and here), with some additional implications for EAs. Basically, when model uncertainty/ambiguity is similarly high for all options in a set under consideration, whether those are investments or donations, allocating evenly between the options is approximately robustly optimal, in the sense of maximizing the minimum expected value.
Furthermore, “doing nothing”/”not investing” is just one option among many, and if it’s equally ambiguous, it will only make up 1/N of the optimal portfolio. This is an argument against paralysis, i.e. doing nothing, when faced with complex cluelessness.
Take a set of stocks (or other possible investments or classes of investments), each with a reference probability distribution for possible returns/losses (or, allowing for dependence, random variables on the same measure space): your first guesses. Suppose you’re not actually confident in these first guesses, so you also entertain all distributions within a distance ε of their joint distribution, measured by the Kantorovich/Wasserstein metric; ε is a measure of ambiguity. Their result is basically that as ε approaches infinity, allocating your money uniformly across investments, i.e. 1/N of your money to each of N stocks, becomes optimal for maximizing the minimum expected return (maxmin expected utility), a rule for decision-making under deep uncertainty for those who are ambiguity-averse.
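Here is a stylized sketch of that mechanism (my own illustration, not the paper’s actual construction): for a Wasserstein-1 ball with ℓ1 ground cost, a standard duality fact says the worst-case expected return of a portfolio with weights w is the nominal expected return minus ε times the largest weight. On the simplex, that penalty is smallest for the uniform portfolio, so for large ε the 1/N allocation beats any concentrated bet. All numbers below are made up.

```python
import numpy as np

def worst_case_return(w, mu, eps):
    """Worst-case expected return over a Wasserstein-1 ball of radius eps
    (l1 ground cost): nominal expected return minus eps times the largest
    portfolio weight (the Lipschitz constant of r -> w @ r)."""
    return w @ mu - eps * np.max(w)

mu = np.array([0.08, 0.05, 0.03])      # hypothetical nominal expected returns
uniform = np.ones(3) / 3               # the 1/N portfolio
concentrated = np.array([1.0, 0.0, 0.0])  # all-in on the best nominal pick

for eps in [0.0, 0.01, 0.5]:
    u = worst_case_return(uniform, mu, eps)
    c = worst_case_return(concentrated, mu, eps)
    print(f"eps={eps}: uniform={u:.4f}, concentrated={c:.4f}")
```

With no ambiguity (ε = 0) the concentrated portfolio wins on nominal returns, but as ε grows the uniform portfolio’s smaller worst-case penalty dominates.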
(Actually, rather than maximizing the minimum expected return, they minimized risk measures, one example being expected loss plus standard deviation, E[L] + σ(L) for loss L. I think this means some risk-aversion.)
(EDIT: I’m not sure if this is a good explanation, and thanks to Flodorner for bringing this up.) One intuitive way to think about this might be to consider circles of radius ε centered around fixed points in the plane, representing your first guesses for your options. As ε becomes very large, the intersection of the interiors of these circles approaches 100% of their interiors, and the distance between the centres becomes small relative to their radii. Basically, you can’t tell the options apart anymore for huge ε, but also, in these neighbourhoods you’ll find options which are less correlated than their centres, if not already uncorrelated. Allocating evenly between identically distributed and uncorrelated options minimizes the variance. The real explanation is the proof, though, which I don’t find that intuitive, since it involves taking duals.
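The variance claim in that last step can be checked directly: for N uncorrelated options with common variance σ², a portfolio with weights w has variance σ²·Σᵢwᵢ², which is minimized on the simplex at wᵢ = 1/N, giving σ²/N. A minimal sketch with made-up numbers:

```python
import numpy as np

sigma2 = 1.0
N = 4
cov = sigma2 * np.eye(N)  # uncorrelated, identically distributed options

def portfolio_variance(w, cov):
    return w @ cov @ w

uniform = np.ones(N) / N

# The uniform portfolio's variance is no larger than that of random
# alternative weight vectors drawn from the simplex.
rng = np.random.default_rng(0)
for _ in range(5):
    w = rng.dirichlet(np.ones(N))
    assert portfolio_variance(uniform, cov) <= portfolio_variance(w, cov) + 1e-12

print(portfolio_variance(uniform, cov))  # sigma2 / N = 0.25
```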
It would be interesting to see what would happen for degrees of model uncertainty that are all high but differ across options, since assuming similar degrees of model uncertainty is pretty strong. My first guess would be an allocation that’s inversely proportional to the model uncertainty, similar to risk parity, but I suspect that won’t follow, since there are multiple ways to measure the distance between probability distributions.
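To make that guessed rule concrete (again, this is only the post’s conjecture, not a result from the paper), weights proportional to 1/εᵢ for per-option ambiguity radii εᵢ would look like:

```python
import numpy as np

eps = np.array([2.0, 4.0, 8.0])  # hypothetical per-option ambiguity radii
w = (1 / eps) / (1 / eps).sum()  # conjectured rule: weights proportional to 1/eps_i
print(w)  # approximately [0.571, 0.286, 0.143]
```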
Implications for doing good
I expect the same to be true for charitable donations, assuming:
1. constant or decreasing marginal returns, and
2. donating to one charity doesn’t affect the returns of the other charities (their returns need not be statistically independent, though).
If you’re very undecided between N charities or causes (and possibly keeping the money) because of high model uncertainty for each of them, you should consider splitting your donations approximately evenly between them (and possibly keeping an even portion for yourself). Specifically, you must be similarly deeply uncertain about the value of each of these N charities/causes, not just some of them. If there’s one charity that looks robustly good and for which there’s relatively little ambiguity, then you might just pick that one.
There’s an important conclusion hidden in the parentheses of the last paragraph: under the given assumptions, keeping the money would only be a small part of your optimal charity portfolio. This is an argument against mostly doing nothing.
GiveWell’s life-saving charities are all estimated to save a life for $3,000–5,000 in expectation (although their distributions might differ), at least according to this page, so an even allocation across them, up until their room for funding is filled, might make sense, or perhaps an allocation proportional to room for funding.
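One way to operationalize “even up until room for funding is filled” is water-filling: split the budget evenly, cap each charity at its remaining room for funding, and redistribute any excess among the uncapped ones. A sketch with hypothetical numbers (not real GiveWell figures):

```python
def even_allocation(budget, room):
    """Allocate `budget` as evenly as possible across charities,
    capping each at its room for funding (water-filling)."""
    alloc = {name: 0.0 for name in room}
    remaining = dict(room)  # room for funding still unfilled
    while budget > 1e-9 and remaining:
        share = budget / len(remaining)
        for name in list(remaining):
            give = min(share, remaining[name])
            alloc[name] += give
            remaining[name] -= give
            budget -= give
            if remaining[name] <= 1e-9:
                del remaining[name]  # this charity's room is filled
    return alloc

# Hypothetical rooms for funding; charity A caps out at 20,
# so its leftover share is split evenly between B and C.
print(even_allocation(90.0, {"A": 20.0, "B": 50.0, "C": 100.0}))
# {'A': 20.0, 'B': 35.0, 'C': 35.0}
```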
Donor matches might also be situations with relatively high ambiguity, if you have little information about the counterfactual use of those matching funds if unmatched.
If you have so much deep uncertainty that none of them, alone or together, looks unambiguously better than doing nothing, I think you should probably just do or fund research instead, or save and invest your money until you find something that looks better than nothing. This could involve patient philanthropy, but need not.