Hedging against deep and moral uncertainty
Summary: As with quantified risk, we can sometimes hedge against deep uncertainty and moral uncertainty: we can sometimes choose a portfolio of interventions which looks good in expectation to all (or more of the) worldviews—combinations of empirical and ethical beliefs—we find plausible, even if each component intervention is plausibly harmful or not particularly good in expectation according to some plausible worldview. We can sometimes do better than nothing in expectation when no single intervention could, and we can often improve the minimum expected value. I think hedging in this way can therefore sometimes reduce complex cluelessness.
My recommendations are the following:
1. We should, when possible, avoid portfolios (and interventions) which are robustly dominated in expectation by another—worse in expectation under some plausible worldview and no better under any plausible worldview (or those ruled out by the maximality rule; EA Forum post on the paper). I think this is rationally required under consequentialism, assuming standard rationality axioms under uncertainty.
2. I further endorse choosing portfolios among those that are robustly positive in expectation—better in expectation than doing nothing under all worldviews we find plausible—if any are available. However, this is more a personal preference than a (conditional) requirement like recommendation 1, although I think it's often implicitly assumed in EA. I think it would lead us to allocate more to work for nonhuman animals and s-risks.
3. EAs should account for interactions between causes and for conflicts in judgements about the sign of the expected value of different interventions according to different worldviews. I think this is somewhat neglected, since many EA organizations (or divisions within them) are cause-specific.
4. Approaches for moral uncertainty and deep uncertainty are better applied to portfolios of interventions than to each intervention in isolation, since portfolios can promote win-wins.
5. We should not commit to priors arbitrarily. If you don’t feel justified in choosing one prior over all others (see the reference class problem), this is what sensitivity analysis and other approaches to decision making under deep uncertainty are for, and sometimes hedging can help, as I hope to illustrate in this post.
EDIT: It’s not always necessary to hedge against the negative side effects of the interventions we choose at the same time as we work on or fund them. We can sometimes address those side effects later, or assume we (or others) will.
Introduction
EAs have often argued against diversification and for funding only the most cost-effective intervention, at least for individual donors, for whom the marginal returns on donations are roughly constant. However, this assumes away a lot of the uncertainty we could have; we might not believe any specific intervention is the most cost-effective. Due to deep uncertainty, we might not be willing to commit to a single joint probability distribution for the effects of our interventions, since we can’t justify any choice over all others. Due to moral uncertainty, we might not be confident in how to ethically value different outcomes or actions. This can result in complex cluelessness, under which we just don’t know whether we should believe a given intervention is better or worse than another in expectation; it could go either way.
Sometimes, using a portfolio of interventions can be robustly better in expectation than doing nothing, while none of the best individual interventions according to some worldview are, since they’re each plausibly harmful in expectation (whether or not we’re committed to the claim that they definitely are harmful in expectation, since we may have deep or moral uncertainty about that). For example, cost-effective work in one cause might plausibly harm another cause more in expectation, and we don’t know how to trade off between the two causes.
We might expect to find such robustly positive portfolios in practice even where the individual interventions are not robust, because the interventions most cost-effective in one domain, effect or worldview will not systematically be the most harmful in others, and their harms will often be small enough to compensate for cost-effectively with interventions optimized for those other domains, effects or worldviews. The aim of this post is to give a more formal and EA-relevant illustration of the following reason for hedging:
We can sometimes choose portfolios which look good to all (or more) worldviews we find plausible, even if each component intervention is plausibly harmful or not particularly good in expectation according to some plausible worldview.
Diversification is of course not new in the EA community; it’s an approach taken by Open Phil, and this post builds upon their “Strong uncertainty” factor. However, most organizations tend not to consider the effects of interventions on non-target causes/worldviews, which is where hedging becomes useful.
An illustrative example
I will assume, for simplicity, constant marginal cost-effectiveness across each domain/effect/worldview, and that the effects of the different interventions are independent of one another. Decreasing marginal cost-effectiveness is also a separate reason for diversification, so by assuming a constant rate (which I expect is also approximately true for small donors), we can consider the uncertainty argument independently. (Thanks to Michael_Wiebe for pointing this out.) EDIT: I’m also assuming away moral uncertainty (e.g. about relative moral weight of nonhuman animals vs humans, conditional on empirical facts) here; see this article for some discussion.
Suppose you have deep or moral uncertainty about the effects of a given global health and poverty intervention on nonhuman animals, farmed or wild—enough uncertainty that your expected value for the intervention ranges across positive and negative values. The negative values come from effects on nonhuman animals, due to moral and empirical uncertainty about how to weigh the experiences of nonhuman animals and about welfare in the wild, the meat eater problem (the intervention may increase animal product consumption), and deep uncertainty about the effects on wild animals.
You could represent the expected cost-effectiveness across these components as a vector of ranges, one range per component. Assuming independent effects on each component, this gives a box in 3 dimensions, which I'll call H.
Here, the first component is the range of expected cost-effectiveness for the humans (living in poverty), the second for farmed animals, and the third for wild animals. These aren’t necessarily in comparable utility units across the three components. The point is that the last two components are plausibly negative in expectation, while the first is only positive in expectation, and it’s plausible that the intervention does more harm than good in expectation or more good than harm in expectation. (Depending on your kind of uncertainty, you might be able to just add the components into a single range instead, but I will continue to illustrate with separate components, since that’s more general and can capture deeper uncertainty and worse moral uncertainty.)
You might also have an intervention targeting farmed animals, with deep or moral uncertainty about its effects on wild animals. Suppose you represent its expected cost-effectiveness as another box, F, with the effects on humans first, then on farmed animals and then on wild animals, as before.
You represent the expected cost-effectiveness of a wild animal intervention as a third box, W, ordered the same way: the effects on humans first, then on farmed animals and then on wild animals.
And finally, you have a default “do nothing” or “business as usual” option, N, e.g. spending selfishly.
I model N as all 0s, since I’m considering differences in value relative to doing nothing, not the expected value in the universe.
Now, based on this example, we aren’t confident that any of these interventions is better in expectation than N, doing nothing, and in general, none of them definitely beats any other in expectation, so on this basis, we might say all of them are permissible according to the maximality rule. However, there are portfolios of these interventions that are better than doing nothing. Assuming a budget of 10 units, one such portfolio (better than N) is H + 4F + 5W.
That is, you spend 4 times as much on F as on H, and 5 times as much on W as on H. We can divide by 10 to normalize. Notice that each component of the portfolio is strictly positive, so this portfolio is good in expectation—better than N—for humans, farmed animals and wild animals simultaneously.
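To make the structure concrete, here is a minimal sketch in Python with hypothetical interval values—the numbers are my own illustrative assumptions, not the post's original figures—chosen so that each intervention alone is plausibly harmful on some component, while the portfolio H + 4F + 5W is robustly positive:

```python
# Each intervention is a box of expected cost-effectiveness ranges per unit
# spent, one (low, high) interval per component: (humans, farmed, wild).
# All numbers are hypothetical, chosen only to illustrate the structure.
H = [(100, 1000), (-50, 0), (-50, 50)]  # global health: plausibly bad for animals
F = [(-10, 0), (50, 300), (-10, 10)]    # farmed animal work: plausibly bad for wild animals
W = [(0, 0), (-10, 0), (20, 100)]       # wild animal work: plausibly bad for farmed animals
N = [(0, 0), (0, 0), (0, 0)]            # do nothing (the benchmark)

def portfolio(weights, interventions):
    """Component-wise interval sum of weighted interventions.

    With nonnegative weights and independent components, the lower (upper)
    bound of the sum is the weighted sum of lower (upper) bounds."""
    return [
        (sum(w * iv[i][0] for w, iv in zip(weights, interventions)),
         sum(w * iv[i][1] for w, iv in zip(weights, interventions)))
        for i in range(3)
    ]

def robustly_positive(box):
    """Better than doing nothing under every worldview: all lower bounds > 0."""
    return all(lo > 0 for lo, hi in box)

# No single intervention is robustly positive on every component...
assert not any(robustly_positive(iv) for iv in (H, F, W, N))

# ...but the portfolio H + 4F + 5W (10 units of budget) is.
P = portfolio([1, 4, 5], (H, F, W))
print(P, robustly_positive(P))  # [(60, 1000), (100, 1200), (10, 590)] True
```

The weights 1, 4 and 5 are one feasible choice here; any weights keeping all three lower bounds positive would do.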
According to recommendation 1, N, doing nothing, is now ruled out, impermissible. This does not depend on the fact that I took differences with doing nothing, since we can shift all the options the same way.
According to recommendation 2, which I only weakly endorse, each individual intervention is now ruled out, since each was plausibly negative (compared to doing nothing) in expectation, and we must choose among the portfolios that are robustly positive in expectation. Similarly, portfolios with only two interventions are also ruled out, since at least one of their components will have negative values in its range.
I constructed this example by thinking about how I could offset each intervention’s harms with another’s. The objection that offsetting is suboptimal doesn’t apply, since, by construction, I can’t decide which of the interventions is best in expectation, although I know it’s not doing nothing.
Note also that the cost-effectiveness values do not depend on when the effects occur. Similarly, we can hedge over time: the plausible negative effects of one intervention can be made up for with positive effects from another that occur far earlier or later in time.
Dependence
We assumed the interventions’ components were independent of one another and of the other interventions’ components. With dependence, all the portfolios that were robustly at least as good as doing nothing will still be robustly as good as doing nothing, since the lower bounds under the independent case are lower bounds for the dependent case, but we could have more such portfolios. On the other hand, different portfolios could become dominated by others when modelling dependence that weren’t under the assumption of independence.
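A toy illustration of why the lower bounds survive dependence—the ranges and the single-parameter dependence structure below are hypothetical, chosen only to make the point:

```python
import random

# Under independence, a portfolio component's expected value ranges over the
# full interval built from the interventions' own ranges. Modelling dependence
# can only shrink the set of jointly attainable expected values to a subset of
# that box, so the independent lower bound still holds.
# Hypothetical ranges for one component (e.g. wild animals) of three interventions:
ranges = [(-50, 50), (-10, 10), (20, 100)]
weights = [1, 4, 5]

independent_lo = sum(w * lo for w, (lo, hi) in zip(weights, ranges))

# A toy dependence structure: a single parameter t drives all three
# expectations at once (perfectly correlated worldviews).
random.seed(0)
dependent_values = []
for _ in range(10_000):
    t = random.random()
    vals = [lo + t * (hi - lo) for lo, hi in ranges]
    dependent_values.append(sum(w * v for w, v in zip(weights, vals)))

# Every dependent value stays at or above the independent lower bound.
assert min(dependent_values) >= independent_lo
```

Under this dependence structure the attainable values are a line segment inside the box, so the worst case can only improve relative to the independent bound.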
Lexicality and deontological constraints
Under some deontological ethical theories, rule violations (that you commit) can’t be compensated for, no matter how small. You could represent rule violations as −∞, or multiples of it (without multiplying through), or use vectors for individual components to capture lexicality. Portfolios that include interventions that violate some rule will generally also violate that rule. However, we should be careful not to force cardinalization on theories that are only meant to be ordinal and do not order risky lotteries according to standard rationality axioms; see some quotes from MacAskill’s thesis here on this, and this section from MichaelA’s post.
Other potential examples
Pairing human life-saving interventions with family planning interventions can potentially minimize externalities due to human population sizes, which we may have deep uncertainty about (although this requires taking a close look at the population effects of each, and it may not work out). These interventions could even target different regions based on particular characteristics, e.g. average quality of life, meat consumption. Counterfactually reducing populations where average welfare is worse (or meat consumption is higher) and increasing it the same amount where it’s better (or meat consumption is lower) increases average and total human welfare (or total farmed animal welfare, assuming net negative lives) without affecting human population size. Of course, this is a careful balancing act, especially under deep uncertainty. Furthermore, there may remain other important externalities.
We might find it plausible that incremental animal welfare reform contributes to complacency and moral licensing and have deep uncertainty about whether this is actually the case in expectation, but we might find more direct advocacy interventions that can compensate for this potential harm so that their combination is robustly positive.
Extinction risk interacts with animal welfare in many ways: extinction would end factory farming, could wipe out all wild animals if complete, could prevent us from addressing wild animal suffering if only humans go extinct, and if we don’t go extinct, we could spread animal suffering to other planets. There are other interactions and correlations with s-risks, too, since things that risk extinction could also lead to far worse outcomes (e.g. AI risk, conflict), or could prevent s-risks.
Animal advocacy seems good for reducing s-risks due to moral circle expansion, but there are also plausible effects going in the opposite direction, like correlations with environmentalism or “wrong” population ethics or near-misses.
In the wild animal welfare space, I’ve been told about pairing interventions that reduce painful causes of death with population control methods to get around uncertainty about the net welfare in the wild. In principle, with a portfolio approach, it may not be necessary to pair these interventions on the same population to ensure a positive outcome in expectation, although applying them to the same population may prevent ecological risks and reduce uncertainty further.
Substitution effects between animal products. We might have moral uncertainty about the sign of the expected value of an intervention raising the price of fish, in case it leads consumers to eat more chicken, and similarly for an intervention raising the price of chicken, in case it leads consumers to eat more fish. Combining both interventions can reduce both chicken and fish consumption. As before, these interventions do not even have to target the same region, as long as the increase in fish consumption in the one region is smaller than the decrease in the other (assuming similar welfare, amount of product per animal, etc., or taking these into account), and the same for chicken consumption.
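As a sketch with made-up substitution numbers (these are not estimates of real elasticities, just an illustration of the structure):

```python
# Hypothetical net effects of each price intervention, in animals consumed per
# unit of spending: (delta_chicken, delta_fish). Each intervention reduces one
# product but may increase the other via substitution.
raise_chicken_price = (-100, +40)   # region A: less chicken, some switch to fish
raise_fish_price    = (+30, -90)    # region B: less fish, some switch to chicken

# Funding both at once: the reductions outweigh the substitution increases.
combined = tuple(a + b for a, b in zip(raise_chicken_price, raise_fish_price))
print(combined)  # (-70, -50): both chicken and fish consumption fall
assert all(delta < 0 for delta in combined)
```

This works whenever each intervention's substitution increase is smaller than the other intervention's direct decrease (after adjusting for welfare, product per animal, and so on).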
Questions and possible implications
I think recommendation 2 would push us partially away from global health and poverty work and extinction risk work and towards work for nonhuman animals and s-risks, due to the interactions I discuss above.
Should we choose portfolios as individuals or as a community? If as a community, and we endorse recommendation 2 for the community, i.e. we should do robustly better in expectation than doing nothing, individuals may be required to focus on plausible domains/worldviews/effects according to which the community is plausibly doing more harm than good in expectation, if any exist. This could mean many more EAs should focus on work for nonhuman animals and s-risks, since global health and poverty work and extinction risk work, some of the largest parts of the EA portfolio, are plausibly net negative due to interactions with these.
I personally doubt that we have fundamental reasons to decide as a community (coordination and cooperation are instrumental reasons). Either our (moral) reasons are agent-relative or agent-neutral/universal; they are not relative to some specific and fairly arbitrarily defined group like the EA community.
Should we model the difference compared to doing nothing and use doing nothing as a benchmark, as I endorse in recommendation 2, or just model the overall outcomes under each intervention (or, more tractably, all pairwise differences, allowing us to ignore what’s unaffected)? What I endorse seems similar to risk aversion with respect to the difference you make, by centering the agent, which Snowden claims is incompatible with impartiality. In this case, rather than risk aversion, it’s closer to uncertainty/ambiguity aversion. It also seems non-consequentialist, since it treats one option differently from the rest, and consequentialism usually assumes no fundamental difference between acts and omissions (and the concept of omission itself may be shaky).
What other plausible EA-relevant examples are there where hedging can help by compensating for plausible expected harms?
Can we justify stronger rules if we assume more structure to our uncertainty, short of specifying full distributions? What if I think one worldview is more likely than another, but I can’t commit to actual probabilities? What if I’m willing to say something about the difference or ratio of probabilities?
Thanks Michael for the post. I happened to be thinking in similar terms recently regarding how to divide donations between saving human lives and increasing welfare of farmed animals (though nothing like as thoroughly and generally). I thought perhaps this could be an interesting real-world example to analyse:
This review estimated that saving a life in a very poor country would result in a reduction in births of 0.33-0.5, hence giving 0.5-0.67 extra lives. Though, the uncertainty in the various studies included indicates to me it could plausibly give 1 extra life.
1 extra human life perhaps gives ~60 years of extra life (not counting extra descendants, so maybe it’s an underestimate).
Then I remember reading estimates that for every typical westerner there are around 5-10 farmed animals alive at a given time to produce the animals products they eat (though I can’t remember the source, and I’m not sure if this includes fish). An extra person in a developing world country isn’t going to consume as much as a typical westerner straight away of course, but I suppose the long-term consumption could be this large if they or their descendants reached present western levels of wealth and factory farming was still prevalent then, so it could be a reasonable pessimistic estimate of the effect of saving a human life on the increase in the number of farmed animals.
This 2018 Founders Pledge report gives a mean estimate of the effect of The Humane League’s corporate campaigns alone as “10 hen-years shift from battery cages to aviaries [equivalent] per dollar received” [p.68].
Perhaps shifting a hen from a battery to aviary system could be taken to be a tenth as good as removing an animal from the system (just based on a not-very-informed intuition).
So an estimate of the amount to donate to THL to be likely to offset any negative impact on animal welfare from saving a human life (using pessimistic figures to give the rough upper end of the uncertainty range) is [no. of extra human lives] × [no. of extra human years lived per life] × [no. of farmed animals per person] / [no. of animal years saved per $] = 1 × 60 × 10 / (10 × 0.1) = $600.
This is ~20% of the cost of saving one life through Malaria Consortium from GiveWell’s 2020 cost effectiveness analysis. So perhaps this indicates that if you wanted to donate to save lives from malaria but were worried about potential negative impacts on farm animal welfare, splitting donations between MC and THL in a 5:1 ratio would be an option robustly better than doing nothing. (But the THL fraction may need to be higher if impacts on fish, long-term impacts of increasing the human population or other things I’ve not thought of need to be included).
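The arithmetic above can be reproduced as follows (all inputs are the pessimistic assumptions stated in this comment; the ~$3,000 cost per life saved is inferred from the “~20%” figure rather than taken directly from GiveWell's spreadsheet):

```python
# Back-of-the-envelope offset estimate, using the commenter's pessimistic inputs.
extra_lives_per_life_saved = 1        # pessimistic upper end (no fertility offset)
years_per_extra_life = 60
farmed_animals_per_person = 10        # alive at any time, at western consumption
hen_years_shifted_per_dollar = 10     # THL corporate campaigns (Founders Pledge)
shift_vs_removal_factor = 0.1         # cage-to-aviary shift ~ 1/10 of removal

offset_donation = (extra_lives_per_life_saved * years_per_extra_life
                   * farmed_animals_per_person
                   / (hen_years_shifted_per_dollar * shift_vs_removal_factor))
print(offset_donation)  # 600.0 dollars to THL per life saved

cost_per_life_saved = 3000            # rough figure implied by the ~20% claim
print(cost_per_life_saved / offset_donation)  # 5.0, i.e. a 5:1 MC:THL split
```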
Does this sound reasonable?
(Edited to correct “4:1 ratio” to “5:1 ratio”)
I think the overall approach you’ve taken is good, and it’s cool to see you’ve worked through this. This is also the kind of example I had in mind, although I didn’t bother to work with estimates.
I do think it would be better to use some projections for animal product consumption and fertility rates in the regions MC works in (I expect consumption per capita to increase and fertility to decrease) to include effects of descendants and changing consumption habits, since these plausibly could end up dominating the effects of MC, or at least on animals (and you also have to decide on your population ethics: does the happiness of the additional descendants contribute to the good compared to if they were never born?). Then, there are also timelines for alternatives proteins (e.g. here), but these are much more speculative to me.
I also personally worry that cage-free campaigns could be net negative in expectation (at least in the short-term, without further improvements), mostly since on-farm mortality rates are higher in cage-free systems. See some context and further discussion here. I believe that corporate campaigns work, though, so I think we could come up with a target for a corporate campaign that we’d expect to be robustly positive for animals. I think work for more humane slaughter is robustly positive. Family planning interventions might be the most promising, see this new charity incubated by Charity Entrepreneurship and their supporting report, including their estimated cost-effectiveness of:
“$144 per unintended birth averted”, and
“377 welfare points gained per dollar spent” for farmed animals. (I don’t know off-hand if they’re including descendants or projected changes in consumption in this figure.)
However, this new charity doesn’t have any track record yet, so it’s in some ways more speculative than GiveWell charities or THL. CE does use success probabilities in their models, but this is a parameter that you might want to do a sensitivity analysis to. (Disclosure: I’m an animal welfare research intern for Charity Entrepreneurship.)
Finally, Founders Pledge did a direct comparison between THL and AMF, including sensitivity analysis to moral weights, that might be useful.
Thanks for your thoughts and the links. I agree that more consideration of long-term effects and population ethics seems important (also, I would have thought, for the impact of accelerating animal welfare improvements). I don’t know anything to go on for quantitative estimates of long-term effects myself, though.
Regarding the possibility of cage-free campaigns as being net negative, I agree this sounds like a risk, so perhaps I was loose in saying donating a certain amount to THL could be “robustly better”. I’m not sure it’s going to be possible to be 100% sure that any set of interventions won’t have a negative impact, though—I was basically going for being able to feel “quite confident” that the impact on farmed animals wouldn’t be negative (edit: given the assumptions I’ve made—all things considered I’m not as confident as that), and haven’t been able yet to be precise about what that means.
Thinking about it, in general, it seems to me that the ranges of possible effects of interventions could be unbounded, so then you’d have to accept some chance of having a negative impact in the corresponding cause areas. Perhaps this is something your general framework could be augmented to take into account e.g. could one set a maximum allowed probability of having a negative effect in one cause area, or would it be sufficient to have a positive expected effect in each area?
So, it’s worth distinguishing between
quantified uncertainty, or risk, when you can put a single probability on something, and
unquantified uncertainty, when you can’t decide among multiple probabilities.
If there’s a quantified risk of negative, but your expected value is positive under all of the worldviews you find plausible enough to consider anyway (e.g. for all cause areas), then you’re still okay under the framework I propose in this post. I am effectively suggesting that it’s sufficient to have a positive expected effect in each area (although there may be important considerations that go beyond cause areas).
However, you might have enough cluelessness that you can’t find any portfolio that’s positive in expected value under all plausible worldviews like this. That would suck, but I would normally accept continuing to look for robustly positive expected value portfolios as a good option (whether or not that search is itself robustly positive).
Great post, Michael! The more I have realised how uncertain the world is, the more I have come to appreciate this post.
I think y ≤ 100 should be y ≤ 50.
I think the sum of the 2nd component should be 200 + 50 + 50 = 300 (not 200 + 200 + 50 = 450).
Good point. Personally, I think our reasons are agent-neutral, i.e. that we should think about how to improve the portfolio of the universe, not our own portfolio or that of the EA community.
Thank you for the kind words and the corrections!
I think my framework/illustration doesn’t really handle moral uncertainty well, since it effectively assumes a particular normalization, but I think the general idea can still be useful in those cases. You should consider compensating moral worldviews that are harmed by an intervention in your portfolio, and/or allocating less to interventions that are harmful on other moral worldviews than to interventions that are neutral on them, all else equal, aiming for a portfolio that is robustly positive across these worldviews and not dominated.
I thought these ideas were interesting, but it would be useful to have a less technical and/or more intuitive explanation.
Does the “Other potential examples” section help? Maybe I should have that section before the technical example?
I think this would be easier to explain with a two-sector model: i.e., just H and F. Also, would it be easier to just work with algebra? I.e., H = [−a, b] × [−c, d].
How does this fit with H+4F+5W? That’s 10 units, no?
It’s worth emphasizing that this assumption rules out the diminishing returns case for diversifying; this is a feature, since we want to isolate the uncertainty-case for diversifying.
I think it would get part of it across slightly more easily, although I don’t think the burden is large. I think a 2-sector model might give the false impression that you should try to pair interventions so that each makes up for the negatives of the other, whereas with a good enough example for 3, people might more intuitively grasp that you have far more flexibility.
I’d have to write down a bunch of inequalities to get a portfolio that’s better than N and ensure that one even exists, which I think would be much harder to follow (and do, for me). I expect people would get that the general problem is a system of linear inequalities, although it’s not central to the point I’m making.
Ha. Thanks for pointing this out! I’ll fix this.
Ya, this is a good point. I’ll mention this in the text.
Re algebra, are you defending the numbers you gave as reasonable? Otherwise, if we’re just making up numbers, might as well do the general case.
I think the general case (with the independence and constant marginal cost-effectiveness assumptions) will be harder to follow for some readers (and not easier to follow for anyone), much more work for me (I’m not sure how I would approach it yet), and not general enough to be very useful. Even more generally, it’s a multi-objective linear program, which we would solve algorithmically, not symbolically for a closed form solution.