Diversify Your Moral Risk Portfolio

Crossposted from my blog.

Lots of people have two conflicting desires:

  1. Do the most good.

  2. Make sure you do some good.

These are in conflict. Sometimes the thing that does the most good in expectation has a low chance of doing any good. If you give a lot of money to shrimp, for example, there's maybe a 50% chance that shrimp aren't conscious, in which case you're wasting your money. Similarly, if you take Pascal's wager and give money to groups that effectively promote the religion you think is most likely to be true, there's a non-trivial chance that you're not doing any good at all! And the same thing is true of other speculative proposals for improving the world—trying to become a billionaire, giving to Longtermist organizations that safeguard the future, and so on.

As it happens, I don't think this attempt to make sure you do some good, rather than maximizing expected good, is rational. I think one should just try to do the most expected good without worrying about the probability of doing any good at all. A 1/10 chance of saving ten people's lives is just as good as a 100% chance of saving one person's life.
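In expected-value terms, the two options come out identical:

$$0.1 \times 10 \text{ lives} = 1 \text{ expected life} = 1 \times 1 \text{ life}.$$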

The problem: I suck!

More precisely, even though expected utility maximization seems right when I think about it, I can't really get myself to do it. It's hard to stay motivated to perform a task if I think there's a reasonable chance I'm wasting my life. While I've written a bunch about the rationality of Pascal's wager, I find it very hard to take it myself! Similarly, I find it hard to be motivated to give to Longtermist charities, even though they have very high expected value. I currently give only about a quarter of my donations to effective charities, though I suspect I'd give more if I were fully rational. Try as I might to be a robotic expected value maximizer, I just can't seem to get myself to do it.

So what should one do in a situation like this? Well, we can take our cue from people in finance. What do finance people do when there is a risky business that might fail but might become worth a lot? They diversify! They invest in a hundred companies like this, knowing that even if 90 of them fail, the rest will succeed enough to make the whole portfolio worth it.
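To see why this works, here's a minimal simulation with made-up numbers (the 10% success rate and 20x payout are illustrative assumptions, not real figures). Spreading a fixed stake across many independent long shots keeps the expected return the same, but makes walking away with nothing vanishingly unlikely:

```python
import random

def portfolio_outcome(n_bets, p_success=0.10, payout=20.0, stake=100.0):
    """Split a fixed stake evenly across n_bets independent long shots.

    Each bet succeeds with probability p_success and returns payout times
    its share of the stake; failed bets return nothing. All numbers here
    are made up purely for illustration.
    """
    share = stake / n_bets
    return sum(share * payout for _ in range(n_bets) if random.random() < p_success)

random.seed(0)
trials = 10_000

# All-in on a single long shot: ~90% chance of total loss.
all_in = [portfolio_outcome(1) for _ in range(trials)]
# Spread across 100 long shots: same expected return, but total loss almost never happens.
spread = [portfolio_outcome(100) for _ in range(trials)]

for name, outcomes in [("all-in", all_in), ("spread", spread)]:
    mean = sum(outcomes) / trials
    p_zero = sum(o == 0 for o in outcomes) / trials
    print(f"{name}: mean return = {mean:.0f}, P(total loss) = {p_zero:.4f}")
```

Both strategies have the same expected return, but going all-in leaves you empty-handed about 90% of the time, while spreading out leaves you empty-handed with probability around $0.9^{100}$, which is effectively never.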

You can do this with morality too. Suppose you're not super sure if giving to shrimp welfare is a good idea. You're also not sure if Longtermism is good. You're also not sure how valuable charities that free chickens from cages are compared to other causes. You're not certain if reducing wild animal suffering is effective. And maybe you're not sure whether to take Pascal's wager seriously and support organizations effectively spreading whichever religion you find most plausible.

If you want to be confident that you are doing some good: diversify. Give to all of them. Even if they're all somewhat speculative, it's likely that one of them will pay off massively. Just as you can diversify a financial portfolio by investing in lots of risky companies with potentially high payouts, you can do the same with charity. This doesn't mean you should give to every place you think might do some amount of good, but it does mean you should risk wasting your money for a small chance of bringing about a ton of value.

If you give a hundred dollars to the shrimp, you can save 1.5 million shrimp from an excruciating death. That's roughly three times the number of people in Wyoming! If you have any moral uncertainty about that, and take moral uncertainty seriously, surely that should be at least one of the things you do at some point over the course of your life.

Now, this isn’t the best way to maximize expected value. If you were an expected value maximizing robot, you would not pursue this strategy. You would say “bleep bloop, this brings about 93.5 fewer expected utils than the other strategy.” But I assume you are not an EV maximizing robot.

This also makes it easier to take seriously the conclusions of weird arguments. If a weird argument concludes that I should stop giving to charities providing bednets and instead pay to refill the Huel containers at some Longtermist org, I find it very hard to act on that conclusion. But if the conclusion of an argument is simply that I should be doing a little more to promote astronomical amounts of longterm value, well, that doesn't seem so bad! It's easier to motivate yourself to put some money toward a speculative gamble than to put all your charitable money toward one.

This is one reason I disagree with the common argument that a person should only give to one charity—whichever one they think is best. If I had to donate to only one charity, I'd probably give less effectively. I'd end up convincing myself that the best charity is whichever effective charity I feel best about, and give all my money there. For this reason, even though I think ideal agents would probably give to only one charity, accounting for human fallibility, it makes sense to diversify. My guess is that others are the same; if people could only give to one charity, very few would go all in on the shrimp.

For similar reasons, you should refrain from doing things that might be extremely wrong on some ethical view. For instance, I refrain from eating happy animals. This is partly for practical reasons: it's hard to know whether an animal really was happy, and convincing other people to eat only happy animals generally just results in them eating factory-farmed animals with nice labels slapped on the product.

But it’s also partly for reasons of moral uncertainty—while I am a utilitarian, it wouldn’t be completely shocking if deontology turned out to be right. If deontology is right and animals have rights, then eating meat is about as bad as being a serial killer. You shouldn’t risk doing something as bad as being a serial killer. Similarly, I would be wary about becoming an anti-Christian activist or abortion doctor, because there’s some chance that doing so is seriously and perhaps even infinitely bad—I don’t want to risk it!

(In a sane world, as Richard Y Chappell notes, people would similarly think about the serious moral risk of discouraging effective giving, on the grounds that effective giving prevents children from dying. Somehow, however, people seem to think that small disagreements with the ideology of certain EAs are sufficient cause for blanket denunciation, a decision which is likely to cause additional poor people to die.)

Suppose there are three possibilities that entail surprising moral conclusions, and suppose you give each of them 30% odds. You might be tempted to dismiss all three, because any individual one is likely false. But the odds are ~2/3 that at least one of them is right. So if you diversify, taking lots of high-risk but high-reward morally speculative actions, the odds are decent that some of your actions will do lots of good!
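The arithmetic, assuming for simplicity that the three possibilities are independent: the chance that all three are false is $0.7^3 = 0.343$, so the chance that at least one is true is

$$1 - (1 - 0.3)^3 = 1 - 0.343 \approx 0.66 \approx 2/3.$$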