Uncorrelated Bets: an easy-to-understand and very important decision-theory consideration, which helps tease out nonobvious but productive criticism of the EA movement

There’s a great video by Ray Dalio, founder of the hedge fund Bridgewater, where he explains the importance of uncorrelated bets to the performance of a successful long-term investing strategy.

For example, let’s say you have $100 to invest, and you can pick how much money to put into two stocks, Tesla and Google, which are uncorrelated. Let’s say that every year, Tesla has a 95% chance of 10x’ing and a 5% chance of losing all of your money; and that Google has a 50% chance of giving you back exactly what you invested, and a 50% chance of doubling your money. And you have zero uncertainty around these numbers: you’re 100% sure they’re correct.

If your investing strategy is to maximise (naive) expected value, you would simply put all of your money into Tesla! Tesla’s expected value is 9.5 times your investment, and Google’s is only 1.5 times your investment. This means you would not be investing across a portfolio of uncorrelated bets.

Why would picking the highest (naive) EV bet not be the best strategy? Seems like it should be!

This seems very counterintuitive at first, but there is a very simple reason why: your returns in the markets compound over time. If you put all of your money into Tesla, then your odds of having any money at all become vanishingly small as time goes on. Each year, you have a 95% chance of keeping the money, so you’ll only still have it after 100 years if you hit that 95% chance 100 times in a row. The odds of that are about 0.6%.
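As a quick sanity check (a minimal Python sketch, using only the numbers above):

```python
# Probability of still having any money after N years, if you go
# all-in on the hypothetical Tesla bet (95% survival per year).
p_survive_year = 0.95
for years in (10, 50, 100):
    print(years, round(p_survive_year ** years, 4))
# 10 -> 0.5987, 50 -> 0.0769, 100 -> 0.0059 (about 0.6%)
```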

This kind of behaviour in the quantity being maximised is geometric growth, not a sum. It’s a very simple concept: rather than adding the results of your bet over and over, you multiply them over and over. Naive expected value maximisation implicitly assumes you are adding your results over and over! And in reality, most bets get multiplied together, not added!
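Here’s the same point in two lines (a sketch, again using only the Tesla numbers above): the arithmetic mean of the yearly multiplier is what naive EV maximisation sees, while the geometric mean is what compounding actually pays you.

```python
# Arithmetic vs geometric mean of the all-in Tesla bet's yearly multiplier.
p_win, win_mult, lose_mult = 0.95, 10.0, 0.0

arithmetic = p_win * win_mult + (1 - p_win) * lose_mult   # 9.5: naive EV
geometric = win_mult ** p_win * lose_mult ** (1 - p_win)  # 0.0: one bust wipes you out

print(arithmetic, geometric)
```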

When people use a log utility function, it turns out that they’re accidentally solving the geometric growth maximisation problem: taking logs turns the product of yearly returns into a sum, so maximising expected log wealth is the same as maximising your compounded growth rate. But I think we need to understand the phenomenon more deeply than to simply say “oh, just use a log function”. To what degree is your ultimate utility being multiplied vs added together across these decisions? Is there a way to remove some of the variance, paying a small amount in expected value to do so?

Here’s a good Wikipedia article to develop better intuition around this weird phenomenon. I think it is quite fundamental and important to understand: https://en.wikipedia.org/wiki/Kelly_criterion
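To make that concrete, here’s a minimal sketch (my own illustration, not from the article) that numerically recovers the Kelly fraction for the Tesla bet by maximising the expected log growth rate:

```python
import numpy as np

# Stake a fraction f of your bankroll each year: a win turns f into 10f
# (a gain of 9f), a loss wipes the stake out. Sweep f and maximise the
# expected log growth rate: 0.95*log(1 + 9f) + 0.05*log(1 - f).
p = 0.95
fs = np.linspace(0.0, 0.99, 10_000)
growth = p * np.log1p(9 * fs) + (1 - p) * np.log1p(-fs)

best = fs[np.argmax(growth)]
print(f"optimal fraction: {best:.3f}")
# ~0.944, matching the closed-form Kelly fraction p - q/b = 0.95 - 0.05/9.
```

Even with these absurdly favourable, 100%-certain odds, Kelly says to hold back about 5.6% of your bankroll each year; staking everything guarantees eventual ruin.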

Correlation

A similar weird thing happens with correlation. The most important reason to hold a portfolio when investing is that when some assets do poorly in a given year, others may do well; the resulting reduction in the variance of your bets can be worth a small reduction in the portfolio’s overall expected value per year. Long-term performance comes from a reduction in variance combined with a high EV, not just a high EV!
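A toy simulation of that trade-off (my own made-up numbers, not anything from Dalio): two uncorrelated coin-flip assets with identical arithmetic EV, where the rebalanced 50/50 portfolio compounds while either asset alone typically shrinks.

```python
import numpy as np

# Each asset returns +60% or -40% with equal probability each year, so
# both have the same arithmetic EV (+10%/year) but high variance.
rng = np.random.default_rng(0)
years, trials = 50, 10_000

a = rng.choice([1.6, 0.6], size=(trials, years))
b = rng.choice([1.6, 0.6], size=(trials, years))

all_in_a = a.prod(axis=1)            # 100% in one asset
split = ((a + b) / 2).prod(axis=1)   # rebalance to 50/50 every year

print(np.median(all_in_a))  # < 1: the typical all-in outcome shrinks
print(np.median(split))     # > 1: the diversified outcome compounds
```

The split has the same arithmetic EV per year, but because the assets are uncorrelated it halves the variance of the yearly multiplier, and that’s exactly what compounding rewards.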

EA

If this applies to investing money as a hedge fund, I think we need to deeply understand and apply these phenomena much more when thinking about maximising the impact of EA (EA’s overall utility function). Most EAs encourage ideological diversity because of uncertainty in their belief systems: but even more importantly, ideological diversity encourages uncorrelated bets, which smooth out the year-to-year variance, and that smoothness compounds much better. A small chance of going bust catches up with you eventually, even though it’s a small chance.

What does this mean for the EA movement as a whole?

As the portfolio of resources influenced by EA grows in size, if we want a smoothly compounding movement, I strongly believe we should assign more weight to how uncorrelated a bet is. Not just because it’s more interesting, nicer, and more humble, but because it’s simply the better strategy as you manage more and more resources. This really feels like it isn’t understood well enough!

Where do EA correlations come from?

An obvious way to look at correlation is by cause area. If two bets are in the same cause area, then they’re correlated.

But unfortunately, correlation is much more insidious than that.

If we’re all relying heavily on GiveWell to determine which charities are effective, then all of our effective giving is highly correlated with GiveWell. What if investing in certain for-profit companies were much more effective than donating to the most effective non-profit? For-profit organisations are self-sustaining, and so can reach massive scale (much bigger than AMF). This is an example of an insidious correlation: hidden until it’s pointed out to you. For example, providing VC funding for the first meat-substitute startup might be overlooked in the current GiveWell paradigm: it’s not a charity!

Final question for you

What insidious correlations do you think the EA movement suffers from today?

Let’s focus on whether a strong, nonobvious, and interesting correlation exists, rather than on whether the correlation itself is justified. But don’t share just any strong correlation: most of them are quite obvious. Try to share strong correlations which we are unaware of until they’re pointed out. That’s what makes them insidious, and extremely dangerous for the overall success of the movement.