Byrne Hobart & Dwarkesh Patel on hardcore believers, monasteries, and effective altruism

Dwarkesh Patel has one of the best podcasts around.

Here’s a lightly-edited extract from his recent conversation with Byrne Hobart.

I’ll share some reflections in the comments.

See also: Twitter version of this post.


Many belief systems have a way of segregating and limiting the impact of the most hardcore believers

Dwarkesh Patel

Sam Bankman-Fried was an effective altruist, and he was a strong proponent of risk-neutrality. We were talking many months ago and you made this really interesting comment that many belief systems have a way of segregating and limiting the impact of their most hardcore believers. So if you’re a Christian, then the people who take it the most seriously… you can just make them monks so they don’t cause that much damage to the rest of the world. Effective altruism doesn’t have that, so if you’re a hardcore risk-neutral utilitarian then you’re out in the world making billion-dollar crypto companies.

As a side note: a year ago I feel like the meme was “oh, look at these useless rationalists, they’re just reading blogs all day and they have all these, you know, mind palaces and whatever, and what good are they”, and now everybody’s like “oh, these risk-neutral utilitarians are gonna wager our entire civilization on these 51:49 schemes”.
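
A quick illustration of what the “51:49 schemes” line is pointing at (my own sketch, not part of the conversation; the numbers are made up): a 51:49 double-or-nothing bet has positive expected value, so a fully risk-neutral actor keeps taking it with everything on the line, even though repeating it makes ruin almost certain.

```python
# Illustrative sketch (my numbers, not from the conversation): repeatedly
# staking everything on a 51:49 double-or-nothing bet has growing expected
# value but a vanishing chance of never going bust along the way.
import random

def survival_probability(n_bets, p_win=0.51, trials=100_000):
    """Fraction of simulated bettors still solvent after staking everything
    on n_bets consecutive 51:49 double-or-nothing wagers."""
    survived = sum(
        all(random.random() < p_win for _ in range(n_bets))
        for _ in range(trials)
    )
    return survived / trials

# After 10 bets, expected wealth is (2 * 0.51) ** 10 ≈ 1.22x the starting stake,
# but the probability of never having gone bust is only 0.51 ** 10 ≈ 0.0012.
print(survival_probability(10))
```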

Byrne Hobart 1:13:17

Yeah I think it’s a useful pattern to observe because it goes back to the point that human nature just doesn’t change all that fast, to the extent that it ever does. And different civilizations have had this problem of “okay we’ve got some rules and we’ve got these beliefs” and they’re generally going to guide people to behave the right way and they’re going to guide people to be the right kind of normal person and not to be someone whose life is entirely defined by this incredibly strict rigid moral code—and by whatever you get if you take the premises of that code and just extrapolate them linearly as far as they can go.

I think that gets especially dangerous with really smart people, because you can give them a set of first principles and they can ask really interesting questions and come up with edge cases, and sometimes for some people… the first philosophy class where they encounter these edge cases, they just reject it as stupid. […] I think it is useful to keep in mind that the thought experiments are designed to be implausible, and they are supposed to be intuition pumps, but the more you get this complicated, highly abstract economy where an increasing share of it is software interacting with software… well, software doesn’t have that common-sense brake on behavior. And if you have this very composable economy, you can find cases where first-principles thinking actually is action-guiding and can guide you to extreme behaviors. Unfortunately those extreme behaviors are things like trading cryptocurrencies with lots and lots of leverage.

You know, it’s maybe merciful that the atoms-to-bits interface has not been fully completed while we still have time to deal with malevolent unfriendly EA. But yeah, it is a problem that you see a lot. And you see a lot of different societies, and they do tend to have some kind of safety valve: where if you really think that praying all day is the thing you should do, you should go do it somewhere else and you shouldn’t really be part of what we’re doing.

I think that’s healthy. I think in some cases it’s a temporary thing: you get it out of your system, and either you come back as this totally cynical person who doesn’t believe in any of it, or you come back as someone who is still deeply religious and is willing to integrate with society in a productive way. I think even within the monastic system you have different levels of engagement with the outside world and different levels of interaction.

So I think that’s something that EA should take seriously as an observation, as a design pattern for societies. You typically don’t want the people in charge to be the most fanatical people. And EA beliefs do tend to correlate with being a very effective shape-rotator or a very effective symbol-manipulator, and those skills are very lucrative and money does have some exchange with power. **So you basically have a system where very smart people can become very powerful. And if very smart people can also become very crazy then you tend to increase the correlation between power and craziness. And it doesn’t take very long clicking through Wikipedia articles on various leaders in world history to see that you ideally do not want your powerful people to be all that crazy or your crazy people to be all that powerful.**

As far as what to actually do about that… I think one model is that smart people should be advisors but not in an executive capacity. They shouldn’t be executives: you don’t want the smartest person in the organization also being the person who makes the final decisions, for various reasons. But you do want them around: you want the person making the final decisions to be reasonably smart, smart enough that they understand what the smart person is telling them and why that might be wrong, what the flaws might be.

So that might be one model: you want the EAs dispersed throughout the different organizations of the world, working with non-EAs and kind of nudging them in an EA-friendly direction, giving them helpful advice but not actually being the executives.

One possibility is that every other society got it wrong and the monastic tradition was stupid and it has been independently discovered by numerous stupid civilizations that have all been around for much longer than effective altruism. So that’s a possibility—you can’t discount it—but I think if you run the probabilities it’s probably not the case.

The leaders who “take ideas seriously” don’t necessarily have a great track record

Dwarkesh Patel 1:17:54

I mean, in general it’s always a little bit… the leaders who take ideas seriously don’t necessarily have a great track record, right?! Stalin apparently had a library of like 20,000 books. If you listen to Putin’s speech on Ukraine, it’s laden with all kinds of historical references. Obviously there are many ways you can disagree with it, but he’s a man of ideas, and do you want a man of ideas in charge of important institutions? It’s not clear.

Byrne Hobart 1:18:31

Well, the Founding Fathers: a lot of them were wordsmiths, and we basically have whole collections of anons flaming each other through pamphlets. So yeah, in one sense it was a nation of nerds. On the other hand, Washington didn’t, as far as I know, have huge contributions to that literary corpus. So maybe that is actually the model: you want the nerds, you want them to debate things, you want the debates to either reach interesting conclusions or at least tell you where the fault lines are, like what are the things nobody can actually come to a good agreement on. And then you want someone who is not quite that smart, not really into flame wars, to actually make the final call.

Dwarkesh Patel 1:19:07

Yeah, that’s a really good point. I mean like forget about Jefferson… imagine if Thomas Paine was made president of the United States. That would be very bad news...

Byrne Hobart 1:19:17

Yeah. It’s important to note that it’s better to have some level of fanaticism than no fanaticism. There’s an optimal amount of thymos and there’s an optimal place for it, but… I think from a totally cynical perspective, your most thymotic people, maybe they are at the front lines doing things and taking risks, but also not making the decisions about who goes to the front lines. Or, the other thing is making sure that the person deciding where the front lines are is saying the front line is “we keep France safe from the invaders” and not “the front line is Moscow, so get to Moscow and burn it down”.

There’s a recent Napoleon biography that I’m also in the middle of (it’s been a good year for reading about power-tripping people), and it points out that technically Napoleon had more countries declare war on him than he declared war on. So on average France was fighting defensive wars during the Napoleonic era. It’s just, you know, they kept defending farther and farther from France.

Dwarkesh Patel 1:20:24

Yeah defense requires some strange kinds of offense, often.

If we eyeball the track record of two kinds of investment thesis, “Big worldview” vs “micro-level observations”, the greats have some synthesis of the two, and it probably leans more towards big worldview

Dwarkesh Patel 1:20:30

Okay, so one meta question I’ve had is: when you’re trying to figure out which charities do the most good [...] there are two kinds of discourse. There’s one that’s like “we’ve got these few dozen RCTs, let’s see how we can extrapolate the data from these in the least theory-laden way”, and there’s another where it’s like “I’ve just read a shit ton of classics, I’m a thinking person, I think a lot about culture and philosophy, and here’s my big intricate worldview about how these things are going to shake out”. And investing is an interesting realm because there are both kinds of people there and you can see the track records over long periods of time. So having seen this track record, is there any indication to you whether the first sort of microeconomic approach actually leads to better concrete results than somebody like Thiel or Soros, who is motivated by a sort of intricate worldview based on philosophy or something? Which one actually makes better concrete predictions that are actionable?

Byrne Hobart 1:21:41

So I think typically the greats have some synthesis of the two and it probably leans more towards big worldview than towards micro-level observations.

One way to divide things is to say that the quants are into all these micro-level observations: you could be a quant who does not actually know what the numbers mean. [You’re] just looking for patterns and finding them, and people have done it that way, but it seems like quantitative strategies get more successful when you find some anomaly and then you find an explanation for the anomaly. And the explanation might be some psychological factor you’ve identified. Maybe you find studies indicating that loss-aversion is real and this affects how fast stocks go down versus how fast they should go down, and that gives you a trading strategy. Or maybe it’s something more mundane, like maybe there is some large investor who has some policy like “we rebalance between stocks and bonds on the first day of every quarter”, and if you know that the investors who have that policy control X trillion dollars of assets, and you know how they’ll rebalance, then at the end of every quarter you know money is sloshing between stocks and bonds, and that’s predictable. A lot of the quantitative strategies that have those theories behind them tend to blow up more rarely, because they sort of know why the strategy works and then they know why it’ll stop working.
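
To make the rebalancing example concrete, here’s a minimal back-of-the-envelope sketch (mine, not from the conversation; all figures are made up): if you know roughly how much money follows a fixed stock/bond target and how the two assets moved over the quarter, the quarter-end flow is mechanical and therefore predictable.

```python
# Illustrative sketch (my numbers, not from the conversation): estimating the
# predictable quarter-end flow from investors who rebalance to a fixed
# stock/bond split.

def rebalance_flow(total_assets, target_stock_weight, stock_return, bond_return):
    """Dollars that must move out of stocks (positive) or into stocks (negative)
    to restore the target weights after a quarter of market drift."""
    stocks = total_assets * target_stock_weight * (1 + stock_return)
    bonds = total_assets * (1 - target_stock_weight) * (1 + bond_return)
    drifted_total = stocks + bonds
    return stocks - drifted_total * target_stock_weight

# Suppose 60/40 investors following this policy control $2 trillion, and this
# quarter stocks returned +8% while bonds returned -1%.
flow = rebalance_flow(2_000_000_000_000, 0.60, 0.08, -0.01)
print(f"Predictable stock-to-bond flow at quarter end: ${flow / 1e9:.0f}B")  # ≈ $43B
```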

[...]

On the other end, if you have these just totally theory-driven views… usually what kills totally abstract theory-driven views is time. Because a lot of the best abstract theories are ones where you look at some part of the economy and say “this is obviously unsustainable”, and then the problem is you can say that at any point during its arc, and it can look sustainable to other people for a very long time.