Flimsy Pet Theories, Enormous Initiatives

Rigor: Quickly written, rant-style. Comments are appreciated. This post doesn’t get into theories of why this phenomenon exists or the best remedies for it.
Facebook post here

One common pattern in ambitious projects is that they rest on a core of decisive yet small or flimsy pet theories, and there’s typically little questioning of or research into those theories. This is particularly true of the moral missions of large projects.

A group could easily spend $100 billion on an initiative before spending $100K on actually red-teaming the key ideas.[1][2]

I had one professor in college who was a literal rocket scientist. He clearly was incredibly intelligent and had spent many years mastering his craft.

I had a conversation with him at his office, and he explained that he kept getting calls from friends in finance offering 3x his salary. But he declined them because he believed space settlements were really important for humanity’s long-term survival.

I tried asking for his opinion on existential threats, and which specific scenarios these space settlements would help with. It was pretty clear he had barely thought about these. From what I could tell, he had probably spent less than 10 hours seriously figuring out whether space settlements would actually be more valuable to humanity than other alternatives. But he was spending essentially his entire career, at perhaps a significant sacrifice, trying to make them.

I’ve since worked with engineers at other altruistic organizations, and their stories were often very similar. They had some hunch that thing X was pretty nice, and then they’d dedicate many years of highly intelligent work to pursuing it.

This is a big deal for individual careers, but it’s often a bigger one for massive projects.

Take SpaceX, Blue Origin, Neuralink, and OpenAI. Each of these started with a really flimsy and incredibly speculative moral case. Now each is probably worth at least $10 billion, some much more. They all have very large groups of brilliant engineers and scientists. Yet none of them seems to have researchers seriously analyzing the mission to make sure it actually makes sense. From what I can tell, each started with a mission that effectively came from a pet idea of the founder (in fairness, Elon Musk was involved in 3 of these), and then billions of dollars were spent executing it.

And those are the nicer examples. I’m pretty sure Mark Zuckerberg still thinks Facebook is a boon to humanity, based on his speculation on the value of “connecting the planet”.

“Founded in 2004, Facebook’s mission is to give people the power to build community and bring the world closer together.”[3]

Now, Facebook/Meta has over 60,000 employees and a market cap of around $1 trillion. Do they have at least 2 full-time-employee equivalents (~0.003% of the company) doing real cost-benefit analyses of whether Facebook is actually expected to achieve its mission statement? They had to get serious about their business risks, at least once, in their SEC filings. Surely they could do the same for their moral risks if they wanted to.

Historically, big philanthropy has also been fairly abysmal here. My impression is that Andrew Carnegie spent very little, if anything, to figure out whether libraries were really the best use of his money before going ahead and funding 3,000 libraries.

I’m also fairly confident that the donors behind new Stanford and Harvard programs haven’t done even the simplest reasonable analyses. At the very least, they could pay a competent person $100,000 to do an analysis before spending $50 million in donations. But instead, they have some quick hunch that “leadership in government at Harvard” deserves $50 million, and they donate $50 million.[4]

Politics is even bigger, and politics is (even) worse. I rarely see political groups seriously red-teaming their own policies before signing them into law, after which the impacts can last for hundreds of years.

I’m not sure why society puts up with all of this. When someone in power basically says something like,

“I have this weird hunch that X is true, and therefore I’m dedicating major resources to it. But I haven’t bothered to have anyone spend even a small amount of time analyzing these points, and I’m never going to do so,”

it really should be taken with about the same seriousness as,

“I’m dedicating major resources to X because of the Flying Spaghetti Monster.”

But as of now, this is perfectly normal.

I have thoughts on why this is (and I’m curious to hear others’), and of course, I’m interested in the best ways of fixing it, but that’s probably enough for one post.

Clarifications / Responses

Jason Crawford wrote,

I worry that this approach would kill the spark of vision/drive that makes great things happen.

I think major tech initiatives can be great and high-EV; I just think that eventually, someone should make sure they are the best use (or at least a decent use) of their resources. Maybe not when there are 3 people working on them, but at the very least when you get to a thousand.

A few people on the Facebook post wrote that the reason for these sorts of moves is that they’re not really altruistic.

Sahar wrote,

I’m not sure most people who donate to Harvard do it because they think it’s the most effective. I’d imagine they do it for the prestige and social acclaim and positioning before anything else

Eric wrote,

I think the most obvious explanation is that people are doing the things they want/like to do, and then their brain’s PR department puts a shiny altruistic motive sheen on it. Presumably your physics professor just really enjoyed doing physics and wasn’t interested in doing finance, and then told himself an easily believable story about why he chose physics. Society puts up with it because we let people do what they want with their time/effort as long as it’s not harmful, and we discount people’s self-proclaimed altruistic motives anyway. Even if people do things for obviously irrational reasons or reasons that everyone else disagrees with, as long as it seems net positive people are fine with it (e.g. the Bahai gardens in Haifa Israel, or the Shen Yun performances by Falun Gong).

I largely agree that these are reasonable explanations. But to be clear, I don’t think explanations like these should make this phenomenon socially acceptable. If we find patterns of people lying to or deluding themselves for interesting reasons, we should still try to correct them.


[1] To be clear, there’s often red-teaming of the technical challenges, but not the moral questions.

[2] SpaceX has a valuation of just over $100 billion, and I haven’t seen a decent red-team from them on the viability of Mars colonies vs. global bunkers. If you have links, please send them!

[3] https://investor.fb.com/resources/default.aspx

[4] This is based on some examples I remember hearing about, but I don’t have them in front of me.