[Unendorsed] — An update in favor of trying to make tens of billions of dollars

[Edit: Since this post was written, it has come out that SBF has committed large-scale fraud, though the details are still unclear. In the process, he’s also lost the large majority of his net worth.

Hence I no longer endorse the post as it is.

The update that making tens of billions of dollars was perhaps easier than it seemed is no longer as valid; in particular, there are two important new updates:

  • Most importantly, it now seems likely that being willing to break deontological constraints was necessary to achieve this level of wealth. Even under a utilitarian worldview, this means there were significant costs that the analysis below does not take into account. I have not tried to quantify these costs, but it currently seems likely to me that they were very significant relative to the expected benefits, plausibly larger. I think rule- and norm-following are very valuable (honesty particularly so) and ends-justify-the-means type reasoning is often misguided.

  • Furthermore, this post was premised on the observation that, ex post, EA startup founders had done very well. This is much less true now than it was when the post was published, and hence the associated updates should be discounted accordingly. The strength of this second point depends on where we are on the spectrum between the following two worlds:

    • It all went really well until everything blew up in a risky bet a few days/months before the collapse. In a naive utilitarian sense (i.e. (wrongly) ignoring the costs of norm- and rule-breaking mentioned above) that bet may have even been positive EV ex-ante.

    • SBF never really was as rich as Forbes and co. assumed. FTX and Alameda weren’t as profitable as was commonly believed. Perhaps they were always going to blow up, e.g. due to very poor accounting… It may not have been possible to sell much larger fractions of the companies at their valuations at the time, e.g. because that would have entailed more careful due diligence, which they wouldn’t have passed.

Either way, it seems like SBF’s net worth is now close to zero relative to the numbers discussed below.

As of 25th Nov 2022, details are still unclear.]

Key take-aways

  • Sam Bankman-Fried was able to make over $20 billion in just 4 years.

  • By staying in quant trading, he’d have made several orders of magnitude less money over that period.

  • Presumably, (way?) fewer than 100 EAs tried something similar.

  • If that’s our base rate, then maybe more EAs should try to found multi-billion dollar companies.

Note on this post:

This is my first EA Forum post. One stumbling block in the past has been setting very high expectations for potential post ideas and then never writing them up at all. To counteract that, here I’ve tried writing up my thoughts on one specific thing without aiming for as broad a scope as might be ideal.

Who am I talking to?

This post focuses on the expected value of founding a start-up. Of course, there are many reasons not to found a start-up other than beliefs about its expected value. To start with, it requires high levels of sacrifice, and the most likely outcome is probably failure. Furthermore, the qualities required to make a good founder are quite rare, and if you don’t have them, that’s no reason to feel bad! In fact, there are probably lots of other ways you can do a ton of good!

So here I will focus on the people who think they might be a good fit and who would consider founding a start-up if they were convinced it was the highest-impact thing for them to do.

This is only one way to look at this

There are several approaches one could take to this question, and this is only one of them. That’s why the post is titled “An update in favor of trying to …” and not “You should probably try to …” (although that could still be true!). The most important factor is probably your inside view. But there are also other outside views! See these two posts from Brian Tomasik and Applied Divinity Studies, for instance[1]. Among other things, they both consider the success rate of companies that got into Y Combinator. A paragraph I particularly liked from the latter post was this:

I understand that the odds of becoming a billionaire are low, but it doesn’t matter if you only consider the conditional probabilities. What’s the cost/benefit of taking 6 months off your day job to work on a startup? Conditional on being successful there, what’s the cost/benefit of trying very hard to seek out venture capital? Given that you’ve raised money, what’s the cost/benefit of trying to make a billion dollars? The bet sounds insane to begin with, but at each step you’re taking on a very reasonable level of risk.
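To make the quoted framing concrete, here’s a toy calculation of my own (the stage-wise probabilities are purely made up for illustration; they come from neither post):

```python
# Toy illustration of the quoted "conditional probabilities" framing.
# The stage-wise probabilities below are invented for illustration only.
stages = [
    ("side project gains traction", 0.20),
    ("successfully raise VC money", 0.25),
    ("grow to a $1B+ valuation",    0.05),
]

p_so_far = 1.0
for name, p_conditional in stages:
    p_so_far *= p_conditional
    print(f"P({name} | prior stages) = {p_conditional:.0%}; "
          f"cumulative = {p_so_far:.2%}")

# Each individual bet looks reasonable, yet the end-to-end odds are tiny
# (0.25% here) -- the quote's point: evaluate each stage on its own terms.
```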

The argument

Sam Bankman-Fried is an example of a person motivated by EA to earn as much as he could in order to give it all away. It’s going pretty well. His net worth is estimated to be around $22.5 billion and he’s the world’s richest person under thirty. It took him about 4 years (!) to build most of that wealth, from founding Alameda Research in October 2017 to today.

Lincoln Quirk, co-founder of Wave, which is now valued at $1.7B, is also worth mentioning here. He too has been interested in EA for a long time. I think for the purposes of this post though, it’s fine to focus on Sam. Reasoning in this footnote[2].

Edit 18/10/2021: Stefan Schubert points out in the comments that it’s still worth considering Wave as a data point confirming that Sam Bankman-Fried wasn’t a total one-off. I agree!

Can we conclude anything from Sam’s story? I think so! Specifically, I want to lay out an argument that on current margins, more people should take bets to make tens of billions of dollars. That is, if you are in a reference class similar to Sam’s before founding Alameda Research and you are currently considering founding a start-up, then this post aims to cause you to update in favor of that.

Let’s first ask how many EAs tried to found companies that would potentially be worth billions. Let’s call this number N.

David Moss estimates there are about 2300 highly engaged EAs. A little under 40% of them are earning to give. How many of those are trying to found multi-billion dollar companies? I don’t know. But founding a multi-billion dollar company probably demands sacrifices larger than even what most “highly engaged EAs” are making. Based purely on personal impressions, even 10% of that 40% would seem surprisingly high.[3] So that would give us a not-very-confident upper bound of N = 10% * 40% * 2300 = 92; let’s make it 100. Keep in mind, though, that for all I know this number could be as low as 2, just Sam and Lincoln. Another person who might count is Sam’s cofounder Gary Wang. Also maybe Emerson Spartz? I expect I’m missing a few.

So say N=100, and for the time being let’s naively assume these people were all exactly identical to Sam prior to founding Alameda Research, and then the dice were thrown and everyone except real Sam failed. Then the expected value of setting out to found a multi-billion dollar company for this class of people would roughly be $22.5 billion / 100 = $225 million. That’s huge! And if N were much smaller, say 10, then we get $22.5 billion / 10 = $2.25 billion!

How much could he have donated by staying at Jane Street? I don’t know, but optimistically something like $10 million a year over those same 4 years? (He seems to be a very good trader, and this 80,000 Hours podcast episode suggests earnings can go that high.) That would make $40 million. The conservative estimate above is over 5 times that, and the aggressive one over 50 times. For any N below about 560, founding a start-up was higher expected value. And keep in mind that I’m using a very optimistic estimate for his earnings at Jane Street; $10 million total might be more realistic.
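For concreteness, here is that back-of-the-envelope arithmetic as a minimal Python sketch. The numbers are taken straight from the post; nothing new is assumed:

```python
# Back-of-the-envelope EV calculation from the post.
n_upper_bound = 0.10 * 0.40 * 2300   # ~92 EAs trying; rounded up to 100
sam_wealth = 22.5e9                  # Sam's estimated net worth in USD
jane_street = 4 * 10e6               # optimistic 4-year counterfactual earnings

for n in (100, 10):                  # upper-bound and aggressive guesses for N
    ev = sam_wealth / n              # naive per-founder EV if all N were identical
    print(f"N = {n}: EV = ${ev / 1e6:,.0f}M "
          f"(vs. Jane Street ${jane_street / 1e6:,.0f}M)")

# N below which founding beats the optimistic Jane Street counterfactual:
print(f"Break-even N = {sam_wealth / jane_street:.1f}")
```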

A lot of bad assumptions have gone into this. Other EAs are in fact not identical copies of pre-success Sam. But the difference is so big (5x to 200x) that I think we’re still on to something.

What do we make of those numbers? Here’s one way to think about it.

Think of people as being on a spectrum from “very likely to succeed at building a billion-dollar company” to “very unlikely to succeed at this”. If you’re sufficiently high on that continuum, you should do it.

So now there are two scenarios.

One is that it was obvious a priori that Sam was very high on this spectrum, but that there aren’t many other EAs anywhere near as high. In this world, everything is right the way it is. A marginal person going into entrepreneurship wouldn’t have anywhere near even Sam’s a priori expected gains.

The other scenario is that there are actually a bunch of EAs who are within a factor of ten of Sam in terms of a priori likelihood of succeeding. And they just don’t go for it. If we’re in this world, those marginal people are currently making a huge mistake! Their expected impact might not be as high as Sam’s expected impact was a priori, but it might still be their best option. Many more should try entrepreneurship.

So which of these worlds do we live in? I don’t know, but my guess would be that it’s close enough to the second for it to be true that on current margins, more people should try start-up entrepreneurship. The difference above was pretty big.

What does it mean to be in the same reference class?

I don’t know all the details, but from what I’ve read and heard in interviews, here are a few relevant points about Sam:

  • He studied Physics at MIT.

  • He decided to do earning to give and was able to get hired by Jane Street.

  • He liked his time there.

  • What motivated him to leave Jane Street was that he felt he could probably find higher expected value options.

  • At least from founding his own companies onward, he’s famously demonstrated a very strong work ethic. He claims to do almost nothing but work and gets most of his sleep on a beanbag at the office, so his colleagues can wake him up if they need something and so he can stay focused. He’s also talked about having experimented with stimulant medications like Adderall and modafinil to improve productivity. (Source: last few minutes of this interview.) (I don’t want to endorse these meds here. I don’t have a strong opinion either way yet, but it seems relevant. I’m also not saying that you should feel bad about yourself if you’re not willing to work as hard as Sam; almost no one is, including the vast majority of EAs. But it would be dishonest to leave that part out here.)

  • He hadn’t done anything very entrepreneurial prior to founding Alameda at around age 25 (with one small exception[4]).

  • He’s motivated by EA; what drives him is to make money so he can give it all away.

  • This seems to make him more willing to take high-variance bets than other entrepreneurs. (E.g. about founding FTX he has said that at the time they thought it was high expected value, but also 80% likely to fail).

  • He doesn’t try to do good directly through his companies, saying it’s likely better to optimize either income or direct impact, but not both. However, he says it does matter to him that the direct impact of his work be net positive, even if small.

  • Edit 18/10/2021: Leon Lang adds in the comments that his parents were both Stanford professors.

Importantly, this is not a checklist with boxes you have to tick! Rather, it’s just meant to give you a feel for how you could have looked at this person prior to knowing they would turn out successful. Maybe listening to a bunch of interviews could also help with that. Here’s my top recommendation, a recent fireside chat he gave at Stanford EA.

And if, upon hearing this without knowing how it turned out, you’d estimate odds of success within a factor of 10 of what you would guess for a person more like you, you should go for it! At least that’s what this argument suggests. The point is somewhat weakened by the fact that Sam obviously knew a lot more about himself than just the list above.

One thing I want to highlight, which presumably puts him in a reference class closer to yours, is that he set out on this journey with the explicit goal of earning to give. I think that’s a notable difference, and it motivated me to write this post. With other entrepreneurs, who think about donations only after the fact, it’s harder to find an upper bound on how many tried something similar.

You’d also expect those other entrepreneurs to be more risk-averse than earning-to-give founders, since altruistic returns to money are near-linear on relevant scales (at least according to some worldviews), while selfish returns are sharply diminishing (perhaps logarithmic?).

Edit 18/10/2021: Tsunayoshi points out in the comments that venture capitalists can sometimes exert pressure such that “startup founders are often forced to aim for 1B+ companies because they lost control of the board, even if they themselves would prefer the higher chances of success with a <1B company”. Read his full comment below. This weakens the argument above, though I expect a significant amount of risk-aversion to remain.

Why did I put “tens of” billions in the title? Isn’t one billion ambitious enough?

If you manage to make a billion dollars and give it all away, amazing!

Ex ante, I think it’s worth stressing that returns to money are near-linear according to many worldviews. Another way to say this is that altruistic returns to more money diminish only quite slowly, and you should be nearly risk-neutral on relevant scales[5].

If you buy that, then it seems to me that you should be shooting for the stars, since I would guess your chances of making ten billion dollars are more than 10% of your chances of making one billion if you’re actually trying. I’m not sure about this though.

By contrast, selfish returns to money are sharply diminishing. That’s why risk-neutrality often feels counter-intuitive! It’s strange to value making $100M one hundred times less than making $10B, but maybe you should. If you think so, are you actually acting accordingly?
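As a toy illustration of this contrast (the 15% win probability and the baseline wealth below are my own hypothetical numbers, not from the post), here is how a linear and a logarithmic utility function rank the same gamble:

```python
import math

# Hypothetical gamble: a sure $1B vs. a 15% shot at $10B (and $0 otherwise).
safe_amount = 1e9
big_amount, p_win = 10e9, 0.15
baseline = 1e5   # assumed starting wealth, so log utility is well-defined

utilities = {
    "linear (~ altruistic returns)": lambda x: x,
    "log (~ selfish returns)":       lambda x: math.log(baseline + x),
}

for name, u in utilities.items():
    eu_safe = u(safe_amount)
    eu_risky = p_win * u(big_amount) + (1 - p_win) * u(0)
    choice = "risky" if eu_risky > eu_safe else "safe"
    print(f"{name}: prefers the {choice} option")
# Linear utility takes the gamble (EV $1.5B > $1B); log utility refuses it.
```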

I should emphasize, though, that how linear this stuff really is depends a lot on your worldview and other empirical beliefs, and I can definitely understand if people feel returns diminish quickly enough for this argument not to go through. In that case, try to make $100M maybe?

Possible objections

Am I just trying to make the reference class as small as possible to then conclude high odds of success? Doesn’t that always apply? E.g. wouldn’t anyone “sufficiently like Elon Musk” have huge odds of success too?

There is something to this. Picking an appropriate reference class is partly an art.

I think “all EAs trying to become absurdly rich” is among the most relevant reference classes for an EA to look at even before hearing about Sam (here’s one example where Applied Divinity Studies kind of does that, though it’s about “rationalists”, not EAs). Then you learn about Sam and update in favor.

Of course, there’s also value in looking at other cases and other reference classes. I already mentioned these two posts from Brian Tomasik and Applied Divinity Studies.

Isn’t the inside view way more important than the outside view for questions like these? E.g. Ben Kuhn has made arguments along these lines.

I broadly agree with that, actually. Still, I think there’s value in looking at outside views.

First, if on the inside view you’re on the fence about whether to found a start-up, this outside view might be just enough to push you toward going for it.

But more importantly, whether you stumble across opportunities or ideas to found a start-up isn’t completely random. It’s good to know whether you should be willing to invest time into coming up with such ideas, doing some initial research or prototyping, meeting potential co-founders etc. Maybe to Ben the answer is just “obviously yes” and beyond that the outside view is worthless. To me at least this isn’t obvious a priori.

Isn’t this still subject to all sorts of selection effects?

I’ve tried to account for the most obvious selection effect by asking how many EAs tried something similar and failed.

There are always some selection effects remaining, though, and it’s worth stating again that you shouldn’t overfit to Sam’s particular story.

For example, one selection effect that could be going on is that of all the groups I’m a part of, I chose the one that contained Sam Bankman-Fried (Effective Altruism) and ignored all the other ones where I didn’t know about similar success stories. It does seem to me though that EA is the one group that I’m most “a part of”, so it’s not that arbitrary.

The mere fact that Sam is the world’s richest person under thirty also suggests that maybe this is much harder than the analysis above made it seem. It is true that a subset of EAs are among a rare class of people who value money near-linearly on relevant scales. Hence we should expect them to be disproportionately likely to achieve such outlier-level success. But this fact should still make us suspicious.

There are diminishing returns to more money. Doing this now is actually less valuable than it was in Sam’s case.

That’s true! How big a factor it is depends a lot on what your favorite cause area and/or intervention is and how much money it can absorb. Of course, it also matters whether you’ll be donating to the same things as Sam!

It seems he hasn’t yet decided where the bulk of the money should go. But he has said that he’s basically convinced that most of the expected value lies in the future. Somewhat unusually for EAs, he considers political donations a promising opportunity.

There’s more discussion of diminishing returns above.

Hasn’t earning to give been un-recommended?

My impression is that it’s complicated and people disagree about this. It is true that 80,000 Hours has de-emphasized this path, though I think they still recommend it for certain people. Notably for this post, Sam Bankman-Fried has expressed the impression that it is now under-emphasized. Make up your own mind about this. Certainly there’s some amount of money for which most people would agree it’s better than whatever direct work you would have done. Again, this depends a lot on your favorite cause area.

So you’re basing your whole analysis on this one case?!

This isn’t supposed to be a full-blown analysis of the EV of founding a start-up. It’s just meant to provide one argument that you may not have considered before.

Also, if it’s true that impact follows a fat-tailed, top-heavy distribution, a lot of the expected value is in outlier cases. Certainly, this seems to be the case when it comes to donations. It’s less naive than it may seem, then, to focus your analysis on these outliers.
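A quick simulation (my own sketch, with an assumed tail exponent, not anything from the post) shows why: with Pareto-distributed outcomes, the top 1% of draws can account for well over half of the total.

```python
import numpy as np

# Toy simulation: how top-heavy is a fat-tailed (Pareto) distribution?
# The tail exponent alpha = 1.1 is an assumption chosen for illustration.
rng = np.random.default_rng(seed=0)
alpha, n = 1.1, 1_000_000
outcomes = rng.pareto(alpha, n) + 1   # classic Pareto samples with x_min = 1

outcomes.sort()                       # ascending; the fat tail is at the end
top_share = outcomes[-n // 100:].sum() / outcomes.sum()
print(f"Share of total value in the top 1% of outcomes: {top_share:.0%}")
```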


I’m very uncertain about how valuable it is for me to write such posts relative to other things I could be doing. If this caused you to change your mind, it would be great if you let me know in the comments below. I don’t expect this to happen, but if it contributed to you changing career plans, absolutely let me know!


Thanks a lot to Aaron Gertler for reading a draft of this post and providing valuable feedback! See this generous offer.


  1. ↩︎

    Readers may also find this recent forum post on joining an early stage start-up interesting.

  2. ↩︎

    I don’t know Lincoln’s net worth, but it’s probably not much more than 1% of Sam’s? (Sam owns an unusually large share of FTX). Lincoln has taken the Founders Pledge, though I don’t know what percentage he’s pledged to give (minimum 5%, but possibly up to 100%). That’s still a lot of money donated! But it’s also small enough to not change my calculation much, so I just left it out. I think this just illustrates how incredibly top-heavy these things are. Almost all the expected value is in the very very best outcomes.

    Furthermore, I suspect most of Lincoln’s impact flows through the amazing direct work Wave is doing! (At least that’s probably true from a global health and development worldview. I’m not sure about this though, comments welcome.) Ben Kuhn’s post Why and how to start a for-profit company serving emerging markets may be of interest. By contrast, here I focus on “pure” earning to give, without regard to direct impact other than it not causing harm. I’m not claiming that one is better than the other. (I think this argument would depend a lot on your favorite cause area, among other things.)

    Edit 18/10/2021: In his comment on this post, Lincoln mentioned this: “my pledge is 10%, although I expect more like 50-75% to go to useful world-improving things but don’t want to pledge it because then I’m constrained by what other people think is effective.”

  3. ↩︎

    If there exists data on this that I’m unaware of, I would be grateful for any pointers! And if your personal impression is that my guess is off, also let me know.

  4. ↩︎

    At some point in high school, “he organized and largely wrote a puzzle hunt in which teams from local schools could compete”. Source.

  5. ↩︎

    This isn’t necessarily true. E.g. maybe you think AI risk is the only thing that matters, theoretical work on this is the only way to make progress, and marginal researchers wouldn’t be able to add much, since the best people are already getting funded (I’m not saying this is true!). Then returns to money are quickly diminishing and you should be quite risk-averse. If on the other hand you think GiveWell type charities are the way to go, then there’s room for a lot more money and returns diminish much more slowly. Biorisk also seems to be able to absorb lots of money. Further discussion is beyond the scope of this post, but it’s worth thinking about this if you’re earning to give.