SBF, extreme risk-taking, expected value, and effective altruism
NOTE: I have some indirect associations with SBF and his companies,
though probably less so than many of the others who’ve been posting
and commenting on the forum. I don’t expect anything I write here to
meaningfully affect how things play out in the future for me, so I
don’t think this creates a conflict of interest, but feel free to
discount what I say.
NOTE 2: I’m publishing this post without having spent the level of
effort polishing and refining it that I normally try to spend. This
is due to the time-sensitive nature of the subject matter and because
I expect to get more value from being corrected in the comments on
the post than from refining the post myself. If errors are pointed
out, I will try to correct them, but may not always be able to make
timely corrections, so if you’re reading the post, please also check
the comments to check for flaws identified by comments.
NOTE 3: Byrne Hobart’s post Money, Credit, Trust, and
FTX makes fairly
overlapping points albeit with different emphases and a lot more
elaboration (and less focus on the effective altruism angle).
The collapse of Sam Bankman-Fried (SBF) and his companies FTX and
Alameda Research is the topic du jour on the Effective Altruism Forum,
and there have been several posts on the
Forum
discussing what happened and what we can learn from it. The post FTX
FAQ
provides a good summary of what we know as of the time I’m writing
this post. I’m also funding work on a timeline of FTX
collapse
(still a work in progress, but with enough coverage already to be useful if
you are starting with very little knowledge).
Based on information so far, fraud and deception on the part of SBF
(and/or others in FTX and/or Alameda Research) likely happened and
were likely key to the way things played out and the extent of damage
caused. The trigger seems to have been the big loan that FTX provided to Alameda Research to bail it out, using customer funds for the purpose. If FTX hadn’t bailed out Alameda, it’s
quite likely that the spectacular death of FTX we saw (with depositors
losing all their money as well) wouldn’t have happened. But it’s also
plausible that without the loan, the situation with Alameda Research
was dire enough that Alameda Research, and then FTX, would have died
due to the lack of funds. Hopefully that would have been a more
graceful death with less pain to depositors. That is a very important
difference. Nonetheless, I suspect that by the time of the bailout, we
were already at a kind of endgame.
In this post, I try to step back a bit from the endgame, and even get away from the specifics of FTX and Alameda Research (which I know very little about) and in fact even from the specifics of SBF’s business practices (where again I know very little). Rather, I talk about SBF’s overall philosophy around risk and expected value, as he has articulated it himself and as it has been approvingly amplified by several EA websites and groups. I think the philosophy was key to the
overall way things played out. And I also discuss the relationship
between the philosophy and the ideas of effective altruism, both in
the abstract and as specifically championed by many leaders in
effective altruism (including the team at 80,000 Hours). My goal is to
encourage people to reassess the philosophy and make appropriate
updates.
I make two claims:
Claim 1: SBF engages in extreme risk-taking that is a crude approximation to the idea of expected value maximization, as he perceives it.
Claim 2: At least part of the motivation for SBF’s risk-taking comes
from ideas in effective altruism, and in particular specific points
made by EA leaders including people affiliated with 80,000
Hours. While personality probably accounts for a lot of SBF’s
decisions, the role of EA ideas as a catalyst cannot be dismissed
based on the evidence.
Here are a few things I am not claiming (some of these are discussed
in a little more detail toward the end of the post, though I don’t
elaborate extensively on them):
I’m not claiming that the EA philosophy and community create more incentives for deception and dishonesty than most groups do; I actually think the opposite is true. Rather, I’m focusing on the encouragement that the EA philosophy and community provide for risk-taking, and on the expected value framework that, naively applied, encourages such risk-taking.
I’m not claiming that the basic arguments about risk-taking,
expected value, and effective altruism are completely wrong or that
events with SBF have fully invalidated them. I think the basic logic
of many of these is still fairly sound, but also that the events
with SBF should lead us to update in the other direction and to
add more nuance to our thinking about these topics. I try to
articulate this a little bit in this post, but nowhere near enough,
and further articulation would require a separate post (perhaps by
another person).
I’m not claiming that the downside of making pro-risk-taking arguments should have been fully obvious in advance—the collapse of SBF / FTX is new information, and whatever model we have of the world after it should be at least somewhat different from the model we had of the world prior to it. I do think that at least some aspects of these points should have been given more attention in the past, even with the more limited information available then.
And to be clear, these are some things I’m not really covering in
this post:
I’m not covering the fraught topic of whether EA leaders or others should have been able to predict what specifically happened with SBF and FTX, or whether they had prior indications of his character. That topic has been discussed in several other threads.
Claim 1 justification: SBF engages in extreme risk-taking
I won’t really provide much direct justification for Claim 1; I’ll just note in passing that a lot of commentary, both on the EA Forum (such as Kerry Vaughan’s summary) and in external press coverage (see for instance Axios), supports it. The justification for Claim 2 provided below is more detailed and also implicitly provides further justification for Claim 1.
See also Byrne Hobart’s post Money, Credit, Trust, and
FTX, which goes into
some of the math involving expected value and the historical context
of FTX and Alameda Research.
Claim 2 justification: At least part of the motivation for SBF’s extreme risk-taking comes from effective altruist ideas
SBF’s articulation in a fireside chat with Stanford EA
In a fireside chat with Stanford EA, SBF gives advice to students based on his own experience. At first listening, everything he says sounds quite reasonable (in general, SBF’s public persona feels very reasonable—something that falsely causes people to feel reassured!).
Here is the transcript from YouTube, lightly edited by me for sentencification and removal of “um”s and “uh”s; you can watch the
video or read the original transcript on YouTube by clicking “Show
transcript” in the options under ”...” below the video. I have
highlighted the portions most relevant to my points, but have not
elided any other stuff within those segments of the video.
Moderator (51:52): I think you basically answered this already, but what concrete advice do you have for students for how they should be spending their time, how to be more ambitious, and how to better optimize for their goals while figuring out what their goals might be or ought to be? Maybe what would you do differently if you were a student or a first-year now at MIT or Stanford? And, yeah, any last words that you’d want to leave the audience with before wrapping up?
SBF (52:20): I’d go back to 2010, drop out, and buy a lot of Bitcoin. But seriously, I think there’s something a little bit true there, although that’s not exactly what I think I could have predicted, which is that in 2012 I had a friend at MIT who was sort of bored one day. I think some guy (I don’t remember who) gave one free bitcoin to every MIT student around then; I think it was like five dollars at the time or something. Anyway, one of my friends, Gary, got bored and built some Bitcoin arbitrage bots for the nascent crypto exchanges that were around back in the early 2010s and made some money doing it; not a lot, he sort of saturated the market. There wasn’t a lot of volume, but that was pretty cool. I never really checked it out that much. He was kind of tempted to [continue, but] got distracted and stopped doing this; it wasn’t a big enough field to make much, and then neither of us thought about crypto again for five years. And then I called him up and we founded Alameda together. Certainly in retrospect it’s hard to argue that it wouldn’t have been correct for us to just drop everything and do that back in 2012.
SBF contd (53:45): And obviously there’s a lot of stupid retroactive, retrospective thinking there, where we couldn’t have known what would happen, but I actually think at the time we should have done it. We shouldn’t have been able to predict how well it would have gone, but diving into something that seems exciting and giving it your all and seeing how far it will go—I think that’s just an incredibly good strategy in life, and it’s way better than sticking around for another few years not doing much or just following the status quo. If you see a great opportunity, I sort of think take it, whatever it is. If it seems way better than whatever else you’d be doing by some sort of weird expected value calculation that seems like it can’t possibly be right but kind of feels cool, I think it is probably right in expectation. And yeah, it’ll probably fail; that’s okay, most things do; you try another thing. That could take a lot of different forms: that could be some earning-to-give startup, that could be jumping into some EA organization, that could be taking charge of running Stanford EA, that could be diving into some biorisk research or some other wacky thing. I don’t know, but there are a lot of awesome things to do out there, so, you know, try them and see how it goes! Try things that seem like they’ll either be the right thing for you to do for the time being and teach you a lot, or where the upside if they go extremely well is extremely high; and if the thing you’re doing is neither of those, keep your eyes open for something else!
And earlier in the talk, SBF says:
SBF (44:05): I think there’s been a lot of very, very bad messaging over the years on that. There’s a lot of messaging that it’s all a funding bottleneck, and then a pretty sharp turn towards it’s all a talent bottleneck. I think they’re both wrong; my sense is that both matter. As I’ve sort of spent more time trying to find things to fund, I’ve found more things to fund, and I don’t currently feel strongly underconstrained on funding and overconstrained on talent. I think both are very much limiting factors. And there are ways to really scale up the amount of good that you can give.
So what are some ways to do it? First of all, I think it’s sort of a little awkward, but it’s just true, and probably not worth trying to ignore, that on the funding side it’s probably going to be very top-heavy. It’s a property of how the world works today that the distribution of how much you can make over various things is not a normal distribution; the tail is way fatter. And it just has a pretty straightforward implication, I think, which is that if earning to give is what you’re thinking of doing (and to be very clear, I do think that can be incredibly valuable, and I don’t think that we are unconstrained on funding), I think you should be thinking big. I think you should be thinking in expected value terms: what’s the thing you can do that will make the most [money]? And I want to flag there that if you think that the odds that you will achieve that target through the path are above 30 percent, you’re almost certainly not being ambitious enough! It is almost certainly going to be the case that there is a risk-reward trade-off here, that the things that make the most in expected value terms are things that will probably fail, and that if you’re playing this correctly, it’s very likely you should be pursuing a path where you think that the median amount that you end up being able to donate is zero or very close to it. It’s sort of brutal and weird, but that’s how the math works. Not always, but I think more often than not, this is super top-heavy. You should be looking for things that have extremely high upside, and be willing to accept that they might fail, willing to accept that they will probably fail, and to acknowledge that we’re trying to maximize our collective total impact and expected value on the world, and there’s no special virtue associated with having at least some impact; this stuff is linear. Expected values, I think, are pretty brutal, but they are what they are! If your vision for what you’re gonna do seems very likely to work, you should think about how to make that vision more ambitious (obviously while also maximizing for how much it will work given that), but probably you’re not being ambitious enough if it seems like it’ll probably work. Although it should seem like it could plausibly work, or otherwise it’s probably a mistake.
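To make the “median donation is zero” point concrete, here is a minimal sketch (the numbers are hypothetical, not anything SBF stated) of how a bet can be great in expected value terms while its typical outcome is nothing:

```python
import statistics

# Hypothetical venture: pays $1B with probability 10%, and $0 otherwise.
p_success = 0.10
payoff = 1_000_000_000

expected_value = p_success * payoff   # $100M in expectation
outcomes = [payoff] * 10 + [0] * 90   # 100 equally likely draws
median = statistics.median(outcomes)  # $0: the typical outcome

print(f"expected value: ${expected_value:,.0f}")  # expected value: $100,000,000
print(f"median outcome: ${median:,.0f}")          # median outcome: $0
```

A portfolio of such bets donates nothing most of the time; nearly all of the expected impact comes from the rare huge win, which is the sense in which the math is “brutal.”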
SBF’s articulation in a podcast with 80,000 Hours
In a podcast with 80,000
Hours
(full transcript available on the page), SBF makes the same points about expected value, but goes into a little more detail. Here are the first three paragraphs of 80,000 Hours’ summary at the top of the page (emphases mine):
If you were offered a 100% chance of $1 million to keep yourself, or
a 10% chance of $15 million — it makes total sense to play it
safe. You’d be devastated if you lost, and barely happier if you
won.
But if you were offered a 100% chance of donating $1 billion, or a
10% chance of donating $15 billion, you should just go with whatever
has the highest expected value — that is, probability multiplied by
the goodness of the outcome — and so swing for the fences.
This is the totally rational but rarely seen high-risk approach to
philanthropy championed by today’s guest, Sam Bankman-Fried. Sam
founded the cryptocurrency trading platform FTX, which has grown his
wealth from around $1 million to $20,000 million.
And more in the full transcript:
Rob Wiblin: Yeah. Let’s back up a bit, and help to set the scene for
listeners. What motivated you to take such a high-risk, high-return
approach to doing good as starting your own crypto trading firm? And
then also just saying, “We don’t like the exchanges we’re operating
on. I’m going to start my own crypto exchange and try to compete
there.”
Sam Bankman-Fried: This probably won’t be super shocking to you, but
when you think about things from — taking a step back —
Rob Wiblin: Expected value?
Sam Bankman-Fried: If your goal is to have impact on the world — and
in particular if your goal is to maximize the amount of impact that
you have on the world — that has pretty strong implications for what
you end up doing. Among other things, if you really are trying to
maximize your impact, then at what point do you start hitting
decreasing marginal returns? Well, in terms of doing good, there’s
no such thing: more good is more good. It’s not like you did some
good, so good doesn’t matter anymore. But how about money? Are you
able to donate so much that money doesn’t matter anymore? And the
answer is, I don’t exactly know. But you’re thinking about the scale
of the world there, right? At what point are you out of ways for the
world to spend money to change?
Sam Bankman-Fried: There’s eight billion people. Government budgets
run in the tens of trillions per year. It’s a really massive
scale. You take one disease, and that’s a billion a year to help
mitigate the effects of one tropical disease. So it’s unclear
exactly what the answer is, but it’s at least billions per year
probably, so at least 100 billion overall before you risk running
out of good things to do with money. I think that’s actually a
really powerful fact. That means that you should be pretty
aggressive with what you’re doing, and really trying to hit home
runs rather than just have some impact — because the upside is just
absolutely enormous.
Rob Wiblin: Yeah. Our instincts about how much risk to take on are
trained on the fact that in day-to-day life, the upside for us as
individuals is super limited. Even if you become a millionaire,
there’s just only so much incrementally better that your life is
going to be — and getting wiped out is very bad by contrast.
Rob Wiblin: But when it comes to doing good, you don’t hit declining
returns like that at all. Or not really on the scale of the amount
of money that any one person can make. So you kind of want to just
be risk neutral. As an individual, to make a bet where it’s like,
“I’m going to gamble my $10 billion and either get $20 billion or
$0, with equal probability” would be madness. But from an altruistic
point of view, it’s not so crazy. Maybe that’s an even bet, but you
should be much more open to making radical gambles like that.
Sam Bankman-Fried: Completely agree. I think that’s just a big piece
of it. Your strategy is very different if you’re optimizing for
making at least a million dollars, versus if you’re optimizing for
just the linear amount that you make. One piece of that is that
Alameda was a successful trading firm. Why bother with FTX? And the
answer is, there was a big opportunity there that I wanted to go
after and see what we could do there. It’s not like Alameda was
doing well and so what’s the point, because it’s already doing well?
No. There’s well, and then there’s better than well — there’s no
reason to stop at just doing well.
The expected value argument and its connection with effective altruism
It’s folk wisdom that personal (selfish) utility for individuals tends
to be less than linear in the money they have, an idea that is also
widely known as the diminishing marginal utility of
money. One
common (though probably inaccurate) approximation is that utility to
individuals is approximately logarithmic in
money. This
is the motivation for the Kelly
criterion, a widely
referenced criterion for how to diversify one’s portfolio in order to
maximize the expected value of the logarithm of wealth. These general
ideas are well-known in economics and among a lot of intellectuals
including many in the effective altruist movement.
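To spell out the contrast, here is a minimal sketch (using the round numbers from the 80,000 Hours summary quoted earlier) of how logarithmic and linear utility rank the same gamble differently:

```python
import math

def expected_utility(outcomes, utility):
    """Expected utility over a list of (probability, wealth) pairs."""
    return sum(p * utility(w) for p, w in outcomes)

log_u = math.log       # diminishing marginal utility (a selfish model)
lin_u = lambda w: w    # linear utility (the altruistic framing)

sure_thing = [(1.0, 1_000_000)]         # keep $1M for sure
gamble = [(0.1, 15_000_000), (0.9, 1)]  # 10% shot at $15M ($1 on failure, to keep log finite)

print(expected_utility(sure_thing, log_u), expected_utility(gamble, log_u))
# ~13.82 vs ~1.65: log utility prefers the sure million
print(expected_utility(sure_thing, lin_u), expected_utility(gamble, lin_u))
# 1000000.0 vs 1500000.9: linear utility prefers the gamble
```

The flip between the two comparisons is exactly the switch that the “altruistic twist” below relies on: the same gamble that a selfish log-utility agent should refuse is one that a linear-utility altruist should take.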
The “altruistic” twist here is that for individuals interested in
altruistic impact, utility is much closer to being linear in money
than logarithmic. Or, we don’t quite see diminishing marginal utility
of money for altruistic purposes, at least at the amounts of money
that most people can make. That’s because the problems of the world
are huge and can absorb huge amounts of money (this is true for most
big problems, ranging from climate change to AI safety to global
health and development to animal welfare). So, basically, doubling the wealth that you intend to allocate to charity should approximately double your impact.
The basic idea is covered in a post by Paul
Christiano
(also cited by 80,000
Hours)
but he’s only looking at financial investments. In contrast, SBF
preaches and practices defining one’s whole life / earning-to-give
trajectory around risky high-expected-value bets. See also the risk
aversion topic on the EA
Forum.
For more on SBF’s articulation of the math and the thinking behind this, see his tweet thread where he compares the Kelly criterion (maximizing expected log wealth) with his own approach, which is based on closer-to-linear returns.
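As a minimal sketch of the contrast in that thread (the specific bet is hypothetical): the Kelly criterion stakes only a fraction of the bankroll on a favorable bet, while a purely linear-utility bettor would stake everything on any positive-EV bet.

```python
def kelly_fraction(p, b):
    """Kelly stake for a bet paying b-to-1 with win probability p:
    the fraction of bankroll that maximizes expected log wealth."""
    return p - (1 - p) / b

p, b = 0.5, 2.0                  # hypothetical: 50% chance to win at 2-to-1 odds
print(kelly_fraction(p, b))      # 0.25: Kelly risks a quarter of the bankroll

ev_per_dollar = p * b - (1 - p)  # +$0.50 of expected profit per dollar staked
print(ev_per_dollar)             # linear utility says to stake everything
```

Staking everything on each such bet eventually goes broke with probability approaching 1, even though every individual bet is positive in expected value; that tension is the heart of the disagreement between Kelly-style and linear-utility betting.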
Endorsements of “thinking big” and more-than-normal risk-taking in effective altruism
Some of the enthusiastic agreement and encouragement of SBF’s views
can be seen in the 80,000 Hours podcast, where the interviewer, Robert
Wiblin, agrees with and even repeats and summarizes several of SBF’s
expected value claims. For instance, quoting from the preceding
excerpt of the 80,000 Hours podcast:
Rob Wiblin: But when it comes to doing good, you don’t hit declining
returns like that at all. Or not really on the scale of the amount
of money that any one person can make. So you kind of want to just
be risk neutral. As an individual, to make a bet where it’s like,
“I’m going to gamble my $10 billion and either get $20 billion or
$0, with equal probability” would be madness. But from an altruistic
point of view, it’s not so crazy. Maybe that’s an even bet, but you
should be much more open to making radical gambles like that.
The 80,000 Hours post Be more ambitious makes fairly similar arguments about the importance of being more ambitious and the value of focusing on upside, and about the way these become more important if you want to do good than if you are just interested in your personal well-being. SBF is also cited as a case study! There are cautionary notes later in the post about limiting downsides, but the final note still encourages more rather than less ambition:
We advise people who are overconfident, as well as people who are
underconfident. But if your aim is to have an impact,
underconfidence seems like the bigger danger. It’s better to aim a
little too high than too low.
But ambitious people do not need to be irrational. You don’t need to
convince yourself that success is guaranteed. To be worth betting
on, you just need to believe that:
Success is possible
Your downsides are limited
The expected value of pursuing the path is high
If you’ve found a path that might be amazing, make a backup plan and
give it a go. It may not work out, but it might be the best thing
you ever decide to do.
Moreover, even the discussion of backup plans only talks of personal
backup plans, rather than backup plans to mitigate the potential
impact on others one does business with (for instance, customers,
employees, investors) or on charities and foundations that might start
depending on one’s donation plans; emphases in the below excerpt are
mine:
Even if you can’t easily estimate how likely risks are to
materialise, you can often do a lot to limit them, freeing yourself
up to focus on upsides.
Over time, you can aim to set up your life to make yourself more
able to take risks. Some of the most important steps you can take
include:
Building up your financial security. If you’re at constant risk of failing to make your rent, that’s a serious downside you can’t discount.
Looking after your physical and mental health and important relationships, so that your lifestyle is sustainable.
Building valuable career capital that gives you backup options, e.g. through building skills or finding good mentors.
When comparing different career paths, here are some tips:
Consider ‘downside scenarios’ for each of the paths you’re considering. What might happen in the worst 10% of scenarios?
Look for risks that are really serious. It’s easy to have a vague sense that you might ‘fail’ by embarking on an ambitious path, but what would failure actually be like? The risks to be most concerned about are those that could prevent you from trying again, or that could make your life a lot worse. You might find that when you think about what would actually happen if you failed, your life would still be fine. For example, if you apply for a grant for an ambitious project and don’t get it, you will have just lost a bit of time.
If you identify a serious risk of pursuing some option, see if
you can modify the option to reduce that risk. Many entrepreneurs
like Bill Gates are famous for dropping out of college, which makes
them look like risk-takers. But besides the security provided by his
upper middle-class background, Gates also made sure he had the
option to return to Harvard if his startup failed. By modifying the
option, starting Microsoft didn’t involve much risk at all. Often the most useful step you can take here is to have a good backup plan, and this is part of our planning process.
If you can’t modify the path to reduce the risk to an acceptable
level, eliminate that option and try something else.
Check with your gut. If you feel uneasy about embarking on a
path even after taking the steps above, there may be a risk you
haven’t realised yet. Negative emotions can be a sign to keep
investigating to figure out what’s behind them.
Will MacAskill, a key figure in effective altruism, was an early influence on SBF and pushed him in the earning-to-give direction. This is confirmed by SBF in both the 80,000 Hours podcast and the Stanford EA interview; it’s also described in
this New Yorker
article:
He had recently become vegan and was in the market for a righteous
path. MacAskill pitched him on earning to give.
MacAskill has also made the general argument that if your goals are altruistic, you should be much more ruthless in your pursuit of scale and take on more risk. The video I could most readily find was a deep dive with Ali Abdaal; the segment below is about altruistic impact through “direct work” rather than donations, but elsewhere in the video he does suggest a kind of exchange rate between the two, depending on one’s direct impact in comparison to the value of donations.
Will MacAskill (2:49:00): This also is a difference between if you’re trying to optimize for impact versus income. So, yeah, you might think: okay, I’ve got a couple of million in the bank now, I’m just going to be happy with that, I can just seek that out; additional money’s not worth that much more. Because you’ve got, is it three million YouTube subscribers?
Ali Abdaal: About that.
Will MacAskill: Okay, yeah. So you’re like: if I had six million I’d have a bit more money, but it’s not going to be a huge difference in my well-being, [so] I’m not particularly motivated to grow the numbers.
Ali Abdaal: Except I don’t have an impact goal.
Will MacAskill: Exactly! But now if you’re [optimizing for] impact: yeah, how much better is six million subscribers than three million?
Ali Abdaal: Yeah, way better.
Will MacAskill: Probably about twice as good; maybe not exactly, but to a first approximation, yeah. And so having altruistic impact in mind gives much stronger arguments for scaling.
Linearity on the low end: the lower bound of zero impact and non-consideration of negative impact from losing money
In the above discussion, my focus when talking about the
close-to-linear altruistic returns of money was on the upside/positive
side: you can scale up giving since the world’s problems are so
big. However, there’s another direction where this is important as
well: the direction toward zero (and beyond?).
One sometimes-implicit and sometimes-explicit idea in SBF’s discourse is that utility is close enough to linear in money and, as an important corollary, that there’s a lower bound at zero. The worst-case outcome here is making nothing, in which case you make no donations and therefore have effectively zero impact. So risk-taking has very high upside but only a limited downside—in the worst case, you’re wiped out, you declare bankruptcy, maybe you even die penniless.
From a selfish perspective, this is a pretty bad outcome (and
indeed, a logarithmic model of utility would give an infinite
negative utility to having no money). So from a selfish perspective,
there’s a big downside to being wiped out, and this is part of what
motivates risk-aversion.
From an altruistic perspective, however, getting to zero money is a
bad outcome but only to the extent that it represents the absence of
good outcomes. So it’s an outcome that you try to avoid, but not all
that desperately.
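A small sketch of the asymmetry described here, using the same two utility models as before (nothing in it is specific to SBF):

```python
import math

# Log utility punishes approaching zero wealth without bound, while
# linear utility treats a wipeout as merely "no donations made".
for wealth in (1_000_000, 1_000, 1, 0.001):
    print(f"wealth {wealth:>12,}: log utility {math.log(wealth):8.2f}, linear utility {wealth:,}")
```

The log column diverges toward minus infinity as wealth approaches zero, while the linear column simply bottoms out near zero.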
Moreover, this simple framework was developed mostly in connection
with people managing their own savings, rather than running complex
companies that manage other people’s investments and assets. So it
doesn’t even begin to grapple with the idea of going negative and
the utility implications of that. Of course, personal wealth can be
negative when one puts money on a credit card or takes a student loan,
but these are relatively small amounts and people generally start
thinking of altruism when they’re no longer in significant debt. My
guess is that SBF (consciously or subconsciously) rounds up “going
negative” to zero because ultimately it just means he’s able to donate
zero money.
Startup risk and when caution kicks in
A lot of what SBF said about risk-taking makes a lot of sense in the
context of somebody trying a startup idea (having earmarked some sort
of safety net that they won’t touch, and then using other funds from
themselves or outside investors that are explicitly understood to be
for the purpose). What also tends to happen is that once the startup
starts succeeding and real people start depending on it for real
stuff, it starts moving in a more conservative direction—reducing
the riskiness of its actions. There are probably four factors that
push in that direction:
The founders/owners now have more to lose from a purely selfish
perspective; this essentially comes from the “diminishing marginal
utility of money” idea, although it may or may not be seen in purely
financial terms. For instance, after a company grows from
near-nothing to being worth a few million, and the founders have
shares worth a decent chunk of that, they are at risk of losing
that money if they tank the startup.
The founders/owners have a desire to succeed and to not mess
things up (e.g., because they now feel more passionately about the
thing they’re building, they feel attached to its success, or to
avoid embarrassment). Messing up an already-big company feels
more embarrassing, and can be more guilt-inducing as well to the
extent that one sees the pain caused to others.
The founders/owners have needed to involve other stakeholders,
who also can lose out if things go bad. This includes investors,
employees, customers, partners, etc. Some of them may have
incentives to take more risk (particularly investors who want to
get big payouts from a diversified portfolio) but others benefit
from greater stability and less risk. Moreover, since different
stakeholders see the riskiness of various actions differently, and
some level of agreement is needed, the overall direction will be
toward less risk.
Third parties may put more pressure of various sorts; this
includes regulators, hackers, a hostile press, or various other
actors. In the face of this pressure, more caution and care may be
needed.
A great post by Dan Luu talks about how
Google and Microsoft ultimately got serious about security after
embarrassing incidents. He writes:
Google didn’t go from adding z to the end of names to having the
world’s best security because someone gave a rousing speech or wrote
a convincing essay. They did it after getting embarrassed a few
times, which gave people who wanted to do things “right” the
leverage to fix fundamental process issues. It’s the same story at
almost every company I know of that has good practices. Microsoft
was a joke in the security world for years, until multiple
disastrously bad exploits forced them to get serious about
security. This makes it sound simple, but if you talk to people who
were there at the time, the change was brutal. Despite a mandate
from the top, there was vicious political pushback from people whose
position was that the company got to where it was in 2003 without
wasting time on practices like security. Why change what’s worked?
So what was special about the SBF situation where they were able to
get to such a huge scale without these sorts of things kicking in?
Let’s go through the four points:
The founders/owners now have more to lose from a purely selfish
perspective: I think that although this was true, it probably
wasn’t as true in SBF’s perception because his mental model was
that of altruistic impact and linear utility. So making what he
considered a positive-EV bet when the company was worth $5 billion
may not have felt that different from making a positive-EV bet when
the company was worth $5 million. So at least the absence of this
particular mechanism was tied to the altruistic endgame of the
money.
The founders/owners have a desire to succeed and to not mess
things up (e.g., because they now feel more passionately about the
thing they’re building, they feel attached to its success, or to
avoid embarrassment): My guess is that while SBF obviously had a
desire to succeed and not mess things up, he didn’t actually feel
that passionate about the value of the work he was doing and saw it
as a gamble to make money; as long as it was EV-positive, he was
willing to take big risks even after amassing a lot of wealth. I
believe that this stemmed very directly from his EA-influenced
thinking about risk and value.
The founders/owners have needed to involve other stakeholders,
who also can lose out if things go bad: The failure of this
mechanism doesn’t seem directly tied to SBF’s EA connection, but
may be more of a feature of the business: they were able to get to
a fairly large scale without having a lot of different
stakeholders, and were also able to preserve a fair amount of
secrecy despite the openness of the blockchain.
Third parties may put more pressure of various sorts: This
didn’t happen … until it did, and then everything collapsed. The
failure here in the wider world seems mostly unrelated to EA and
may have more to do with the novelty of the space and therefore the
lack of relevant critical expertise; however, the failure to notice
this within EA was likely due to EA’s positive impression of SBF
and his expected value-maximizing ideals.
Further thoughts (without extensive justification)
Tentative thoughts on where I think SBF went wrong in his thinking
I had listened to SBF’s fireside chat shortly after it came out. His thoughts on taking risk had been interesting to me, insofar as they differed from my own philosophy on risk, but I didn’t consider him wrong per se. If anything, listening to him made me update my priors slightly toward taking more risk. I couldn’t find anything very categorically wrong in what he said.
Upon further reflection, I actually think that what he said was
directionally incorrect in several ways, and what ended up happening
to him is directional evidence that I should be updating away from
the direction of his advice. In particular, I suspect that these are
some areas where he’s wrong:
Even viewed altruistically, going to zero or negative money has
pretty sharp negative utility, particularly the way it ended up
happening to FTX (with a loss of customer deposits and attendant
suffering of many people). But the drop would have plausibly been
bad even in tamer scenarios (for instance, if Alameda had openly
gone bankrupt and FTX had died out with the collapse of the FTT token).
I suspect that altruistic impact is less than linear in money, and that there are a lot of other details about the way things play out that affect altruistic impact. For instance, I suspect that FTX could have had a significant positive impact if it had quit while ahead, with SBF making enough to earmark a billion dollars for charity. That would
have been enough money to champion the values and start a pattern of
altruism that ultimately could have been continued by other donors
(ironically, Nick Beckstead makes the point that individual funders
may have relatively few good grants to
make and that’s why Future Fund
experimented with delegating grantmaking to a larger number of
regrantors; I think a similar point applies at the foundation level
as well).
This would obviously have been better than what ultimately
transpired, but I suspect it would have been better even in properly
done expected value calculations. This is a tricky point to justify
and I won’t attempt to do it here.
SBF fails to account for optimism bias: if you think you’ve got a 30% chance of success, you probably have much less. I suspect he was ultimately a victim of this optimism bias (see the numeric sketch below).
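Here is the numeric sketch referred to above. All the numbers are hypothetical, including the size of the optimism correction; the point is only that shrinking a self-reported success probability can flip an expected-value comparison:

```python
def debiased_ev(p_claimed, payoff, shrink=0.5):
    """Expected value after shrinking a self-reported success
    probability by a (hypothetical) optimism-bias factor."""
    return p_claimed * shrink * payoff

safe_exit = 1_000_000_000                # e.g. quit with $1B earmarked for charity
p_claimed, payoff = 0.30, 5_000_000_000  # a claimed "30% shot at $5B"

print(p_claimed * payoff)                # naive EV: $1.5B, beats the safe exit
print(debiased_ev(p_claimed, payoff))    # debiased EV: $750M, loses to it
```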
I could be wrong about several of these points; I’m also not
retreading the familiar ground of how what ended up happening (and was
likely a direct result of SBF’s actions) was terrible and unethical
etc. I’m making the point that even the original risk-taking was
probably wrong from a perspective of maximizing altruistic impact.
On SBF’s frugal public image, one EA Forum commenter writes:
Yep, I was and continue to be confused about this. I did tell a
bunch of people that I think promoting SBF publicly was bad, and
e.g. sent a number of messages when some news article that people
were promoting (or maybe 80k interview?) was saying that “Sam
sleeps on a bean bag” and “Sam drives a Corolla” when I was quite
confident that they knew that Sam was living in one of the most
expensive and lavish properties in the Bahamas and was definitely
not living a very frugal lifestyle.
I think this is important as part of the general point that SBF was
successful at cultivating a certain kind of image in the media that
didn’t reflect his reality. However, I don’t think that the fact (that
his actual lifestyle was an order of magnitude more luxurious than his
public persona might indicate) undercuts the general claim that his
main goal was to make a bunch of money to donate. Even this moderately
more luxurious lifestyle was still well within his means, and if
maintaining that lifestyle were his goal, exiting after making a few
hundred million dollars would probably have been a selfishly smart
thing to do.
Noble lies?
A comment by Oliver Habryka covers more relevant attributes of
SBF that, if true, could be part of the reason for FTX’s ultimate
collapse:
I definitely would have put Sam into the “un-lawful oathbreaker”
category and have warned many people I have been working with that
Sam has a reputation for dishonesty and that we should limit our
engagement with him (and more broadly I have been complaining about
an erosion of honesty norms among EA leadership to many of the
current leadership, in which I often brought up Sam as one of the
sources of my concern directly).
I definitely had many conversations with people in “EA leadership”
(which is not an amazingly well-defined category) where people told
me that I should not trust him. To be clear, nobody I talked to
expected wide-scale fraud, and I don’t think this included literally
everyone, but almost everyone I talked to told me that I should
assume that Sam lies substantially more than population-level
baseline (while also being substantially more strategic about his
lying than almost everyone else).
I do want to add to this that in addition to Sam having a reputation
for dishonesty, he also had a reputation for being vindictive, and
almost everyone who told me about their concerns about Sam did so
while seeming quite visibly afraid of retribution from Sam if they
were to be identified as the source of the reputation, and I was
never given details without also being asked for confidentiality.
These claims about dishonesty, if true, could help explain why SBF
failed to get the right sort of feedback and checks and balances that
could have prevented him from making risky moves.
In a documentary on Theranos founder Elizabeth
Holmes,
psychology professor Dan Ariely said that his experiments found that people are much more likely to lie a little bit when the lying sends money to charity than when they pocket the money from their lies. He also claimed that lie detector tests have
more trouble catching lies spoken by people who are lying to send more
money to charity. He claimed that people feel less conflicted about
what they consider noble lies than what they consider self-serving
lies, and therefore can lie more convincingly when they think it’s for
the greater good. I have not checked the research myself, but you can
see a summary of the argument in this Business Insider
article. I
have also heard that Ariely himself got into trouble over allegedly faked data in another experiment,
but the general point he made seems plausible even if you don’t put
much weight on his experimental evidence.
To the extent that SBF engaged in dishonesty without having any of the
“tells” that dishonest people have, his altruistic endgame might be
part of the reason for it. However, I don’t see this as being
heavily connected with effective altruism as a philosophy,
community, or social movement. That’s because in general, unlike with
risk-taking, the EA philosophy and community have not encouraged lying. If anything, they have, at least in explicit statements, put a greater premium on integrity than most people do.
A counterfactual thought experiment
One standard way of evaluating whether A causes B is to think about a
world where A hadn’t happened and ask whether B was likely to have
happened in that world.
Here’s a thought experiment—what would have happened if the
existing EA philosophy and community hadn’t had this strong bent
toward risk-taking and thinking big, and/or hadn’t been pushing
earning to give? It’s definitely pretty unclear, but I think:
In the absence of an earning-to-give push, there’s a pretty good
chance that SBF would have gone on to do some form of direct work
initially (e.g., working on animal welfare as was his original
intent, or getting into some work in the longtermist space).
In the absence of a push to be more ambitious, there’s a pretty
good chance that SBF would have felt content working at Jane Street
Capital and donating a chunk of money to charity, and would only
have left it to pursue direct work. You can take a look at his old blog, which reads just like that of an ordinary earning-to-giver.
If he hadn’t internalized the utility-is-linear-in-money argument
that is common in EA circles (and that pushed him to continue to
take similar risks despite amassing large sums of money), it’s
likely that SBF would have exited after Alameda Research’s initial
trading success and then used that to make donations.
SBF, extreme risk-taking, expected value, and effective altruism
NOTE: I have some indirect associations with SBF and his companies, though probably less so than many of the others who’ve been posting and commenting on the forum. I don’t expect anything I write here to meaningfully affect how things play out in the future for me, so I don’t think this creates a conflict of interest, but feel free to discount what I say.
NOTE 2: I’m publishing this post without having spent the level of effort polishing and refining it that I normally try to spend. This is due to the time-sensitive nature of the subject matter and because I expect to get more value from being corrected in the comments on the post than from refining the post myself. If errors are pointed out, I will try to correct them, but may not always be able to make timely corrections, so if you’re reading the post, please also check the comments to check for flaws identified by comments.
NOTE 3: Byrne Hobart’s post Money, Credit, Trust, and FTX makes fairly overlapping points albeit with different emphases and a lot more elaboration (and less focus on the effective altruism angle).
The collapse of Sam Bankman-Fried (SBF) and his companies FTX and Alameda Research is the topic du jour on the Effective Altruism Forum, and there have been several posts on the Forum discussing what happened and what we can learn from it. The post FTX FAQ provides a good summary of what we know as of the time I’m writing this post. I’m also funding work on a timeline of FTX collapse (still a work in progress, but with enough coverage already to be useful if you are starting with very little knowledge).
Based on information so far, fraud and deception on the part of SBF (and/or others in FTX and/or Alameda Research) likely happened and were likely key to the way things played out and the extent of damage caused. The trigger seems to be the big loan that FTX provided to Alameda Research to bail it out, using customer funds for the purpose. If FTX hadn’t bailed out Alameda, it’s quite likely that the spectacular death of FTX we saw (with depositors losing all their money as well) wouldn’t have happened. But it’s also plausible that without the loan, the situation with Alameda Research was dire enough that Alameda Research, and then FTX, would have died due to the lack of funds. Hopefully that would have been a more graceful death with less pain to depositors. That is a very important difference. Nonetheless, I suspect that by the time of the bailout, we were already at a kind of endgame.
In this post, I try to step back a bit from the endgame, and even get away from the specifics of FTX and Alameda Research (that I know very little about) and in fact even get away from the specifics of SBF’s business practices (where again I know very little). Rather, I talk about SBF’s overall philosophy around risk and expected value, as he has articulated himself, and has been approvingly amplified by several EA websites and groups. I think the philosophy was key to the overall way things played out. And I also discuss the relationship between the philosophy and the ideas of effective altruism, both in the abstract and as specifically championed by many leaders in effective altruism (including the team at 80,000 Hours). My goal is to encourage people to reassess the philosophy and make appropriate updates.
I make two claims:
Claim 1: SBF engages in extreme risk-taking that is a crude approximation to the idea of expected value maximization as perceived by him.
Claim 2: At least part of the motivation for SBF’s risk-taking comes from ideas in effective altruism, and in particular specific points made by EA leaders including people affiliated with 80,000 Hours. While personality probably accounts for a lot of SBF’s decisions, the role of EA ideas as a catalyst cannot be dismissed based on the evidence.
Here are a few things I am not claiming (some of these are discussed in a little more detail toward the end of the post, though I don’t elaborate extensively on them):
I’m not claiming that the EA philosophy and community create more incentives for deception and dishonesty than most groups do; I actually think the opposite is true. Rather, I’m focusing on the encouragement that the EA philosophy and community provide for risk-taking, and the expected value framework that naively encourages this.
I’m not claiming that the basic arguments about risk-taking, expected value, and effective altruism are completely wrong or that events with SBF have fully invalidated them. I think the basic logic of many of these is still fairly sound, but also that the events with SBF should lead us to update in the other direction and to add more nuance to our thinking about these topics. I try to articulate this a little bit in this post, but nowhere near enough, and further articulation would require a separate post (perhaps by another person).
I’m not claiming that the downside of making pro-risk-taking arguments should have been fully obvious in hindsight—the collapse of SBF / FTX is new information and whatever model we have of the world after it should be at least somewhat different than the model we had of the world prior to it. I do think that at least some aspects of these points should have been given more attention in the past, even with the more limited information available then.
And to be clear, these are some things I’m not really covering in this post:
I’m not covering the fraught topic of whether EA leaders or people should have been able to predict what specifically happened with SBF and FTX, and whether they had prior indications of his character. That topic has been discussed in several other threads.
Claim 1 justification: SBF engages in extreme risk-taking
I won’t really provide much direct justification for Claim 1; I’ll note in passing that a lot of commentary both on the EA Forum (such as Kerry Vaughan’s summary) and in external press coverage (see for instance Axios). The justification for Claim 2 provided below is more detailed and also implicitly provides further justification for Claim 1.
See also Byrne Hobart’s post Money, Credit, Trust, and FTX, that goes into some of the math involving expected value and the historical context of FTX and Alameda Research.
Claim 2 justification: At least part of the motivation for SBF’s extreme risk-taking comes from effective altruist ideas
SBF’s articulation in a fireside chat with Stanford EA
In a fireside chat with Stanford EA, SBF gives advice to students based on his own experience. At first listening, everything he says sounds quite reasonable (in general, SBF’s public persona feels very reasonable—something that falsely causes people to feel reassured!).
Here is the transcript from YouTube, lightly edited by me for sentencification and removal of “um”s and uh”s; you can watch the video or read the original transcript on YouTube by clicking “Show transcript” in the options under ”...” below the video. I have highlighted the portions most relevant to my points, but have not elided any other stuff within those segments of the video.
And earlier in the talk, SBF says:
SBF’s articulation in a podcast with 80,000 Hours
In a podcast with 80,000 Hours (full transcript available on page), SBF makes the same points about expected value, but goes into a little more detail. Here are the first three paras from 80,000 Hours’ summary on top (emphases mine):
And more in the full transcript:
The expected value argument and its connection with effective altruism
It’s folk wisdom that personal (selfish) utility for individuals tends to be less than linear in the money they have, an idea that is also widely known as the diminishing marginal utility of money. One common (though probably inaccurate) approximation is that utility to individuals is approximately logarithmic in money. This is the motivation for the Kelly criterion, a widely referenced criterion for how to diversify one’s portfolio in order to maximize the expected value of the logarithm of wealth. These general ideas are well-known in economics and among a lot of intellectuals including many in the effective altruist movement.
The “altruistic” twist here is that for individuals interested in altruistic impact, utility is much closer to being linear in money than logarithmic. Or, we don’t quite see diminishing marginal utility of money for altruistic purposes, at least at the amounts of money that most people can make. That’s because the problems of the world are huge and can absorb huge amounts of money (this is true for most big problems, ranging from climate change to AI safety to global health and development to animal welfare). So basically doubling your wealth that you intend to allocate to charity should approximately double your impact.
The basic idea is covered in a post by Paul Christiano (also cited by 80,000 Hours) but he’s only looking at financial investments. In contrast, SBF preaches and practices defining one’s whole life / earning-to-give trajectory around risky high-expected-value bets. See also the risk aversion topic on the EA Forum.
For more on SBF’s articulation of the math and the thinking behind this, see his tweet thread where he compares the Kelly criterion (maximizing expected log wealth) with his own approach that is based on closer to linear returns.
Endorsements of “thinking big” and more-than-normal risk-taking in effective altruism
Some of the enthusiastic agreement and encouragement of SBF’s views can be seen in the 80,000 Hours podcast, where the interviewer, Robert Wiblin, agrees with and even repeats and summarizes several of SBF’s expected value claims. For instance, quoting from the preceding excerpt of the 80,000 Hours podcast:
The 80,000 Hours post Be more ambitious makes fairly similar arguments about the importance of being more ambitious and the value of focusing on upside, and the way these become more important if you want to do good rather than if you are just interested in your personal well-being. SBF is also cited as a case study! There are also cautionary notes later in the post about limiting downsides, but the final note is still around encouraging more rather than less ambition:
Moreover, even the discussion of backup plans only talks of personal backup plans, rather than backup plans to mitigate the potential impact on others one does business with (for instance, customers, employees, investors) or on charities and foundations that might start depending on one’s donation plans; emphases in the below excerpt are mine:
Will MacAskill, a key figure in effective altruism, was an early influence on SBF and pushed him in the earning-to-give direction. This is confirmed by SBF in both the 80,000 Hours podcast and the Stanford EA interview; it’s also described in this New Yorker article:
MacAskill has also made the general argument that if your goals are altruistic, you should be much more ruthless in your pursuit of scale and take on more risk. The video I could most readily find was a deep dive with Ali Abdaal and was talking about altruistic impact through “direct work” rather than donations, but elsewhere in the video he does suggest a kind of exchange rate between the two depending on one’s direct impact in comparison to the value of donations.
Linearity on the low end: the lower bound of zero impact and non-consideration of negative impact by losing money
In the above discussion, my focus when talking about the close-to-linear altruistic returns of money was on the upside/positive side: you can scale up giving since the world’s problems are so big. However, there’s another direction where this is important as well: the direction toward zero (and beyond?).
One sometimes-implicit and sometimes-explicit idea in SBF’s discourse is the idea that utility is close enough to linear in money, and as an important corollary, there’s a lower bound at zero. The worst-case outcome here is making nothing, in which case you make no donations and therefore have effectively zero impact. So risk-taking has very high upside but only a limited downside—in the worst case, you’re wiped out, you declare bankruptcy, maybe you even die penniless.
From a selfish perspective, this is a pretty bad outcome (and indeed, a logarithmic model of utility would give an infinite negative utility to having no money). So from a selfish perspective, there’s a big downside to being wiped out, and this is part of what motivates risk-aversion.
From an altruistic perspective, however, getting to zero money is a bad outcome but only to the extent that it represents the absence of good outcomes. So it’s an outcome that you try to avoid, but not all that desperately.
Moreover, this simple framework was developed mostly in connection with people managing their own savings, rather than running complex companies that manage other people’s investments and assets. So it doesn’t even begin to grapple with the idea of going negative and the utility implications of that. Of course, personal wealth can be negative when one puts money on a credit card or takes a student loan, but these are relatively small amounts and people generally start thinking of altruism when they’re no longer in significant debt. My guess is that SBF (consciously or subconsciously) rounds up “going negative” to zero because ultimately it just means he’s able to donate zero money.
Startup risk and the kicking in of caution
A lot of what SBF said about risk-taking makes a lot of sense in the context of somebody trying a startup idea (having earmarked some sort of safety net that they won’t touch, and then using other funds from themselves or outside investors that are explicitly understood to be for the purpose). What also tends to happen is that once the startup starts succeeding and real people start depending on it for real stuff, it starts moving in a more conservative direction—reducing the riskiness of its actions. There are probably four factors that push in that direction:
The founders/owners now have more to lose from a purely selfish perspective; this essentially comes from the “diminishing marginal utility of money” idea albeit it may or may not be seen in purely financial terms. For instance, after a company grows from near-nothing to being worth a few million, and the founders have shares worth a decent chunk of that, they are at risk of losing that money if they tank the startup.
The founders/owners have a desire to succeed and to not mess things up (e.g., because they now feel more passionately about the thing they’re building, they feel attached to its success, or to avoid embarrassment). Messing up an already-big company feels more embarrassing, and can be more guilt-inducing as well to the extent that one sees the pain caused to others.
The founders/owners have needed to involve other stakeholders, who also can lose out if things go bad. This includes investors, employees, customers, partners, etc. Some of them may have incentives to take more risk (particularly investors who want to get big payouts from a diversified portfolio) but others benefit from greater stability and less risk. Moreover, since different stakeholders see the riskiness of various actions differently, and some level of agreement is needed, the overall direction will be toward less risk.
Third parties may apply pressure of various sorts; these include regulators, hackers, a hostile press, and other actors. In the face of this pressure, more caution and care may be needed.
A great post by Dan Luu describes how Google and Microsoft ultimately got serious about security only after embarrassing incidents.
So what was special about the SBF situation where they were able to get to such a huge scale without these sorts of things kicking in? Let’s go through the four points:
The founders/owners now have more to lose from a purely selfish perspective: I think that although this was true, it probably wasn’t as salient in SBF’s perception, because his mental model was one of altruistic impact and linear utility. Making what he considered a positive-EV bet when the company was worth $5 billion may not have felt that different from making a positive-EV bet when the company was worth $5 million. So the failure of this particular mechanism was tied, at least in part, to the altruistic endgame for the money.
The founders/owners have a desire to succeed and to not mess things up (e.g., because they now feel more passionately about the thing they’re building, they feel attached to its success, or to avoid embarrassment): My guess is that while SBF obviously had a desire to succeed and not mess things up, he didn’t actually feel that passionate about the value of the work he was doing and saw it as a gamble to make money; as long as it was EV-positive, he was willing to take big risks even after amassing a lot of wealth. I believe that this stemmed very directly from his EA-influenced thinking about risk and value.
The founders/owners have needed to involve other stakeholders, who also can lose out if things go bad: The failure of this mechanism doesn’t seem directly tied to SBF’s EA connection, but may be more of a feature of the business: they were able to get to a fairly large scale without having a lot of different stakeholders, and were also able to preserve a fair amount of secrecy despite the openness of the blockchain.
Third parties may apply pressure of various sorts: This didn’t happen … until it did, and then everything collapsed. The failure here in the wider world seems mostly unrelated to EA, and may have more to do with the novelty of the space and the consequent lack of relevant critical expertise; the failure to notice this within EA, however, was likely due to EA’s positive impression of SBF and his expected-value-maximizing ideals.
Further thoughts (without extensive justification)
Tentative thoughts on where SBF’s thinking went wrong
I listened to SBF’s fireside chat shortly after it came out. His thoughts on taking risk were interesting to me insofar as they differed from my own philosophy on risk, but I didn’t consider him wrong per se. If anything, listening to him made me update my priors slightly toward taking more risk. I couldn’t find anything categorically wrong in what he said.
Upon further reflection, I now think that what he said was directionally incorrect in several ways, and what ended up happening to him is evidence that I should update away from his advice. In particular, I suspect these are some areas where he’s wrong:
Even viewed altruistically, going to zero or negative money has pretty sharply negative utility, particularly in the way it ended up happening to FTX (with a loss of customer deposits and the attendant suffering of many people). But the drop to zero would plausibly have been bad even in tamer scenarios (for instance, if Alameda had openly gone bankrupt and FTX had died out with the collapse of the FTT token).
I suspect that altruistic impact is less than linear in money, and that many other details of how things play out also affect altruistic impact. For instance, I suspect that FTX could have had a significant positive impact if it had quit while ahead, with SBF making enough to earmark a billion dollars for charity. That would have been enough money to champion the values and start a pattern of altruism that could ultimately have been continued by other donors (ironically, Nick Beckstead makes the point that individual funders may have relatively few good grants to make, which is why the Future Fund experimented with delegating grantmaking to a larger number of regrantors; I think a similar point applies at the foundation level as well).
This would obviously have been better than what ultimately transpired, but I suspect it would have been better even in properly done expected value calculations. This is a tricky point to justify and I won’t attempt a full justification here, though the toy sketch after this list gestures at it.
SBF fails to account for optimism bias: if you think you’ve got a 30% chance of success, the true figure is probably much lower. I suspect he was ultimately a victim of this optimism bias.
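Here is a toy sketch (all numbers and functional forms are my own, purely illustrative, and not derived from anything FTX actually did) of how the second and third points above interact with a bet that looks positive-EV under the linear model:

```python
# Toy sketch: a "positive-EV" gamble re-evaluated with (a) impact that is
# sublinear in money and (b) a success probability discounted for optimism
# bias. All figures are hypothetical.
import math

pot = 1e9          # $1B already earmarked for charity
payout = 2 * pot   # the gamble doubles the pot on a win, zeroes it on a loss
p_claimed = 0.6    # inside-view (optimistic) probability of winning

# Optimism-bias correction: shrink the inside view toward a lower base rate.
base_rate, weight_on_inside = 0.3, 0.5
p_debiased = weight_on_inside * p_claimed + (1 - weight_on_inside) * base_rate  # 0.45

def decision(p, impact):
    """Bet if the expected impact of gambling beats simply keeping the pot."""
    return "bet" if p * impact(payout) > impact(pot) else "hold"

print(decision(p_claimed, lambda m: m))   # linear impact, claimed odds  -> bet
print(decision(p_debiased, lambda m: m))  # linear impact, debiased odds -> hold
print(decision(p_claimed, math.sqrt))     # sublinear impact (sqrt)      -> hold
```

In this example, either correction alone flips the decision from “bet” to “hold”, which is the shape of my claim that the original risk-taking was probably wrong even on its own expected-value terms.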
I could be wrong about several of these points. I’m also not retreading the familiar ground of how what ended up happening (and was likely a direct result of SBF’s actions) was terrible and unethical. My point is that even the original risk-taking was probably wrong from the perspective of maximizing altruistic impact.
Luxurious lifestyle?
Oliver Habryka’s comment, on how SBF’s actual lifestyle compared with his public persona, seems valuable.
I think this is important as part of the general point that SBF successfully cultivated a media image that didn’t reflect his reality. However, I don’t think the fact that his actual lifestyle was an order of magnitude more luxurious than his public persona might indicate undercuts the general claim that his main goal was to make a bunch of money to donate. Even this moderately more luxurious lifestyle was still well within his means, and if maintaining that lifestyle had been his goal, exiting after making a few hundred million dollars would probably have been the selfishly smart thing to do.
Noble lies?
A comment by Oliver Habryka covers further relevant attributes of SBF, centering on claims of dishonesty, that, if true, could be part of the reason for FTX’s ultimate collapse.
These claims about dishonesty, if true, could help explain why SBF failed to get the right sort of feedback and checks and balances that could have prevented him from making risky moves.
In a documentary on Theranos founder Elizabeth Holmes, psychology professor Dan Ariely said that his experiments found people are much more likely to lie a little when the proceeds of the lie go to charity than when they pocket the money themselves. He also claimed that lie detector tests have more trouble catching lies told by people who are lying to send money to charity. People, he argued, feel less conflicted about what they consider noble lies than about self-serving lies, and can therefore lie more convincingly when they think it’s for the greater good. I have not checked the research myself, but you can see a summary of the argument in this Business Insider article. I have also heard that Ariely himself got into trouble for allegedly fabricated data in another experiment, but the general point he made seems plausible even if you don’t put much weight on his experimental evidence.
To the extent that SBF engaged in dishonesty without exhibiting any of the “tells” that dishonest people have, his altruistic endgame might be part of the reason. However, I don’t see this as heavily connected with effective altruism as a philosophy, community, or social movement. That’s because, unlike with risk-taking, the EA philosophy and community have generally not encouraged lying. If anything, they have, at least in explicit statements, put a greater premium on integrity than most people do.
A counterfactual thought experiment
One standard way of evaluating whether A causes B is to think about a world where A hadn’t happened and ask whether B was likely to have happened in that world.
Here’s a thought experiment—what would have happened if the existing EA philosophy and community hadn’t had this strong bent toward risk-taking and thinking big, and/or hadn’t been pushing earning to give? It’s definitely pretty unclear, but I think:
In the absence of an earning-to-give push, there’s a pretty good chance that SBF would have gone on to do some form of direct work initially (e.g., working on animal welfare as was his original intent, or getting into some work in the longtermist space).
In the absence of a push to be more ambitious, there’s a pretty good chance that SBF would have felt content working at Jane Street Capital and donating a chunk of money to charity, and would only have left to pursue direct work. You can take a look at his old blog, which reads just like that of an ordinary earning-to-giver.
If he hadn’t internalized the utility-is-linear-in-money argument that is common in EA circles (and that pushed him to continue taking similar risks despite amassing large sums of money), it’s likely that SBF would have exited after Alameda Research’s initial trading success and used the proceeds to make donations.