A personal reflection on SBF


The following is a personal account of my (direct and indirect) interactions with Sam Bankman-Fried, which I wrote up in early/​mid-November when news came out that FTX had apparently stolen billions of dollars from its customers.

I’d previously intended to post a version of this publicly, on account of how people were worried about who knew what when, but in the writing of it I realized how many of my observations were second-hand and shared with me in confidence. This ultimately led to me shelving it (after completing enough of it to extract what lessons I could from the whole affair).

I’m posting this now (with various details blurred out) because early last week Rob Bensinger suggested that I do so. Rob argued that accounts such as this one might be useful to the larger community, because they help strip away a layer of mystery and ambiguity from the situation by plainly stating what particular EAs knew or believed, and when they knew or believed it.

This post is structured as a chronological account of the facts as I recall them, followed by my own accounting of salient things I think I did right and wrong, followed by general takeaways.

Some caveats:

  1. I don’t speak for any of the people who shared their thoughts or experiences with me. Some info was shared with me in confidence, and I asked those people for feedback and gave them the opportunity to veto this post, and their feedback made this post better, but their lack of a veto does not constitute approval of the content. My impression is that they think I have some of the emphasis and framings wrong (but it’s not worth the time/​attention it would take to correct).

  2. This post consists of some of my own processing of my mistakes. It’s not a reaction to the whole FTX affair. (My high-level reaction at the time was one of surprise, anger, sadness, and disappointment, with tone and content not terribly dissimilar from Rob Wiblin’s reactions, as I understood them.)[1]

  3. The genre of this essay is “me accounting for how I failed my art, while comparing myself to an implausibly high standard”. I’m against self-flagellation, and I don’t recommend beating yourself up for failing to meet an implausibly high standard.

    I endorse comparing yourself to a high standard, if doing so helps you notice where your thinking processes could be improved, and if doing so does not cause you psychological distress.

  4. My original draft of this post started with a list of relatively raw observations. But the most salient raw observations were shared in confidence, and much of the remainder felt like airing personal details unnecessarily, which feels like an undue violation of others’ privacy. As such, I’ve kept the recounting somewhat vague.

  5. I am not particularly recommending that others in the community who had qualms about Sam write up a similarly thorough account. I was pretty tangential to the whole affair, which is why I can fit something this thorough into only ~7k words, and why it doesn’t seem to me like a huge invasion of privacy to post something like this (especially given what I’m keeping vague).

    Hopefully this helps people get a better sense of the degree to which at least one EA had at least some warning signs about Sam, and what sort of signs those were. Maybe it will even spark some candid conversation, as I expect might be healthy, if the discussion quality is good.

Short version

My firsthand interactions with Sam were largely pleasant. Several of my friends had bad experiences with him, though, and some of them gave me warnings.

In one case, a friend warned me about Sam and I (foolishly) misunderstood the friend as arguing that Sam was pursuing ill ends, and weighed their evidence against other evidence that Sam was pursuing good ends, and wound up uncertain.

This was an error of reasoning. I had some impression that Sam had altruistic intent, and I had some second-hand reports that he was mean and untrustworthy in his pursuits. And instead of assembling this evidence to try to form a unified picture of the truth, I pit my evidence against itself, and settled on some middle-ground “I’m not sure if he’s a force for good or for ill”.

(And even if I hadn’t made this error, I don’t think I would’ve been able to change much, though I might have been able to change a little.)


Mid 2015-17(?)

The very first time I met Sam was at the afterparty of an EA Global. I forget which one. If memory serves, somebody introduced me to him as a person who was a staunch causal decision theorist, and someone who didn’t buy this logical decision theory stuff. We launched into an extended argument, and did not come to any consensus. This is the context in which I formed my first impressions.

Early 2018

Sam had moved into a group house a few blocks away from my house, while (co)founding Alameda Research. He employed a bunch of my friends, and (briefly) worked in the same office building as me.

One evening, a friend and I dropped by the group house and hung out. Sam and some other people were there, and a bunch of us stayed up late chatting about a wide range of topics. I found the conversation pleasant (and, in particular, didn’t get any bad vibes from Sam, and in fact enjoyed the spirit of candor reflected in his probing).

In early 2018, I heard secondhand from a bunch of my friends about a major conflict at Alameda Research resulting in a mass exodus from the org. A bunch of my friends said that they’d been burned in the conflict, and various people seemed bitter about their interactions with Sam in particular.

At the time, my only response was to file the observation away and offer sympathy. I didn’t pry for details. (In part because my default policy regarding community drama is to ignore it, on the theory that most drama is distracting and unimportant, and drama needs attention to breathe. And in part because it looked to me at a glance like Alameda was dead, which lowered my probability that a response was necessary.)

Late 2020

It wasn’t until late 2020, when I was hanging out with one such friend, that I got a sense that the Alameda conflict had been much worse than I’d previously thought.

I was told some stories that gave me pause, though I continued to avoid prying about the details. Some of those details, plus bits and pieces of other accounts, gave me the overall impression that Sam is unfair, socially ruthless, and willing to betray handshake agreements.[2]

There were some stories that seemed to me to cross a “Not Cool” line, and I encouraged my friend to speak up publicly about what happened, and offered to signal-boost them and back them up. They declined, and noted that they’d already told a variety of other community-members (to no effect).

During that 2020 interaction, my friend asked whether I thought that Sam experiencing great success would be good or bad, and I said that my best guess was that it would be good.

At the time, I made the error of conflating my friend’s question with something more like “Do you think Sam is secretly in this business for personal glorification, and would reveal his true selfish colors upon attaining great wealth and power, or do you think that he is ultimately trying to do good?”

I answered this alternative question, and thereby gave mixed signals to a friend who was perhaps probing what sort of conviction I’d have in my support of them. Oops.

Late 2020 - early 2021

In the period between my aforementioned meeting with my friend, and a period in early 2021 where I have some chat logs, I heard more about Sam.

Unfortunately, I don’t quite know what I learned when, nor who I learned it from. (Although I remember at least one piece coming from a friend by way of song.) This was the period when Ben Delo was being charged with some sort of cryptocurrency regulation violation, and I heard a variety of rumors about people from a variety of places, some of which might have conflated Ben Delo with Sam; or I might have mixed up the two in my recollection later.

Things I vaguely recall hearing (or maybe mishearing) in this time period (including possibly at the end of my late-2020 visit; my memory is fuzzy here):

  1. Sam was now a decabillionaire.

  2. Alameda Research had survived, and moved to Hong Kong.

  3. Alameda Research had moved to Hong Kong because the US crypto regulations were too strict.

  4. Alameda Research had committed KYC regulation violations, and its executives were no longer welcome in the US (and might be apprehended if they attempted to re-enter).

  5. Alameda Research had changed its name to FTX.

There might’ve been others. I didn’t pay particularly close attention. Note that not all of these are true. I’m currently fairly confident that (1) and (2) are true (Wikipedia says Alameda Research moved in 2019), and (5) is false. My guess is that (3) is true and (4) was conflating Sam with Ben Delo? But I haven’t checked in detail.

I do recall some friends and family observing that my community seemed adjacent to the cryptocurrency community, and wanting to talk about it, sometime in this time period.

I recall saying (to a family member, using “hyperbolic/​provocative” tone-markers) something along the lines of “Yeah, I have a friend who got into crypto trading and did everything by the books and wound up with a net worth of tens/​hundreds of millions of dollars. And I have another friend who played fast and loose with the regulations, whose net worth is now ten billion dollars. From this, we learn that the cost of doing everything completely by the books is about ten billion dollars, because ~ten billion dollars minus ~a hundred million dollars is of course ~ten billion dollars.” (I also recall repeating this musing at least twice from cache, to at least two different friends, in mid 2021.)

Early 2021

I have some chat logs from early 2021 (not too long after I learned that Sam was very wealthy now) where a friend asked for my take on Sam (in the context of whether to engage with his altruistic endeavors), and I said I was (literal quote) “a little wary of him, on account of thinking he has fewer principles than most community members”. I pointed my friend towards a mutual friend who’d had good interactions with Sam and a mutual friend who’d had bad interactions with Sam.

At about the same time, MIRI sold some MKR tokens (that had been donated to us) to Alameda Research because it was tricky to convert the MKR to USD on Coinbase Exchange, and Alameda had previously mentioned an interest in helping EAs with unwieldy crypto transactions. I interacted with Sam some at this time, to briefly get his take on some crypto questions while the channel was open.

Early 2022

Early in 2022, I swung by the FTX offices to briefly visit with some folks associated with the FTX Future Fund, while I was in the Bahamas for other business.

My next interaction with Sam was in a group setting, when we were both at an EA group-house in Washington, D.C. simultaneously. We hung out; it was a good time. I recall having some lingering discomfort around the “hey, I hear you were mean to my friends” thing, but not enough to bring it up out-of-the-blue in a group context (and it’s hard to say how much of a flinch there really was, on account of hindsight bias).

Shortly before November 8th, 2022

During the period where FTX was looking pretty shaky (so probably November 5th, 6th, or 7th?), I was coincidentally introduced to a cryptocurrency trader. He heard that I had some acquaintance with Sam, and said that something was up with FTX, and asked whether I thought Sam had stolen customers’ money. I said “I’ve heard that he’s often a prick, and that he’s skirted a variety of regulations, but I’d be preeeettty surprised if he didn’t have the customer money”.

(I’m glad that I was asked the question point-blank, out loud, on Nov 5-7, because otherwise I think there’s a decent chance that hindsight bias today would cause me to inflate my memories of all the reasons I had for suspicion, and that I’d have forgotten how, on balance, I was surprised that Sam didn’t have the customer money, even in the wake of early suspicion.)

In the same conversation, I also vaguely recall reporting that I thought Sam was trying to legitimately do good with his money, when queried about whether the “EA” thing was legit.

(Embarrassingly, I don’t think it was until those conversations that I finally learned that Alameda Research had survived. My previous hypothesis was that it’d burned down, and FTX had risen from its ashes.)

Things I did right and wrong

I’ll catalog some places where I’m either particularly pleased or displeased with my performance, in rough chronological order. Later in this post, I’ll record the general lessons I’ve managed to extract.[3]

I didn’t press for details

I had at least two opportunities (in early 2018 and in late 2020) to ask my friends for more details about their bad experiences, and I neither sought details then nor came back with questions later.

I was dissuaded from poking around in part by my impression that my friend was under some sort of non-disparagement agreement.

Reflecting now, my current guess is that it was an error for me to not pry just because I thought non-disparagement agreements were involved.

I think that it would have been a good idea for me to explicitly encourage my friend to tell me more, insofar as my friend was willing to trust me to keep things confidential, and insofar as this was within the bounds of their idealized agreements (acknowledging that Earth is often pretty messed up about what the paper contracts literally say). Knowing more would have made it more likely that I could connect the dots and respond better (in ways that didn’t betray their confidence).

I failed to listen properly to my friends

When my friend asked me whether I thought Sam achieving great success would be good or bad, I was not consciously tracking the difference between the hypothesis “Sam is amoral and will intentionally use power for ill ends, if he acquires it” and the hypothesis “Sam is reckless and harmful in his pursuits, such that ill ends will result from him acquiring power, regardless of whether or not he ultimately has altruistic intent”. This is a foolish and basic mistake that I made multiple times. Oops.

I misheard my friend as arguing for the former, and weighed their arguments against my impression that Sam in fact had altruistic intent at heart, and wound up feeling uncertain (as evidenced by the later chat logs).

Commenting on an earlier draft of this post, my friend relayed to me the experience of trying to warn community members that Sam exhibited sketchy behavior, only to be rebuffed by claims to the effect of ~”if there are going to be sociopaths in power, they might as well be EA sociopaths”.

I don’t doubt my friend’s claim. I didn’t see other people respond to their objections (and, if I understand correctly, I was only late and incidental to their overall experience). Separately, I can see how my own response fits into that overall impression.

My recollections don’t support the hypothesis that I personally made the specific error of thinking that the sociopaths in power might as well be EA sociopaths (and I don’t know to what degree my friend read me as saying this), but human brains are not entirely trustworthy artifacts when it comes to memories that paint the rememberer in a bad light, so do with that what you will.

On my own recollections, what happened in my case is more like: I believe ~nobody is evil and ~everything is broken, and when I see humans accused of evil I get all defensive and argue that they’re merely broken. (I have exhibited this pattern in a few other instances, which I’m now taking a second look at.)

In this case, in my defensiveness regarding Sam having altruistic intent, and my decision not to direct much attention to this topic, I entirely missed the point that broken people can also be dangerous.[4]

I think I was basically modeling the question of “is it good if Sam experiences great success?” as being a question of his ultimate ends, and thus turning on whether he was secretly evil (or suchlike). And I wasn’t persuaded that he was secretly evil.

But that very breakdown considers only how Good things would be if Sam got to choose the ends by wishing on a genie, without taking into account the (real!) risk of shitty ends caused by unethical means!

My error here perhaps rhymes with the gloss “the sociopaths in power might as well be our sociopaths”. But as far as I recall, I didn’t explicitly make (and wouldn’t have endorsed) any argument of the form “Sam is unlikely to cause massive collateral damage in his pursuit of wealth and power”; I was simply failing to notice that the answer to the given question depended on how much harm we should expect Sam to do along the way. (Oops. It feels obvious in retrospect. Sorry.)

(Extra context: if I recall correctly, I was not, at the beginning of that conversation in 2020, aware that Sam was wildly wealthy. I assumed that Alameda had died in 2018, and actually kinda thought we were discussing water under the bridge plus separate edgy thought experiments about whether the CEV of a self-professed Good-aligned ~sociopath is better or worse in expectation than the status quo (which question notably does not weigh harms that people would commit in their own recklessness). And I also didn’t reexamine the conversation at all upon learning that Sam had become a decabillionaire. I can be kinda clueless sometimes. Oops.)[5]

I pit my evidence against itself

I (foolishly) misunderstood my friend as arguing that Sam was pursuing ill ends, and weighed their evidence against other evidence that Sam was pursuing good ends, and wound up uncertain.

This was an error of reasoning. I had some impression that Sam had altruistic intent, and I had some second-hand reports that he was mean and untrustworthy in his pursuits. And instead of assembling this evidence to reveal the truth, I pit my evidence against itself, and settled on some middle-ground “I’m not sure if he’s a force for good or for ill” that didn’t fit any of it.

I internally (implicitly) saw “strong evidence on both sides”, and shrugged, and marked myself down as uncertain. But in real life, there’s never strong evidence on both sides of a question about how the world is.

Falsehoods don’t have strong evidence in favor of them, that happens to be barely outweighed by even stronger evidence for the truth! All the evidence points towards a single reality!

Example: If you have 15 bits of evidence that Mars is in the east, and 14 bits of evidence that Mars is in the west, you shouldn’t be like, “Hmm, so that’s one net bit of evidence for it being in the east” and call it a day. You should be like, “I wonder if Mars moves around?”

“Or if there are multiple Marses?”

“Or if I’m moving around without knowing it?”

Failing that, at least notice that you’re confused and that you don’t have a single coherent model that accounts for all the evidence.
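The "bits" arithmetic above can be made explicit. Here is a toy sketch (the function name and framing are mine, not the post's): naively combining independent pieces of evidence means summing their log-odds, so 15 bits one way minus 14 bits the other nets out to 2:1 odds, i.e. a posterior of only ~2/3.

```python
def posterior_from_bits(bits_for, bits_against):
    """Naively combine evidence by summing log2-odds.

    Assumes a 1:1 prior and that every piece of evidence is independent
    and correctly assessed -- exactly the assumption that fails when
    "strong evidence on both sides" shows up.
    """
    net_bits = bits_for - bits_against
    odds = 2.0 ** net_bits
    return odds / (1.0 + odds)

# 15 bits that Mars is in the east vs. 14 bits that it's in the west:
p_east = posterior_from_bits(15, 14)
print(round(p_east, 3))  # prints 0.667
```

The absurdity is the signal: 29 bits of supposedly-correct evidence about a single fixed fact shouldn't net out to weak 2:1 odds. When the naive sum gives an answer like that, the right move isn't to report "67% east"; it's to suspect that the question isn't about a single fixed fact (Mars moves), or that the evidence isn't actually independent and well-calibrated.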

I was supposed to notice the tension, and seek some way that our apparently-contradictory evidence was not in fact in conflict.

Had I sought out a way to resolve the tension, I might have noticed that my friends were arguing not “Sam is pursuing Evil (despite your evidence to the contrary)” but rather “Sam is the sort of creature who does harm even in his pursuit of Good (and him succeeding is dangerous on those grounds)”.

But I wasn’t thinking about it clearly or carefully. I was just tossing various observations on different sides of an improperly-unified scale, and watching where it balanced.

And so when Sam did make a zillion dollars and start visibly putting it towards pandemic prevention and nuclear war prevention and so on, that (subconsciously and implicitly) felt to me like it pulled down one side of the scales, and raised the other.

If I’d been thinking properly, I would have noticed that his shocking wealth was not in contradiction with the evidence on the other side of the scale, and was in fact easy to square with the hypothesis that his methods have been amoral. I might have even managed to explicitly form the hypothesis that his gains were ill-gotten.

But I wasn’t thinking properly about the matter (or much at all, to be honest), and all the visible evidence of Goodness felt (at a glance) like it canceled out the competing evidence of amorality. A foolish mistake.

I pride myself in my ability to tease apart subtle tensions, and to avoid pitting the evidence against itself, in my areas of expertise. Clearly I have some work to do, to apply these skills more consistently or more broadly.

I think this was pretty cool of me.

Even though I was (foolishly) skeptical of (what I thought was) my friend’s “Sam is ill-intentioned” hypothesis, I nevertheless noticed that the stories my friend recounted sounded like Sam had crossed a line, and I encouraged them to speak up about it, and offered to back them up.

(My memories suggest that I offered to speak up about it myself at their behest, and take what flak I could, if they wanted me to, although that memory is significantly less clear and could easily be rose-tinted hindsight.)[6]

I failed to prod others to action

I did basically nothing in response to learning that my friend’s concerns had theretofore fallen on deaf ears.

A cooler version of me would have taken that as more of a red flag, and made a list of deaf-eared people to pester, and then pestered them.

I didn’t, and I regret that.

I failed to notice discomforts

When people make big and persistent mistakes, the usual cause (in my experience) is not something that comes labeled with giant mental “THIS IS A MISTAKE” warning signs when you reflect on it.

Instead, tracing mistakes back to their upstream causes, I think that the cause tends to look like a tiny note of discord that got repeatedly ignored—nothing that mentally feels important or action-relevant, just a nagging feeling that pops up sometimes.

To do better, then, I want to take stock of those subtler upstream causes, and think about the flinch reactions I exhibited on the five-second level and whether I should have responded to them differently.

Looking at the sort of things I said to friends and family in 2021, I was clearly aware that Sam is the sort of person who readily skirts regulations.

I wish I lived in a world where this was a damning condemnation, but alas, my current model is that regulations are often unduly stifling and generally harmful.[7]

I’d mentally binned various KYC-ish cryptocurrency regulations in the “well-intentioned but poorly-implemented” category, and did not in the slightest suspect FTX of mixing funds with customer assets. (I didn’t even yet have separate ‘FTX’ and ‘Alameda’ concepts; I just wasn’t paying that much attention.)

Looking back, I think that I remember experiencing little mental flinches when I referred to Sam (in passing) as a “friend” (although my brain might be exaggerating the memories in a self-serving /​ hindsight-biased way), on account of having unresolved grievances of the form “I’ve heard you were pretty shitty to people I care about”.

To be clear, though, the flinches were not of the form “maybe he’s stealing from clients”—I don’t recall the thought even occurring to me that Sam might be committing financial fraud or doing anything similarly bad. They were of the form “he seems to have hurt people I care about”.

And, for the avoidance of doubt, if not for hearing that he’d hurt my friends, I’d’ve unflinchingly called him “friend”—we shared a community, we’d had a few long involved philosophical arguments, we’d stayed up late talking at his house; that’s enough for me.[8]

I also recall similar little flinches at (e.g.) the group house in D.C., or on Nov 5-7 (when I struggled again for words for my relationship to Sam, and settled—if I recall correctly—on “not-very-close friend”, with some caveats about how I’d heard tell of shady behavior).

Reflecting a bit further, I think that the things I told people about Sam were colored somewhat by the tone of their inquiries. Once Sam started getting press for his donations, the tenor of some friend/​family inquiries became more skeptical, and my responses changed to match: people would ask me questions like “is this Sam guy legit?”, and I would mentally substitute questions like “are these EA charities he’s donating to legitimate, and are they actually getting the money?”, which I felt much more readily able to answer. (Whereas in early 2021, I was more likely to mention the Hong Kong rumors, or the bad blood with my friends.)

But even then, I recall flashes of unease.

If I’d noticed them explicitly, perhaps I could have traced them back to the source. Perhaps that would have been the catalyst needed for me to stop pitting my “he hurt my friends” evidence against my “he’s trying to do a lot of good” evidence. And if I’d found the (pretty basic!) way to reconcile all the evidence simultaneously, it might have led me straight to the truth.

General takeaways

Pry more

I think there’s a way to pry into friends’ bad social experiences, fueled both by honest curiosity for juicy gossip and by genuine compassion, that makes it easier to support one’s friends.

Had I done more of this in the case of Sam and Alameda, I might have had more puzzle-pieces to work with, and I plan to do more prying into my friends’ concerns going forward.

(This is a lesson that I’ve already been taught once before, by some bad actors in the rationality community. That said, I wasn’t taught that particular lesson until mid 2018, so by my own accounting, I get only one “learns slowly” strike from the late 2020 conversation.)

I also think that I should think of non-disparagement agreements as pertaining to public knowledge, not to knowledge shared in confidence between friends, and that I shouldn’t let their existence dissuade me from inquiring further.

I think I’m basically already better at this, having written this all out explicitly.

Don’t pit your evidence against itself

Fixing the “I pit my evidence against itself” problem is easy enough once I’ve recognized that I’m doing this (or so my visualizer suggests); the tricky part is recognizing that I’m doing it.

One obvious exercise for me to do here is to mull on the difference between uncertainty that feels like it comes from lack of knowledge, and uncertainty that feels like it comes from tension/​conflict in the evidence. I think there’s a subjective difference, that I just missed in this case, and that I can perhaps become much better at detecting, in the wake of this harsh lesson.

The other obvious way to notice when I’m doing this more readily is to get better at noticing my own unease in general.

Notice more unease

This is a key rationalist skill that, in my experience so far, I’ve always had more room to improve on.

I think my main action-item here is to mull on the particular moments of unease that I felt at various times, and attend to their connection to recent painful events, on the model that this helps hone my unease-detectors in general.

Have more backbone

A thing I feel particularly bad about is not confronting Sam at any point about the ways he hurt people I care about.

I can come up with a variety of excuses: I basically only saw him in group contexts; once he apparently had tens of billions of dollars and was frantically running around trying to grow his wealth and put it towards good causes, it felt like a dick move to bring up years-old second-hand injuries out of the blue; I’d never heard his side of the story.

But also, failing to have the intent to grill him about it seems to have eroded my memory, and taken the edges off of my anger. I can still remember the sense of “yep, that crosses a line, fuck him, I’ll back you up if you need support” in that late 2020 conversation, and when I hold that memory fresh in my mind, I’m embarrassed by my conduct of being broadly cordial at a group house in 2022.

There’s a virtue, I think, to hearing about something that was Not Cool, and then… being unwilling to let it slide. I don’t think that, on my ideal ethics, I’d be required to confront Sam; avoiding him might also be permitted. And, on my ideal ethics, I’d definitely entertain the hypothesis that things looked very different from his own point of view. But I do think that, according to my ethics, I’m not supposed to hear concerns and then just slip silently back into generic cordiality.

(A part of me does protest, here, that this is particularly tricky in cases where the information is shared in confidence, in which case I don’t necessarily have license to confront Sam directly without violating that confidence. But, like, that’s not an excuse to slip silently into generic cordiality; it’s an excuse to notice that I’m chafing under confidentiality and then try to work out some other solution. Which might have, e.g., driven me to prod others to action.)

I think it’s plausible that I’ve gotten better at this simply by noticing the error, writing all this out, and reflecting on the embarrassment.

On blame

In writing this post, I worry that my words will be seen as giving social license to EAs to self-flagellate. So it seems important to reiterate that I’m against self-flagellation.

Having an overly pessimistic model of yourself is no more virtuous than having an overly optimistic model. Nor is it virtuous to exaggerate your faults to others. I’m not here trying to take a bunch of other people’s blame onto myself.

I’m disappointed with myself for not reflexively fitting all my evidence together into a single whole, and for failing to explicitly notice my unease multiple times. These are places where I strive to excel in general, and I intend to do better next time. But if your takeaway is that there’s a bunch of blame at my feet—

—well, actually, that’s fine, I don’t really care. But I do care if you adopt that stance toward the friends of mine who criticized Sam. I feel preemptively protective of my friends here, against an imagined internet mob who will proceed from here to allege that they didn’t do their part.

As far as I’m concerned, my friends who tried to warn people about Sam already well more than paid their dues, and the correct community response is “oops, we should have listened better” and not “why didn’t you shout louder?”[9]

I’ll also caveat that I’m not trying to place any blame at the feet of others who turned an apparently-deaf ear, given my current knowledge state. I don’t know what Sam’s side of the story sounded like, or whether there was some coordination failure where nobody felt like they were the one who could do something about it, or what. I am familiar with how clues that look obvious in retrospect can be difficult to assemble in advance, and with the phenomenon where diffusion of responsibility makes it hard for a community to do anything in response to warnings.

My impression, looking both within my own communities and at the broader world (that contains various other issues that lurk unseen for ages before being declared obvious in hindsight), is that this stuff is tricky to properly notice and address in advance.

As for myself, and the degree to which I personally turned a deaf ear—well, you already have my accounting, above. I think I definitely could have done better. I had lots of the puzzle-pieces, and if I’d been thinking better, I could have put them together and avoided a bunch of surprise.

I might even have been able to catalyze a public account of a bunch of the sketchy behavior at early Alameda, which might have caused the EA community to keep a bit more distance from Sam, which would plausibly have caused him to have a lower reputation, and somewhat fewer victims. Which would have been great!

But, also, that’s not quite what this document is about. The use of this sort of document, to me, is that I can improve my ability to think for next time. (And the use I’m imagining for the community is that I’m hoping to lead by example, when it comes to giving honest and relatively candid accounts that we can possibly learn from. Or something, I dunno, ask Rob Bensinger, he’s the one who exhumed this from my drafts.)

I’d be mining the situation for self-improvements even if Omega themself guaranteed that none of my actions could have averted any of the harm Sam did. I’m here taking harsh lessons and seeing what I can learn about how to think better and be cooler; I’m not here to weave a tale about how the outcome secretly depended deeply on my own actions.[10]

It’s not virtuous to pretend that outcomes depend on your efforts to a greater degree than they actually do.

A parting note

Oh, right, one more piece of accounting that I almost forgot:

Clearly,[11] my real opportunity to avert this whole catastrophe was to be more persuasive in that first CDT vs. LDT conversation. It seems likely to me that Sam had some deficit in modeling the consequences of being shady and untrustworthy in multi-agent decision problems, and if that deficit had been repaired back in ~2015, perhaps this whole mess could have been avoided. Mea culpa.

  1. ^

    I’m also not trying to document MIRI’s financial interactions with Sam, Alameda, FTX, or the FTX Foundation. Rob Bensinger collected that information here.

  2. ^

    I did not come away with the belief that Sam was defrauding his clients. I’m not aware of any fraud or theft having been part of the 2018-Alameda story. But I still don’t know all the details, and (e.g.) the details Naia Bouscal and others shared in November about Alameda’s early history go beyond what I recall learning.

  3. ^

    A notable absence in this list is that I did not pay much attention to Sam, or FTX, or Alameda. That definitely contributed to my failure to notice that bad stuff was happening, but I stand by the decision given what I knew at the time, because I have other stuff to do.

  4. ^

    The FTX debacle and other revelations have updated me a little toward “Sam’s ultimate goals may not have been altruistic at all”, but this is a pretty small update. Mostly my guess is that Sam’s bad behavior came from issues orthogonal to his ultimate goals, or was even exacerbated by his altruism. (E.g., a low-integrity person with altruistic ends may find it easier to rationalize bad behavior because the stakes are so high, or because the feeling of moral purity leaks out and contaminates everything you do with a sense of Virtuousness.)

    Reflecting on this post as a whole, I have an overall concern that it isn’t compassionate enough towards Sam. I worry about the social incentives to only speak up about this topic if you’re willing to flatten your models into caricatures and empathize with all and only the people it’s strategically savvy to empathize with.

    My best guess is still that Sam had a bunch of good intent, and tried to do lots of good, and really was serious about putting money towards good.

    I separately think there’s a pretty solid chance that he was (reckless and negligent and foolish and) not noticing that it wasn’t his money he was giving away; that he really did think that it was his own hard-earned money. Though I also entertain the hypothesis that he knew exactly what he was doing; and regardless, he’s at fault for the resultant destruction.

    I’m angry that (in effect, and by all appearances) an enormous number of people had their money stolen from them, after trusting Sam and FTX to do right by them. This has caused a great deal of hardship for a huge number of people, and through his (apparent) actions, Sam has in my eyes moved himself to the back of the compassion-line: any efforts we extend to help people should go to the victims first, long before they go to the perpetrators.

    I don’t really know how to walk the line between “you dicked over my friends”, and “you hurt and betrayed huge numbers of innocent people”, and “I nonetheless feel compassion for you as a fellow human being”, and “I’ve enjoyed hanging out on occasion”, and “I don’t in fact, in real life, think that you’re a one-dimensional villain with no rare praiseworthy qualities”, and “I deeply respect people dedicating their resources towards addressing deep and real issues that they see around them”, and “… but those were not your resources”, and an overarching “you were a deluded reckless harmful fool”.

    So for lack of knowing how to walk that line, I can at least comment on the problem in this footnote.

  5. ^

    To be clear, I don’t think it’s my friends’ fault that I got little info in these early conversations. I was not acting curious, because (as I mentioned earlier) I was under the impression that they were bound by various privacy agreements and I felt it would be antisocial to pry.

  6. ^

    Evidence for the theory that my recollections are rose-tinted: in an earlier draft of this post, I asked my friend whether I had in fact encouraged them to speak up and offered backup, and they answered something to the effect of: yes, but also in the same conversation you argued that Sam having lots of power was probably good for the world, which undermined the message. I’d completely forgotten about that bit, before they jogged my memory.

    More evidence for the rose-tinting theory: before my memory was jogged, I had a vague recollection that I had started out the conversation not knowing that Sam was, at that time, rich and powerful, and then learned that this was the case, and doubled down. But, after more recollection, I’m now pretty confident that I learned about Sam’s billionaire status a few months later, when a local country-music legend played a rendition of his song “My Girlfriend Left Me For A Billionaire” at a COVID-safe gathering. My clear memory of surprise from the song casts the fuzzy/​distant memory of doubling-down into deep suspicion.

  7. ^

    For example, I think ridesharing apps have created a large amount of social value, despite how—if I understand correctly—they were technically illegal in various places when they started out. And, for another example, I would prefer that websites stop showing me the “accept cookies” prompt and just use cookies, regardless of how illegal that is.

  8. ^

    Another factor that I recall weighing in the split-second word-choice: Once somebody has wealth and power, I’m more hesitant to use the label “friend”, for fear of exaggerating the strength of a relationship that it would be cool to have.

    And another factor: I’d never heard Sam’s side of the early-Alameda blowup story, and felt weird about passing strong judgment before hearing it.

    Ultimately, the choice was probably decided by the fact that English doesn’t have a great word for our relationship—“acquaintance” is three syllables and isn’t quite right, “co-community-member” is closer but it’s just way too long.

  9. ^

    Recall that my friends who worked at Alameda until the 2018 breakup didn’t commit egregious financial fraud. FTX’s behaviors were bad, as were the behaviors of Alameda after the exodus, but if we aren’t endorsing the Copenhagen Interpretation of Ethics, being in the blast radius of other people’s bad behavior does not make you evil too.

    (Though if it were just me at risk of being Copenhagened, I’d cut most of this section and not worry about it. If I’m exposing other people to risk of being Copenhagened by writing a blog post that touches on something they did, then I feel more of a responsibility to add in disclaimers like this.)

    I think it genuinely would have been really cool if some people in the Alameda blast zone had loudly publicly aired concerns in advance, despite how early attempts to talk about it apparently fell on deaf ears. But doing so also would have involved going significantly above and beyond the call of duty. The general policy of demanding everyone routinely take on that much personal cost is asking far too much of individuals—doubly so given that nobody I talked to knew about Sam’s apparent financial crimes (as far as I can tell), as opposed to more general shadiness.

    If your takeaway from this is “the people in the blast radius should have spoken up louder”, rather than “how can the community improve its mechanisms for incentivizing and aggregating this sort of knowledge”, then I think you’re taking quite the wrong message (while worsening the incentives, to boot).

  10. ^

    That’s why this doc focuses on my own errors—like acting more cordially than I endorse, given what I knew—rather than on bigger issues like mitigating the harm Sam did. Harm mitigation is important stuff, and there’s a place for it, but this document isn’t that place.

  11. ^

    This word is intended to be read with an intonation signifying that the following text is mostly joking (albeit perhaps with a small grain of truth).