I was one of the people who left at the time described. I don’t think this summary is accurate, particularly (3).
(1) seems the most true, but anyone who’s heard Sam on a podcast could tell you he has an enormous appetite for risk. IIRC he’s publicly stated they bet the entire company on FTX despite thinking it had a <20% chance of paying off. And yeah, when Sam plays League of Legends while talking to famous investors he seems like a quirky billionaire; when he does it to you he seems like a dick. There are a lot of bad things I can say about Sam, but there’s no elaborate conspiracy.
Lastly, my severance agreement didn’t have a non-disparagement clause, and I’m pretty sure no one’s did. I assume that you are not hearing from staff because they are worried about the looming shitstorm over FTX now, not some agreement from four years ago.
When said shitstorm dies down I might post more and under my real name, but for now the phrase “wireless mouse” should confirm me as someone who worked there at the time to anyone else who was also there.
I’m the person that Kerry was quoting here, and am at least one of the reasons he believed the others had signed agreements with non-disparagement clauses. I didn’t sign a severance agreement for a few reasons: I wanted to retain the ability to sue, I believed there was a non-disparagement clause, and I didn’t want to sign away rights to the ownership stake that I had been verbally told I would receive. Given that I didn’t actually sign it, I could believe that the non-disparagement clauses were removed and I didn’t know about it, and people have just been quiet for other reasons (of which there are certainly plenty).
I think point 3 is overstated but not fundamentally inaccurate. My understanding was that a group of senior leadership offered to buy Sam out, he declined, and he bought them out instead. My further understanding is that his negotiating position was far stronger than it should have been due to him having sole legal ownership (which I was told he obtained in a way I think it is more than fair to describe as backstabbing). I wasn’t personally involved in those negotiations, in part because I clashed with Sam probably worse than anyone else at the company, which likely would have derailed them.
That brings me to my next point, which is that I definitely had one of the most negative opinions of Sam and his actions at the time, and it’s reasonable for people to downweight my take on all of this accordingly. That said, I do feel that my perspective has been clearly vindicated by current events.
I want to push back very strongly against the idea that this was primarily about Sam’s appetite for risk. Yes, he has an absurd appetite for risk, but what’s more important is what kinds of risks he has an appetite for. He consistently displayed a flagrant disregard for legal structures and safeguards, a belief that rules do not apply to him, and an inclination to see the ends as justifying the means. At this stage it’s clear that what happened at FTX was fraud, plain and simple, and his decision to engage in that fraud was entirely in character.
(As a minor note, I can confirm that the “wireless mouse” phrase does validate ftxthrowaway as someone who was there at the time, though of course now that it has been used this way publicly once it will no longer be valid in the future.)
I’m curious if you (or any other “SBF skeptic”) have any opinion regarding whether his character flaws should’ve been apparent to more people outside the organizations he worked at, e.g. on the basis of his public interviews. Or alternatively, were there any red flags in retrospect when you first met him?
I’m asking because so far this thread has discussed the problem in terms of private info not propagating. But I want to understand if the problem could’ve been stopped at the level of public info. If so, that suggests that a solution of just getting better at propagating private info may be unsatisfactory—lots of EAs had public info about SBF, but few made a stink.
I’m also interested to hear “SBF skeptic” takes on the extent to which his character flaws were a result of his involvement in EA. Or maybe something about being raised consequentialist as a kid? Like, if we believe that SBF would’ve been a good person if it weren’t for exposure to consequentialist ideas, that suggests we should do major introspection.
One of the biggest lessons I learned from all of this is that while humans are quite good judges of character in general, we do a lot worse in the presence of sufficient charisma, and in those cases we can’t trust our guts, even when they’re usually right. When I first met SBF, I liked him quite a bit, and I didn’t notice any red flags. Even during the first month or two of working with him, I kind of had blinders on and made excuses for things that in retrospect I shouldn’t have.
It’s hard for me to say what people should have been able to detect from his public presence, because I haven’t watched any of his public interviews. I put a fair amount of effort into making sure that news about him (or FTX) didn’t show up in any of my feeds, because when it did I found it pretty triggering.
Personally, I don’t think his character flaws are at all a function of EA. To me, his character seems a lot more like what I hear from friends who work in politics about what some people are like in that domain. Given his family is very involved in politics, that connection seems plausible to me. This is very uncharitable, but: from my discussions with him he always seemed a lot more interested in power than in doing good, and I always worried that he just saw doing good as an opportunity to gain power. There’s obviously no way for me to have any kind of confidence in that assessment, though, and I don’t think people should put much weight on it.
Thanks for the reply!
In terms of public interviews, I think the most interesting/relevant parts are him expressing willingness to bite consequentialist/utilitarian bullets in a way that’s a bit on the edge of the mainstream Overton window, but I believe would’ve been within the EA Overton window prior to recent events (unsure about now). BTW I got these examples from Marginal Revolution comments/Twitter.
This one seems most relevant—the first question Patrick asks Sam is whether the ends justify the means.
In this interview, search for “So why then should we ever spend a whole lot of money on life extension since we can just replace people pretty cheaply?” and “Should a Benthamite be risk-neutral with regard to social welfare?”
In any case, given that you think people shouldn’t put much weight on your assessment, it seems to me that as a community we should be doing a fair amount of introspection. Here are some things I’ve been thinking about:
We should update away from “EA exceptionalism” and towards self-doubt. (EDIT: I like this thread about “EA exceptionalism”, though I don’t agree with all the claims.) It sounds like you think more self-doubt would’ve been really helpful for Sam. IMO, self-doubt should increase in proportion to one’s power. (Trying to “more than cancel out” the normal human tendency towards decreased self-doubt as power increases.) This one is tricky, because it seems bad to tell people who already experience Chidi Anagonye-style crippling self-doubt that they should self-doubt even more. But it certainly seems good for our average level of self-doubt to increase, even if self-doubt need not increase in every individual EA. Related: Having the self-awareness to know where you are on the self-doubt spectrum seems like an important and unsolved problem.
I’m also wondering if I should think of “morality” as being two different things: A descriptive account of what I value, and (separately) a prescriptive code of behavior. And then, beyond just endorsing the abstract concept of ethical injunctions, maybe it would be good to take a stab at codifying exactly what they should be. The idea seems a bit under-operationalized, although it’s likely there are relevant blog posts that aren’t coming to my mind. Like, I notice that the EA who’s most associated with the phrase “ethical injunctions” is also the biggest advocate of drastic unilateral action, and I’m not sure how to reconcile that (not trying to throw shade—genuinely unsure). EDIT: This is a great tweet; related.
Institutional safeguards are also looking better, but I was already very in favor of those and puzzled by lack of EA interest, so I can’t say it was a huge update for me personally.
This one is tricky, because it seems bad to tell people who already experience Chidi Anagonye-style crippling self-doubt that they should self-doubt even more.

EA self-doubt has always seemed weirdly compartmentalized to me. Even the humblest of people in the movement is often happy to dismiss considered viewpoints by highly intelligent people on the grounds that they don’t satisfy EA principles. This includes me—I think we are sometimes right to do so, but probably do so far too much nonetheless.
Seems plausible, I think it would be good to have a dedicated “translator” who tries to understand & steelman views that are less mainstream in EA.
Wasn’t sure about the relevance of that link?
(from phone) That was an example of an EA being highly upvoted for dismissing multiple extremely smart and well-meaning people’s life’s work as ‘really flimsy and incredibly speculative’, because he wasn’t satisfied that they could justify their work within a framework that the EA movement had decided is one of the only ones worth contemplating. As if that framework itself isn’t incredibly speculative (and therefore, if you reject any of its many suppositions, really flimsy).
Thanks!
I’m not sure I share your view of that post. Some quotes from it:
...he just believed it was really important for humanity to make space settlements in order for it to survive long-term… From what I could tell, [my professor] probably spend less than 10 hours seriously figuring out if space settlements would actually be more valuable to humanity than other alternatives.

...

Take SpaceX, Blue Origin, Neurolink, OpenAI. Each of these started with a really flimsy and incredibly speculative moral case. Now, each is probably worth at least $10 Billion, some much more. They all have very large groups of brilliant engineers and scientists. They all don’t seem to have researchers really analyzing the missions to make sure they actually make sense.

...

My impression is that Andrew Carnegie spent very little, if anything, to figure out if libraries were really the best use of his money, before going ahead and funding 3,000 libraries.

...

I rarely see political groups seriously red-teaming their own policies, before they sign them into law, after which the impacts can last for hundreds of years.
I don’t think any of these observations hinge on the EA framework strongly? Like, do we have reason to believe Andrew Carnegie spent a significant amount trying to figure out if libraries were a great donation target by his own lights, as opposed to according to the EA framework?
The thing that annoyed me about that post was that at the time it was written, it seemed to me that the EA movement was also fairly guilty of this! (It was written before the criticism/red teaming contest.)
I’m not familiar enough with the case of Andrew Carnegie to comment and I agree on the point of political tribalism. The other two are what bother me.
On the professor, the problem is there explicitly: you omitted a key line, ‘I tried asking for his opinion on existential threats’, which is a strongly EA-identifying approach, and one which many people feel is too simplistic. E.g. see Gideon Futurman’s EAGx Rotterdam talk when it’s up: he argues that the way EAs think about x-risk is far too simplified, focusing on single-event narratives and ignoring countless possible trajectories that could end in extinction or similar, any one of which is vanishingly unlikely but which collectively we should take much more seriously. Whether or not one agrees with this view, it seems to me to be one a smart person could reasonably hold, and it shows that by asking someone for ‘his opinion on existential threats, and which specific scenarios these space settlements would help with’, you’re pigeonholing them into an EA-aligned, single-event way of thinking.
As for Elon Musk, I think the same problem is there implicitly: he’s written a paper called ‘Making Humans a Multiplanetary Species’, spoken extensively on the subject and spent his life thinking that it’s important, and while you could reasonably disagree with his arguments, I don’t see any grounds for dismissing them as ‘really flimsy and incredibly speculative’ without engagement, unless your reason for doing so is ‘there exists a pool of important research which contradicts them and which I think is correct’. There are certainly plenty of other smart people who think as he does, some of them EAs (though maybe that doesn’t contribute to my original complaint). Since there’s a very clear mathematical argument that it’s harder to kill all of a more widespread and numerous civilisation, to say that the case is ‘really flimsy’, you basically need to assume the EA-aligned narrative that AI is highly likely to kill us all.
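To make that mathematical argument concrete, here is a minimal toy calculation (all numbers invented). The load-bearing assumption is that catastrophes hit settlements independently, which is precisely what the ‘AI kills us all’ counterargument denies:

```python
# Toy model: if a catastrophe destroys each of n settlements independently
# with probability q, extinction requires destroying all of them at once.
def extinction_probability(q: float, n: int) -> float:
    """P(all n settlements destroyed) under the independence assumption."""
    return q ** n

for n in (1, 2, 5):
    print(f"n={n}: P(extinction) = {extinction_probability(0.5, n):.4f}")
# n=1: 0.5000, n=2: 0.2500, n=5: 0.0312 -- dispersal helps a lot against
# independent risks, and not at all against a risk correlated across all
# settlements (e.g. a misaligned AI that can reach every one of them).
```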
Thanks!
What’s interesting about this interview clip though is that he seems to explicitly endorse a set of principles that directly contradict the actions he took!
Well that’s the thing—it seems likely he didn’t see his actions as contradicting those principles, which suggests that they’re actually a dangerous set of principles to endorse, even if they sound reasonable. That’s what’s really got me thinking.
I wonder if part of the problem is a consistent failure of imagination on the part of humans to see how our designs might fail. Kind of like how an amateur chess player devotes a lot more thought to how they could win than how their opponent could win. So if the principles Sam endorsed are at all recoverable, maybe they could be recovered via a process like “before violating common-sense ethics for the sake of utility, go down a massive checklist searching for reasons why this could be a mistake, including external observers in the decision if possible”.
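As a very rough sketch of what that checklist process could look like (the checklist items and the external sign-off requirement below are invented placeholders, not anything proposed in this thread beyond the sentence above):

```python
# Illustrative only: the proposed safeguard as a decision procedure that
# defaults to "no" unless every check passes and an outside observer agrees.
CHECKLIST = [
    "Searched hard for reasons this could be a mistake?",
    "Expected-utility case survives pessimistic assumptions?",
    "No reversible alternative captures most of the value?",
]

def may_override_commonsense_ethics(answers: dict[str, bool],
                                    external_signoff: bool) -> bool:
    """Permit the override only if every checklist item was cleared AND an
    external observer signed off; any missing answer counts as a failure."""
    return external_signoff and all(answers.get(q, False) for q in CHECKLIST)
```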
My guess is standard motivated reasoning explains why he thought he wasn’t in violation of his stated principles.
Question, but why do you think the principles were dangerous, exactly? I am confused about the danger you state.
I think your first paragraph provides a potential answer to your second :-)
There’s an implicit “Sam fell prey to motivated reasoning, but I wouldn’t do that” in your comment, which itself seems like motivated reasoning :-)
(At least, it seems like motivated reasoning in the absence of a strong story for Sam being different from the rest of us. That’s why I’m so interested in what people like nbouscal have to say.)
So you think there’s too much danger of cutting yourself and everyone else via motivated reasoning, à la Dan Luu’s “Normalization of Deviance”, and that the principles have little room for errors in implementing them, is that right?
Here’s a link to it:

https://danluu.com/wat/

And a quote:

most human beings perceive themselves as good and decent people, such that they can understand many of their rule violations as entirely rational and ethically acceptable responses to problematic situations. They understand themselves to be doing nothing wrong, and will be outraged and often fiercely defend themselves when confronted with evidence to the contrary.
I’m not sure what you mean by “the principles have little room for errors in implementing them”.
That quote seems scarily plausible.
EDIT: Relevant Twitter thread
Specifically, I was saying that wrong results would come up if you failed at one of the steps of reasoning, and there’s no self-correction mechanism for the kind of bad reasoning Sam Bankman-Fried was doing.
I do feel that my perspective has been clearly vindicated by current events.

Can I ask the obvious question of whether you made money by shorting FTT? You were both one of the most anti-FTX and most still involved in crypto trading, so I suspect if you didn’t then no one did.
PS: apologies for burning the “wireless mouse” commons. If others want to make throwaways, feel free to DM me what that is referring to and I will publicly comment my verification.
Also no non-disparagement clause in my agreement. FWIW I was one of the people who negotiated the severance stuff after the 2018 blowup, and I feel fairly confident that that holds for everyone. (But my memory is crappy, so that’s mostly because I trust the FB post about what was negotiated more than you do.)
DM’d you.
Confirming this account made an Alameda Research reference in my DMs.
… I assume you realise that that narrows you down to one of two people (given it’s safe to assume Nishad is not currently spending his time on the EA Forum)
I do think I was probably just remembering incorrectly about this, to be honest. I looked back through things from then, and it looks like there was a lot of back-and-forth about the inclusion of an NDA (among other clauses), so it seems very plausible that it was just removed entirely during that negotiation (aside from the one in the IP agreement).
yep, not too worried about this. thanks for flagging :)
Here are some questions/content that might be interesting to discuss, if you’re interested?
I’ve been on leave from work due to severe burnout for the last couple months (and still am), and was intentionally avoiding seeing anything about SBF/FTX outside of work until recent events made that basically impossible. So no, I didn’t personally trade on any of this at all.
Fair. Sorry to hear that, I hope you can go back to ignoring the situation soon!
Can you answer two questions related to the source of SBF’s early business wealth?
Were the Kimchi arb returns real?
As you know, the “Kimchi premium” was this difference in BTC price between Korea (Japan?) and the rest of the world.
The narrative is that SBF arbed this price difference to make many millions and create his early wealth.
The Sequoia puff piece tells this cute story:

Curious, SBF had started looking into crypto—and almost immediately noticed something strange. Bitcoin was trading at a higher price in Japan and Korea than it was in the U.S. In theory, this should never happen because it represents a riskless profit opportunity—in other words, a free lunch. One simply buys Bitcoin at the lower price, sells it at the higher price, and pockets the difference. Jane Street built an empire on high-frequency trades that took advantage of fraction-of-a-cent price differences. But here was Bitcoin, trading at around $15,000 in South Korea: an unheard-of 50 percent price premium.
After SBF’s fall, Twitter speculation says this is dubious.
This is because the cause of the Kimchi premium was strict legal capital controls, and the liquidity was orders of magnitude too small to produce the wealth SBF later used. At best, SBF was actively breaking laws with this trade. The amount of money he could make may have been too small to justify the narratives around his early success (see the toy back-of-the-envelope sketch after these questions).
Do you have any comments on the above?
Jaan Tallinn investment
Tallinn later ended up funding SBF with $50M.
What would you say to the speculation that it was this funding, and not the Kimchi arb, that really launched SBF’s career?

If this is mostly true, the takeaway is that there’s little cleverness or competency being expressed here?

It seems like power, money and access led to SBF’s success. This theme would fit with SBF’s later behavior, with bluffing and overawing spend.

That tradition seems hollow and bad, and maybe contagious to the things that SBF created or touched.

This could be useful in some way? It seems like a vector EA or EA PR could take to counter this.
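Returning to the Kimchi-premium liquidity point above: here is a toy back-of-the-envelope (all figures invented) showing why even a huge premium need not translate into huge wealth once capital controls cap how much you can cycle through the trade:

```python
# Toy arb arithmetic: per-cycle profit is (premium - round-trip costs) times
# however much capital the controls actually let you move through the loop.
def arb_profit(capital: float, premium: float, cost_rate: float,
               daily_transfer_cap: float, days: int) -> float:
    """Cumulative profit when each day you can cycle at most
    min(capital, daily_transfer_cap) through the buy-low/sell-high loop."""
    per_cycle = min(capital, daily_transfer_cap)
    return per_cycle * (premium - cost_rate) * days

# A 50% premium sounds enormous, but if controls limit you to moving
# $100k/day and round-trip costs eat 10 points, a month nets $1.2M --
# real money, yet far short of the fortunes in the narrative.
print(arb_profit(10_000_000, 0.50, 0.10, 100_000, 30))  # 1200000.0
```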
I don’t mind sharing a bit about this. SBF desperately wanted to do the Korea arb, and we spent quite a bit of time coming up with any number of outlandish tactics that might enable us to do so, but we were never able to actually figure it out. The capital controls worked. The best we could do was predict which direction the premium would go and trade into KRW and then back out of it accordingly.
Japan was different. We were able to get a Japanese entity set up, and we did successfully trade on the Japan arb. As far as I know we didn’t break any laws in doing so, but I wasn’t directly involved in the operational side of it. My recollection is that we made something like 10-30 million dollars (~90% CI) off of that arb in total, but I’m not at all confident on the exact amount.
Is that what created his early wealth, though? Not really. Before we all left, pretty much all of that profit had been lost to a series of bad trades and mismanagement of assets. Examples included some number of millions lost to a large directional bet on ETH (that Sam made directly counter to the predictions of our best event trader), a few million more on a large OTC trade in some illiquid shitcoin that crashed long before we could get out of it, another couple million in a series of XRP transfers that nobody noticed had never arrived, and that had fallen in value by something like 90% when they finally showed up much later, and various other random small things like a junior trader accidentally transferring half a million dollars of USDT to a BTC address (or something like that) due to a complete lack of safeguards on transfers, etc. Not to mention absurd levels of expenditures, e.g. an AWS bill that at one point reached about a quarter million dollars per month.
My knowledge of the story ends when we left, and my recollection is that at that point the Japan arb had long been closed and most of our profits from it had been squandered. I don’t know how he achieved his later success, but if I were to guess, I’d say it probably has a lot more to do with setting up FTX, launching highly predatory instruments like leveraged ETF tokens on it, and doing similarly shady stuff to the things that brought it all crashing down, but during a bull market that made all of those risks pay off. That’s entirely guesswork though, I have no inside knowledge about anything that happened after April 2018.
Note: All of this is purely from memory, I have not cross-checked it with anyone else who was there, and it could be substantially wrong in the details. It has been a long time, and I spent most of that time trying to forget all about it. I’m sharing this because I believe the broad strokes of it to be accurate, but please do not update too strongly from it, nor quote it without mentioning this disclaimer.
What about the GBTC arb trade? Did Alameda get into that during your time there?
Good question, but tbh I just don’t remember the answer.
Thank you for sharing, I can understand why you might be feeling burnt out!! I’ve been in a workplace environment that reminds me of this, and especially if you care about the people and projects there...it’s painful.
Here are some questions/content that might be interesting to discuss? (You might not want to, given your fatigue, though.)
Thanks for sharing this nbouscal. How many people did you tell about this at the time?
Personally, I remember telling at least a handful of people at the time that Sam belonged in a jail cell, but I expect that people thought I was being hyperbolic (which was entirely fair, I was traumatised and was probably communicating in a way that signalled unreliability).
I was told that conversations were had with people in leadership roles in EA. I wasn’t part of those conversations and don’t know the full details of what was discussed or with whom.
It would be awesome for the names of senior people who knew to be made public, plus the exact nature of what they were told and their response or lack thereof.
I think this could be a nice-to-have, but really, I think it’s too much to ask: “For every senior EA, we want a long list of exactly each thing they knew about SBF.”
This would probably be a massive pain, and much of the key information will be confidential (for example, informants who want to remain anonymous).
My guess is that there were a bunch of flags that were more apparent than nbouscal’s stories.
I do think we should have really useful summaries of the key results. If there were a few people who were complicit or highly negligent, then that should be reported, and appropriate actions taken.
I strongly believe it is highly relevant to know who knew what, and when, so that these people are held to account. I don’t think this is too much to ask, nor does it have to be arduous in the way you described, with every name gathered at maximum fidelity. I see so many claims that “key EA members knew what was going on” and never any sort of name associated with them.
I agree this is really important and would really, really want it to be figured out, and key actions taken. I think I’m less focused on all of the information from such a discovery being made public, as opposed to much of it being summarized.
A summary of sorts is being compiled here:
What would you suggest might be appropriate actions for complicity or negligence?
I don’t feel like I’m in a good place to give a good answer. First, I haven’t really thought about it nor am I an expert in these sorts of matters.
Second, I’m like several layers deep in funding structures that start with these people. It’s sort of like asking me to publicly write what I love/hate, objectively, about my boss.
I think I could say that I’d expect appropriate actions to look a lot like they do with top companies (mainly ones without lots of known management integrity problems). At these companies, I believe that when some officials are investigated for potential issues, often they’re given no punishment, and sometimes they’re fired. It really depends on the details of the findings.
I think it is very important to understand what was known about SBF’s behaviour during the initial Alameda breakup, and for this to be publicly discussed and to understand if any of this disaster was predictable beforehand. I have recently spoken to someone involved who told me that SBF was not just cavalier, but unethical and violated commonsense ethical norms. We really need to understand whether this was known beforehand, and if so learn some very hard lessons.
It is important to distinguish different types of risk-taking here. (1) There is the kind of risk-taking that promises high payoffs but with a high chance of the bet falling to zero, without violating commonsense ethical norms. (2) There is risk-taking in the sense of being willing to risk it all by secretly violating ethical norms to get more money. One flaw in SBF’s thinking seemed to be that risk-neutral altruists should take big risks because the returns can only fall to zero. In fact, the returns can go negative: e.g. all the people he has stiffed, and all of the damage he has done to EA.
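To make the ‘returns can go negative’ point concrete, here is a minimal expected-value sketch (all numbers invented): the same bet flips from attractive to negative-EV once the worst case sits below zero rather than at it:

```python
# Toy EV comparison: a bet whose worst case is (wrongly) modeled as zero,
# vs. the same bet once norm violations put real negative outcomes in play.
def expected_value(outcomes: list[tuple[float, float]]) -> float:
    """Sum of probability * payoff over (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

floored = [(0.2, 10_000.0), (0.8, 0.0)]             # naive: worst case is 0
with_downside = [(0.2, 10_000.0), (0.8, -5_000.0)]  # worst case is negative

print(expected_value(floored))        # 2000.0 -> looks clearly worth taking
print(expected_value(with_downside))  # -2000.0 -> same bet is now a loser
```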
Are you in a position to be more specific about what SBF did that this is referring to?
no