The FTX Future Fund team has resigned
We were shocked and immensely saddened to learn of the recent events at FTX. Our hearts go out to the thousands of FTX customers whose finances may have been jeopardized or destroyed.
We are now unable to perform our work or process grants, and we have fundamental questions about the legitimacy and integrity of the business operations that were funding the FTX Foundation and the Future Fund. As a result, we resigned earlier today.
We don’t yet have a full picture of what went wrong, and we are following the news online as it unfolds. But to the extent that the leadership of FTX may have engaged in deception or dishonesty, we condemn that behavior in the strongest possible terms. We believe that being a good actor in the world means striving to act with honesty and integrity.
We are devastated to say that it looks likely that there are many committed grants that the Future Fund will be unable to honor. We are so sorry that it has come to this. We are no longer employed by the Future Fund, but, in our personal capacities, we are exploring ways to help with this awful situation. We joined the Future Fund to support incredible people and projects, and this outcome is heartbreaking to us.
We appreciate the grantees’ work to help build a better future, and we have been honored to support it. We’re sorry that we won’t be able to continue to do so going forward, and we deeply regret the difficult, painful, and stressful position that many of you are now in.
To reach us, grantees may email grantee-reachout@googlegroups.com. We know grantees must have many questions, and in our personal capacities we will try to answer them as best as we can given the circumstances.
Nick Beckstead
Leopold Aschenbrenner
Avital Balwit
Ketan Ramakrishnan
Will MacAskill
What do EA and the FTX Future Fund team think of Kerry Vaughan’s claim that Sam Bankman-Fried engaged in severely unethical behavior in the past, and that EA and FTX covered it up and laundered his reputation, effectively letting him get away with it?
I’m posting because, if true, this suggests big changes to EA norms are necessary to deal with bad actors like him, and that Sam Bankman-Fried should be outright banned from the forum and from EA events.
Link to tweets here:
https://twitter.com/KerryLVaughan/status/1590807597011333120
I want to clarify the claims I’m making in the Twitter thread.
I am not claiming that EA leadership or members of the FTX Future Fund knew, while working at the Future Fund, that Sam was engaging in fraudulent behavior.
Instead, I am saying that friends of mine in the EA community worked at Alameda Research during the first 6 months of its existence. At the end of that period, many of them suddenly left all at once. In talking about this with people involved, my impression is:
1) The majority of staff at Alameda were unhappy with Sam’s leadership of the company. Their concerns included his taking extreme and unnecessary risks and losing large amounts of money; poor safeguards around moving money; poor capital controls, including a lack of distinction between money owned by investors and money owned by Alameda itself; and Sam generally being extremely difficult to work with.
2) The legal ownership structure of Alameda did not reflect the ownership structure that had been agreed to by the parties involved. In particular, Sam registered Alameda under his sole ownership and not as jointly owned by him and his cofounders. This was not thought to be a problem because everyone trusted each other as EAs.
3) Eventually, the conflict got serious enough that a large cohort of people decided to quit all at once. Sam refused to honor the agreed ownership structure of Alameda and used his legal ownership to screw people out of their rightful ownership stake in Alameda Research.
4) Several high-ranking and well-respected members of the EA community with more familiarity with the situation believed that Sam had behaved unethically in his handling of the situation. I heard this from multiple sources personally.
5) I believe the basic information above circulated widely among EA leadership and was known to some members of FTX Future Fund.
A person who did work at Alameda during this period described Sam’s behavior as follows:
Additionally, Jeffrey Ladish has a Twitter thread that further suggests that concerns about Sam’s business practices were somewhat widespread.
Information about pre-2018 Alameda is difficult to obtain because the majority of those directly involved signed NDAs before their departure in exchange for severance payments. I am aware of only one employee who did not. The other people who can speak freely on the topic are early investors in Alameda and members of the EA community who heard about Alameda from those directly involved before they signed their NDAs.
I want to add that I am reasonably afraid that my discussing this will make me some powerful enemies in the EA community. I didn’t raise this issue sooner because of this concern. If anyone else wants to step up and reveal their information or otherwise help agitate strongly to ensure all the information about this situation comes to light, that would be most appreciated.
I think this is really important.
I was one of the people who left at the time described. I don’t think this summary is accurate, particularly (3).
(1) seems the most true, but anyone who’s heard Sam on a podcast could tell you he has an enormous appetite for risk. IIRC he’s publicly stated they bet the entire company on FTX despite thinking it had a <20% chance of paying off. And yeah, when Sam plays League of Legends while talking to famous investors he seems like a quirky billionaire; when he does it to you he seems like a dick. There are a lot of bad things I can say about Sam, but there’s no elaborate conspiracy.
Lastly, my severance agreement didn’t have a non-disparagement clause, and I’m pretty sure no one’s did. I assume that you are not hearing from staff because they are worried about the looming shitstorm over FTX now, not some agreement from four years ago.
When said shitstorm dies down I might post more and under my real name, but for now the phrase “wireless mouse” should confirm me as someone who worked there at the time to anyone else who was also there.
I’m the person that Kerry was quoting here, and am at least one of the reasons he believed the others had signed agreements with non-disparagement clauses. I didn’t sign a severance agreement for a few reasons: I wanted to retain the ability to sue, I believed there was a non-disparagement clause, and I didn’t want to sign away rights to the ownership stake that I had been verbally told I would receive. Given that I didn’t actually sign it, I could believe that the non-disparagement clauses were removed and I didn’t know about it, and people have just been quiet for other reasons (of which there are certainly plenty).
I think point 3 is overstated but not fundamentally inaccurate. My understanding was that a group of senior leadership offered to buy Sam out; he declined, and he bought them out instead. My further understanding is that his negotiating position was far stronger than it should have been due to his having sole legal ownership (which I was told he obtained in a way I think it is more than fair to describe as backstabbing). I wasn’t personally involved in those negotiations, in part because I clashed with Sam probably worse than anyone else at the company, which likely would have derailed them.
That brings me to my next point, which is that I definitely had one of the most negative opinions of Sam and his actions at the time, and it’s reasonable for people to downweight my take on all of this accordingly. That said, I do feel that my perspective has been clearly vindicated by current events.
I want to push back very strongly against the idea that this was primarily about Sam’s appetite for risk. Yes, he has an absurd appetite for risk, but what’s more important is what kinds of risks he has an appetite for. He consistently displayed a flagrant disregard for legal structures and safeguards, a belief that rules do not apply to him, and an inclination to see the ends as justifying the means. At this stage it’s clear that what happened at FTX was fraud, plain and simple, and his decision to engage in that fraud was entirely in character.
(As a minor note, I can confirm that the “wireless mouse” phrase does validate ftxthrowaway as someone who was there at the time, though of course now that it has been used this way publicly once it will no longer be valid in the future.)
I’m curious if you (or any other “SBF skeptic”) has any opinion regarding whether his character flaws should’ve been apparent to more people outside the organizations he worked at, e.g. on the basis of his public interviews. Or alternatively, were there any red flags in retrospect when you first met him?
I’m asking because so far this thread has discussed the problem in terms of private info not propagating. But I want to understand if the problem could’ve been stopped at the level of public info. If so that suggests that a solution of just getting better at propagating private info may be unsatisfactory—lots of EAs had public info about SBF, but few made a stink.
I’m also interested to hear “SBF skeptic” takes on the extent his character flaws were a result of his involvement in EA. Or maybe something about being raised consequentialist as a kid? Like, if we believe that SBF would’ve been a good person if it weren’t for exposure to consequentialist ideas, that suggests we should do major introspection.
One of the biggest lessons I learned from all of this is that while humans are quite good judges of character in general, we do a lot worse in the presence of sufficient charisma, and in those cases we can’t trust our guts, even when they’re usually right. When I first met SBF, I liked him quite a bit, and I didn’t notice any red flags. Even during the first month or two of working with him, I kind of had blinders on and made excuses for things that in retrospect I shouldn’t have.
It’s hard for me to say about what people should have been able to detect from his public presence, because I haven’t watched any of his public interviews. I put a fair amount of effort into making sure that news about him (or FTX) didn’t show up in any of my feeds, because when it did I found it pretty triggering.
Personally, I don’t think his character flaws are at all a function of EA. To me, his character seems a lot more like what I hear from friends who work in politics about what some people are like in that domain. Given that his family is very involved in politics, that connection seems plausible to me. This is very uncharitable, but: from my discussions with him he always seemed a lot more interested in power than in doing good, and I always worried that he just saw doing good as an opportunity to gain power. There’s obviously no way for me to have any kind of confidence in that assessment, though, and I don’t think people should put much weight on it.
Thanks for the reply!
In terms of public interviews, I think the most interesting/relevant parts are him expressing willingness to bite consequentialist/utilitarian bullets in a way that’s a bit on the edge of the mainstream Overton window, but I believe would’ve been within the EA Overton window prior to recent events (unsure about now). BTW I got these examples from Marginal Revolution comments/Twitter.
This one seems most relevant—the first question Patrick asks Sam is whether the ends justify the means.
In this interview, search for “So why then should we ever spend a whole lot of money on life extension since we can just replace people pretty cheaply?” and “Should a Benthamite be risk-neutral with regard to social welfare?”
In any case, given that you think people should put hardly any weight on your assessment, it seems to me that as a community we should be doing a fair amount of introspection. Here are some things I’ve been thinking about:
We should update away from “EA exceptionalism” and towards self-doubt. (EDIT: I like this thread about “EA exceptionalism”, though I don’t agree with all the claims.) It sounds like you think more self-doubt would’ve been really helpful for Sam. IMO, self-doubt should increase in proportion to one’s power. (Trying to “more than cancel out” the normal human tendency towards decreased self-doubt as power increases.) This one is tricky, because it seems bad to tell people who already experience Chidi Anagonye-style crippling self-doubt that they should self-doubt even more. But it certainly seems good for our average level of self-doubt to increase, even if self-doubt need not increase in every individual EA. Related: Having the self-awareness to know where you are on the self-doubt spectrum seems like an important and unsolved problem.
I’m also wondering if I should think of “morality” as being two different things: A descriptive account of what I value, and (separately) a prescriptive code of behavior. And then, beyond just endorsing the abstract concept of ethical injunctions, maybe it would be good to take a stab at codifying exactly what they should be. The idea seems a bit under-operationalized, although it’s likely there are relevant blog posts that aren’t coming to my mind. Like, I notice that the EA who’s most associated with the phrase “ethical injunctions” is also the biggest advocate of drastic unilateral action, and I’m not sure how to reconcile that (not trying to throw shade—genuinely unsure). EDIT: This is a great tweet; related.
Institutional safeguards are also looking better, but I was already very in favor of those and puzzled by lack of EA interest, so I can’t say it was a huge update for me personally.
EA self-doubt has always seemed weirdly compartmentalized to me. Even the humblest of people in the movement is often happy to dismiss considered viewpoints by highly intelligent people on the grounds that it doesn’t satisfy EA principles. This includes me—I think we are sometimes right to do so, but probably do so far too much nonetheless.
Seems plausible, I think it would be good to have a dedicated “translator” who tries to understand & steelman views that are less mainstream in EA.
Wasn’t sure about the relevance of that link?
(from phone) That was an example of an EA being highly upvoted for dismissing multiple extremely smart and well-meaning people’s life’s work as ‘really flimsy and incredibly speculative’, because he wasn’t satisfied that they could justify their work within a framework that the EA movement has decided is one of the only ones worth contemplating. As if that framework itself isn’t incredibly speculative (and therefore, if you reject any of its many suppositions, really flimsy).
Thanks!
I’m not sure I share your view of that post. Some quotes from it:
...
...
...
I don’t think any of these observations hinge on the EA framework strongly? Like, do we have reason to believe Andrew Carnegie spent a significant amount trying to figure out if libraries were a great donation target by his own lights, as opposed to according to the EA framework?
The thing that annoyed me about that post was that at the time it was written, it seemed to me that the EA movement was also fairly guilty of this! (It was written before the criticism/red teaming contest.)
I’m not familiar enough with the case of Andrew Carnegie to comment and I agree on the point of political tribalism. The other two are what bother me.
On the professor, the problem is there explicitly: you omitted a key line ‘I tried asking for his opinion on existential threats’, which is a strongly EA-identifying approach, and one which many people feel is too simplistic. Eg see Gideon Futurman’s EAGx Rotterdam talk when it’s up—he argues the way EAs think about x-risk is far too simplified, focusing on single-event narratives, ignoring countless possible trajectories that could end in extinction or similar any one of which is vanishingly unlikely, but which collectively we should take much more seriously. Whether or not one agrees with this view, it seems to me to be one a smart person could reasonably hold, and shows that by asking someone ‘his opinion on existential threats, and which specific scenarios these space settlements would help with’, you’re pigeonholing them into EA-aligned specific-single-event way of thinking.
As for Elon Musk, I think the same problem is there implicitly: he’s written a paper called ‘Making Humans a Multiplanetary Species’, spoken extensively on the subject and spent his life thinking that it’s important, and while you could reasonably disagree with his arguments, I don’t see any grounds for dismissing them as ‘really flimsy and incredibly speculative’ without engagement, unless your reason for doing so is ‘there exists a pool of important research which contradicts them and which I think is correct’. There are certainly plenty of other smart people who think as he does, some of them EAs (though maybe that doesn’t contribute to my original complaint). Since there’s a very clear mathematical argument that it’s harder to kill all of a more widespread and numerous civilisation, to say that the case is ‘really flimsy’, you basically need to assume the EA-aligned narrative that AI is highly likely to kill us all.
Thanks!
What’s interesting about this interview clip though is that he seems to explicitly endorse a set of principles that directly contradict the actions he took!
Well that’s the thing—it seems likely he didn’t see his actions as contradicting those principles. Suggesting that they’re actually a dangerous set of principles to endorse, even if they sound reasonable. That’s what’s really got me thinking.
I wonder if part of the problem is a consistent failure of imagination on the part of humans to see how our designs might fail. Kind of like how an amateur chess player devotes a lot more thought to how they could win than how their opponent could win. So if the principles Sam endorsed are at all recoverable, maybe they could be recovered via a process like “before violating common-sense ethics for the sake of utility, go down a massive checklist searching for reasons why this could be a mistake, including external observers in the decision if possible”.
My guess is standard motivated reasoning explains why he thought he wasn’t in violation of his stated principles.
Question, but why do you think the principles were dangerous, exactly? I am confused about the danger you state.
I think your first paragraph provides a potential answer to your second :-)
There’s an implicit “Sam fell prey to motivated reasoning, but I wouldn’t do that” in your comment, which itself seems like motivated reasoning :-)
(At least, it seems like motivated reasoning in the absence of a strong story for Sam being different from the rest of us. That’s why I’m so interested in what people like nbouscal have to say.)
So you think there’s too much danger of cutting yourself and everyone else via motivated reasoning, à la Dan Luu’s “Normalization of Deviance”, and the principles have little room for errors in implementing them, is that right?
Here’s a link to it:
https://danluu.com/wat/
And a quote:
I’m not sure what you mean by “the principles have little room for errors in implementing them”.
That quote seems scarily plausible.
EDIT: Relevant Twitter thread
Specifically, I was saying that wrong results would come up if you failed in one of the steps of reasoning, and there’s no self-correction mechanism for the kind of bad reasoning Sam Bankman-Fried was doing.
Can I ask the obvious question of whether you made money by shorting FTT? You were both one of the most anti-FTX and most still involved in crypto trading, so I suspect if you didn’t then no one did.
Ps: apologies for burning the “wireless mouse” Commons. If others want to make throwaways, feel free to dm me what that is referring to and I will publicly comment my verification.
Also no non-disparagement clause in my agreement. FWIW I was one of the people who negotiated the severance stuff after the 2018 blowup, and I feel fairly confident that that holds for everyone. (But my memory is crappy, so that’s mostly because I trust the FB post about what was negotiated more than you do.)
DM’d you.
Confirming this account made an Alameda research reference in my DMs.
… I assume you realise that that narrows you down to one of two people (given it’s safe to assume Nishad is not currently spending his time on the EA Forum)
I do think I was probably just remembering incorrectly about this to be honest, I looked back through things from then and it looks like there was a lot of back-and-forth about the inclusion of an NDA (among other clauses), so it seems very plausible that it was just removed entirely during that negotiation (aside from the one in the IP agreement).
yep, not too worried about this. thanks for flagging :)
Here are some questions/content that might be interesting to discuss, if you’re interested?
I’ve been on leave from work due to severe burnout for the last couple months (and still am), and was intentionally avoiding seeing anything about SBF/FTX outside of work until recent events made that basically impossible. So no, I didn’t personally trade on any of this at all.
Fair. Sorry to hear that, I hope you can go back to ignoring the situation soon!
Can you answer two questions related to the source of SBF’s early business wealth?
Were the Kimchi arb returns real?
As you know, the “Kimchi premium” was the difference in BTC price between Korea (Japan?) and the rest of the world.
The narrative is that SBF arbed this price difference to make many millions and create his early wealth.
The Sequoia puff piece makes this cute story:
After SBF’s fall, Twitter speculation says this is dubious.
This is because the cause of the Kimchi premium was strict legal capital controls, and the liquidity was orders of magnitude too small to produce the wealth in SBF later used. At best, SBF was actively breaking laws by this trade. The amount of money he could make may have been too small to justify the narratives around his early success.
Do you have any comments on the above?
Jaan Tallinn investment
Tallinn later ended up funding SBF with $50M.
What would you say to the speculation that it was this funding, and not the Kimchi arb, that really launched SBF’s career?
If this is mostly true, the takeaway is that there’s little cleverness or competency being expressed here?
It seems like power, money, and access led to SBF’s success. This theme would fit with SBF’s later behavior, with bluffing and overawing spending.
That foundation seems hollow and bad, maybe contagious to the things that SBF created or touched.
This could be useful in some way? It suggests an angle that EA or EA PR could take to counter this narrative.
I don’t mind sharing a bit about this. SBF desperately wanted to do the Korea arb, and we spent quite a bit of time coming up with any number of outlandish tactics that might enable us to do so, but we were never able to actually figure it out. The capital controls worked. The best we could do was predict which direction the premium would go and trade into KRW and then back out of it accordingly.
Japan was different. We were able to get a Japanese entity set up, and we did successfully trade on the Japan arb. As far as I know we didn’t break any laws in doing so, but I wasn’t directly involved in the operational side of it. My recollection is that we made something like 10-30 million dollars (~90% CI) off of that arb in total, but I’m not at all confident on the exact amount.
Is that what created his early wealth, though? Not really. Before we all left, pretty much all of that profit had been lost to a series of bad trades and mismanagement of assets. Examples included: some number of millions lost to a large directional bet on ETH (that Sam made directly counter to the predictions of our best event trader); a few million more on a large OTC trade in some illiquid shitcoin that crashed long before we could get out of it; another couple million in a series of XRP transfers that nobody noticed had never arrived, and that had fallen in value by something like 90% when they finally showed up much later; and various other random small things, like a junior trader accidentally transferring half a million dollars of USDT to a BTC address (or something like that) due to a complete lack of safeguards on transfers. Not to mention absurd levels of expenditure, e.g. an AWS bill that at one point reached about a quarter million dollars per month.
My knowledge of the story ends when we left, and my recollection is that at that point the Japan arb had long been closed and most of our profits from it had been squandered. I don’t know how he achieved his later success, but if I were to guess, I’d say it probably has a lot more to do with setting up FTX, launching highly predatory instruments like leveraged ETF tokens on it, and doing similarly shady stuff to the things that brought it all crashing down, but during a bull market that made all of those risks pay off. That’s entirely guesswork though, I have no inside knowledge about anything that happened after April 2018.
Note: All of this is purely from memory, I have not cross-checked it with anyone else who was there, and it could be substantially wrong in the details. It has been a long time, and I spent most of that time trying to forget all about it. I’m sharing this because I believe the broad strokes of it to be accurate, but please do not update too strongly from it, nor quote it without mentioning this disclaimer.
What about the GBTC arb trade? Did Alameda get into that during your time there?
Good question, but tbh I just don’t remember the answer.
Thank you for sharing, I can understand why you might be feeling burnt out!! I’ve been in a workplace environment that reminds me of this, and especially if you care about the people and projects there...it’s painful.
Here are some questions/content that might be interesting to discuss?
(You might not want to, given your fatigue though.)
Thanks for sharing this nbouscal. How many people did you tell about this at the time?
Personally, I remember telling at least a handful of people at the time that Sam belonged in a jail cell, but I expect that people thought I was being hyperbolic (which was entirely fair, I was traumatised and was probably communicating in a way that signalled unreliability).
I was told that conversations were had with people in leadership roles in EA. I wasn’t part of those conversations and don’t know the full details of what was discussed or with whom.
It would be awesome for the names of senior people who knew to be made public, plus the exact nature of what they were told and their response or lack thereof.
I think this could be a nice-to-have, but really, I think it’s too much to ask:
“For every senior EA, we want a long list of exactly each thing they knew about SBF.”
This would probably be a massive pain, and much of the key information will be confidential (for example, informants who want to remain anonymous).
My guess is that there were a bunch of flags that were more apparent than nbouscal’s stories.
I do think we should have really useful summaries of the key results. If there were a few people who were complicit or highly negligent, then that should be reported, and appropriate actions taken.
I strongly believe it is highly relevant to know who knew what, and when, so that these people are held to account. I don’t think this is too much to ask, nor does it have to be arduous in the way you described, i.e. getting every name with maximum fidelity. I see so many claims that “key EA members knew what was going on” and never any sort of name associated with them.
I agree this is really important and would really, really want it to be figured out, and key actions taken. I think I’m less focused on all of the information of such a discovery being public, as opposed to much of it being summarized a bit.
A summary of sorts is being compiled here:
What would you suggest might be appropriate actions for complicity or negligence?
I don’t feel like I’m in a good place to give a good answer. First, I haven’t really thought about it nor am I an expert in these sorts of matters.
Second, I’m like several layers deep in funding structures that start with these people. It’s sort of like asking me to publicly write what I love/hate, objectively, about my boss.
I think I could say that I’d expect appropriate actions to look a lot like they do with top companies (mainly ones without lots of known management integrity problems). At these companies, I believe that when some officials are investigated for potential issues, often they’re given no punishment, and sometimes they’re fired. It really depends on the details of the findings.
I think it is very important to understand what was known about SBF’s behaviour during the initial Alameda breakup, and for this to be publicly discussed and to understand if any of this disaster was predictable beforehand. I have recently spoken to someone involved who told me that SBF was not just cavalier, but unethical and violated commonsense ethical norms. We really need to understand whether this was known beforehand, and if so learn some very hard lessons.
It is important to distinguish different types of risk-taking here: (1) risk-taking that promises high payoffs but with a high chance of the bet falling to zero, without violating commonsense ethical norms; (2) risk-taking in the sense of being willing to risk it all by secretly violating ethical norms to get more money. One flaw in SBF’s thinking seemed to be that risk-neutral altruists should take big risks because the returns can only fall to zero. In fact, the returns can go negative, e.g. all the people he has stiffed, and all of the damage he has done to EA.
Are you in a position to be more specific about what SBF did that this is referring to?
no
In 2021 I tried asking about SBF among what I suppose you could call “EA leadership”, trying to distinguish whether to put SBF into the column of “keeps compacts but compacts very carefully” versus “un-Lawful oathbreaker”, based on having heard that early Alameda was a hard breakup. I did not get a neatly itemized list resembling this one on either points 1 or 2, just heard back basically “yeah early Alameda was a hard breakup and the ones who left think they got screwed” (but not that there’d been a compact that got broken) (and definitely not that they’d had poor capital controls), and I tentatively put SBF into column 1. If “EA leadership” had common knowledge of what you list under items 1 or 2, they didn’t tell me about it when I asked. I suppose in principle that I could’ve expended some of my limited time and stamina to go and inquire directly among the breakup victims looking for one who hadn’t signed an NDA, but that’s a judgment only perfect hindsight could demand.
My own guess is that you are mischaracterizing what EA leadership knew.
Huh, I am surprised that no one responded to you on this. I wonder whether I was part of that conversation, and if so, I would be interested in digging into what went wrong.
I definitely would have put Sam into the “un-lawful oathbreaker” category and have warned many people I have been working with that Sam has a reputation for dishonesty and that we should limit our engagement with him (and more broadly I have been complaining about an erosion of honesty norms among EA leadership to many of the current leadership, in which I often brought up Sam as one of the sources of my concern directly).
I definitely had many conversations with people in “EA leadership” (which is not an amazingly well-defined category) where people told me that I should not trust him. To be clear, nobody I talked to expected wide-scale fraud, and I don’t think this included literally everyone, but almost everyone I talked to told me that I should assume that Sam lies substantially more than population-level baseline (while also being substantially more strategic about his lying than almost everyone else).
I do want to add to this that in addition to Sam having a reputation for dishonesty, he also had a reputation for being vindictive, and almost everyone who told me about their concerns about Sam did so while seeming quite visibly afraid of retribution from Sam if they were to be identified as the source of the reputation, and I was never given details without also being asked for confidentiality.
Can you give some context on why Lightcone accepted a FTX Future Fund grant (a) given your view of his trustworthiness?
So far I have been running on the policy that I will accept money from people who seem immoral to me, and indeed I preferred getting money from Sam instead of Open Philanthropy or other EA funders because I thought this would leave the other funders with more marginal resources that could be used to better ends (Edit: I also separately thought that FTX Foundation money would come with more freedom for Lightcone to pursue its aims independently, which I do think was a major consideration I don’t want to elide).
To be clear, I think there is a reasonable case to be made for the other end of this tradeoff, but I currently still believe that it’s OK for EAs to take money from people whose values or virtues they think are bad (and that indeed this is often better than taking money from the people who share your values and virtues, as long as it’s openly and willingly given). I think the actual tradeoffs are messy, and indeed I ended up encouraging us to go with a different funder for a loan arrangement for a property purchase we ended up making, since that kind of long-term relationship seemed much worse to me, and I was more worried about that entangling us more with FTX.
To be again clear, I was not suspecting large-scale fraud. My sense was that Sam was working in a shady industry while being pretty dishonest in the way the crypto industry often is, but was primarily making money by causing tons of people to speculate in crypto while also being really good at trading against them and eating their lunch, which I think is like, not a great thing to do, but was ultimately within the law and was following reasonable deontological constraints in my opinion.
I am seriously considering giving back a bunch of the money we received. I also for pretty similar reasons think that giving that money back does definitely not entail giving that money back to FTX right now, who maybe are just staging a hack on their own servers (or are being hacked) and should not be trusted with more resources. I expect this will instead require some kind of more sophisticated mechanism of actually helping the people who lost funds (conditional on the bankruptcy proceedings not doing clawbacks, which I think is reasonable given that I think clawbacks are unlikely).
I think it personally might have been better to have a policy of refusing funds from institutions that I think are bad and have power in my social ecosystem, so that I feel more comfortable speaking out against them. I personally prefer the policy of taking their money while also having a policy of just speaking out against them anyways (Dylan Matthews did this in one of his Future Perfect articles in a way I find quite admirable), but I do recognize this is setting myself up for a lot of trust in my own future integrity, and it might be better to tie myself to a mast here.
I think the key damage caused by people in my reference class receiving funds from FTX was that they felt less comfortable criticizing FTX, and indeed in retrospect I was more hesitant than I wish I had been to speak out against Sam and FTX for this reason, and am currently spending a lot of my time trying to understand how to update and learn from this. It’s pretty plausible to me that I fucked up pretty badly here, though I currently think my fuckup was not being more public about my concerns, and not the part where I accepted Sam’s money. I also think confidentiality concerns were a major problem here, and it’s pretty plausible another component of my fuckup was to agree to too much confidentiality in a way that limited what I could say here.
In situations like this, it might be a good habit to state reservations publicly at the same time you receive the grant? Then your accepting the grant isn’t a signal that you endorse the grantmaker, and you can be less worried about your relationship with the grantmaker damaging your future ability to be candid. Either they stop giving you money, or they continue giving you money even though you badmouthed them (which makes it more clear that you have impunity to do so again in the future).
Interesting idea.
But it seems unrealistic to expect a recipient of a grant, upon receiving it, to publicly announce ethical and legal reservations about the grant-giver… and then for the grant-giver to be OK with that, and to follow through on providing the grant funding.
‘Biting the hand that feeds you’ doesn’t typically result in good outcomes.
Sure, though I think altruistic grantmakers should want their grantees to criticize them (because an altruistic grantmaker should care more about getting useful and actionable criticism than about looking good in the moment), and I think a lot of EA grantmakers walk the walk in that respect. E.g., MIRI has written tons of stuff publicly criticizing Open Phil, even though Open Phil is by far our largest all-time funder; and I don’t think this has reduced our probability of getting future Open Phil funding.
One advantage of the norm I proposed is that it can help make this a more normal and expected practice, and (for that reason) less risky than it currently is.
And since everything’s happening in public, grantmakers can accumulate track records. If you keep defunding people when they criticize you (even when the criticisms seem good and the grant recipients seem worthy, as far as others can tell), others can notice this fact and dock the grantmaker reputational points. (Which should matter to grantmakers who are optimizing this hard for their reputation in the first place.)
Fair points. I guess if any community can create a norm where it’s OK for grant receivers to criticize grantmakers, it’s the EA community.
I was really just pointing out that creating and maintaining such an open, radically honest, self-reflective, criticism-welcoming culture is very much an uphill struggle, given human nature.
That’s very surprising!!
Do you know if anybody attempted to propagate this information to any of the EAs who were promoting SBF publicly? (If so, do you know if they succeeded in conveying that information to them?)
And just to check, did any of the people who warn you privately promote SBF/FTX publicly?
I ask because it seems weird for a lot of EAs to be passing around warnings about SBF being untrustworthy while a lot of (other?) EAs are promoting him publicly; I very much hope these sets were disjoint, but also it’s weird for them to be so disjoint, I would have expected better information flow.
Yep, I was and continue to be confused about this. I did tell a bunch of people that I thought promoting SBF publicly was bad, and e.g. sent a number of messages when some news article that people were promoting (or maybe the 80k interview?) was saying that “Sam sleeps on a bean bag” and “Sam drives a Corolla” when I was quite confident that they knew that Sam was living in one of the most expensive and lavish properties in the Bahamas and was definitely not living a very frugal lifestyle. This was just at the same time as the Carrick stuff was happening, and I would likely have reached out to more people if I hadn’t spent a lot of my social energy pushing back on the Carrick stuff at the time (e.g. ASB’s piece on Carrick’s character).
Overall, I did not message many people, and I personally did not speak out very publicly about my broader concerns. I also think a lot of that promotion was happening in a part of the EA ecosystem I interface much less with (80k, UK EAs, Will, etc.), and I’ve had historically somewhat tense relationships to that part of the ecosystem, so I did not have many opportunities to express my concerns.
It would be useful to say whether any of the people you told would be considered ‘EA leadership’; and if so, who.
How can both of these be true:
You (and others, if all of the accounts I’ve been reading about are true) told EA leadership about a deep mistrust of SBF.
EA decided to hold up and promote SBF as a paragon of EA values and one of the few prominent faces in the EA community.
If both of those are true, how many logical possibilities are there?
The accounts that people told EA leadership are false.
The accounts are true and EA leadership didn’t take these accounts seriously.
EA leadership took the accounts seriously, but still proceeded to market SBF.
I find them all super implausible so I don’t know what to think!
My understanding is that the answer is basically 2.
I’d love to share more details but I haven’t gotten consent from the person who told me about those conversations yet, and even if I were willing to share without consent I’m not confident enough of my recollection of the details I was told about those conversations when they happened to pass that recollection along. I hope to be able to say more soon.
EDIT: I’ve gotten a response and that person would prefer me not to go into more specifics currently, so I’m going to respect that. I do understand the frustration with all of the vagueness. I’m very hopeful that the EA leaders who were told about all of this will voluntarily come forward about that fact in the coming days. If they don’t, I can promise that they will be publicly named eventually.
My guess is different parts of leadership. I don’t think many of the people I talked to promoted SBF a lot. E.g. see my earlier paragraph on a lot of this promotion being done by the more UK focused branches that I talk much less to.
That could very well be, and there are a lot of moving parts. That is why I think it is important for people who supposedly warned leadership to say who was told and what they were told. If we are going to unravel this, all of that feels like necessary information.
The people who are staying quiet about who they told have carefully considered reasons for doing so, and I’d encourage people to try to respect that, even if it’s hard to understand from outside.
My hope is that the information will be made public from the other side. EA leaders who were told details about the events at early Alameda know exactly who they are, and they can volunteer that information at any time. It will be made public eventually one way or another.
I respect that people who aren’t saying what they know have carefully considered reasons for doing so.
I am not confident it will come from the other side as it hasn’t to date and there is no incentive to do so.
May I ask why you believe it will be made public eventually? I truly hope that is the case.
The incentives for them to do so include 1) modelling healthy transparency norms, 2) avoiding looking bad when it comes out anyway, 3) just generally doing the right thing.
I personally commit to making my knowledge about it public within a year. (I could probably commit to a shorter time frame than that, that’s just what I’m sure I’m happy to commit to having given it only a moment’s thought.)
What do you find super implausible about 2?
If insiders were making serious accusations about his character to EA leadership and they went on to promote him that would be weird to me. Especially if many people did it which is what has been claimed. Of course I have no idea who “leadership” is because no one is being specific.
To be fair sometimes people make accusations that are incorrect? Your decision procedure does need to allow for the possibility of not taking a given accusation seriously. I don’t know who knew what and how reasonable a conclusion this was for any given person given their state of knowledge, in this case, but also people do get this wrong sometimes, this doesn’t seem implausible to me.
My decision procedure does allow for that, and I have lots of uncertainties, but many insiders claim to have warned people in positions of power about this, and Sam got actively promoted anyway. If multiple people with intimate knowledge of someone came to you and told you that they thought person X was of bad character, you wouldn’t have to believe them hook, line, and sinker to be judicious about promoting that person.
Maybe this is the most plausible of the three and I shouldn’t have called it super implausible, but it doesn’t seem very plausible to me, especially from people in a movement that takes risks more seriously than any other that I know.
I found this comment annoying enough to read that I felt compelled to give a simplified version:
This removes some nuance, but maybe adds some clarity.
Edit: Reworded, see original here.
I did not say that it’d be good if somebody was a ruthless negotiator.
If you’re going to paraphrase somebody, please be more careful to paraphrase things they actually said, by dereferencing, and not add implications you thought they meant.
I didn’t say I was paraphrasing you, I said I was giving a simplified version. I also pointed out the sentence was not in the original.
Adding in an unflattering sentiment that was not said or clearly implied in the original is not “simplifying”.
Ok, fine, reworded. You can still find the original here.
I consider this credible.
It suggests that my categorization of “EA leadership” was probably too broad and that fewer people knew the details of the situation than I believed.
That means there is a question of how many people knew. I am confident that Nick Beckstead and Will MacAskill knew about the broken agreement and other problems at Alameda. I am confident they are not the only ones that knew.
Why are you confident of that? In general, I think there’s just less time and competence and careful checking to go around, in this world, than people would want to believe. This isn’t Hieronym’s To The Stars or the partially Hieronym-inspired world of dath ilan.
Huge thanks for spelling out the specific allegations about SBF’s behavior in early Alameda; for the past couple days I’d been seeing a lot of “there was known sketchy stuff at Alameda in 2017-18” and it was kind of frustrating how hard it was to get any information about what is actually alleged to have happened, so I really appreciate this clear point-by-point summary.
Same here, this is really helping me understand the (at least perceived) narrative flow of events
I decided to speak about it because if true, it would imply bad things about how EA hasn’t remembered the last time things went wrong.
In many senses, this is EA’s first adversarial interaction, where we can’t rely on internal norms of always cooperating anymore.
After the involved EAs consult with their lawyers, they may find a receptive audience to tell their stories at the Department of Justice or another federal agency. I would be shocked if the NDAs were effective as against cooperating with a federal investigation. If the quoted description is true, it seems relevant to the defense SBF seems to be trying to set up.
I knew about Sam’s bad character early on, and honestly I’m confused about what people would have expected me to do.
I should have told people that Sam has a bad character and can’t be trusted, that FTX is risky? Well, I did those things, and as far as I can tell, that has made the current situation less bad than it would have been otherwise (yes, it could have been worse!). In hindsight I should have done more of this though.
Should I have told the authorities that Sam might be committing fraud? All I had were vague suspicions about his character and hints that he might be dishonest, but no convincing evidence or specific worries about fraud. (Add jurisdictional problems, concerns about the competence of regulators, etc)
Should I not have “covered up” the early scandal? Well, EAs didn’t, and I think Kerry’s claim is wrong.
Should I have publicly spread concerns about SBF’s character? That borders on slander. Also, I was concerned that SBF would permanently hate me after that (you might say I’m a coward, but hey, try it yourself).
Should I have had SBF banned from EA? Personally, I’m all for a tough stance, but the community is usually against complete bans of bad actors, so it just wasn’t feasible. (E.g., if I were in charge, Jacy and Kerry would be banned, but many wouldn’t like that.)
SBF was powerful and influential. EA didn’t really have power over him.
What could have been done better? I am sincerely curious to get suggestions.
My current, extremely tentative, sense of the situation is not that individuals who were aware of some level of dishonesty and shadiness were not open enough about it. I think individuals acted in pretty reasonable ways, and I heard a good amount of rumors.
I think the error likely happened at two other junctions:
Some part of EA leadership ended up endorsing SBF very publicly and very strongly despite having very likely heard about the concerns, and without following up on them (In my model of the world Will fucked up really hard here)
We didn’t have any good system for aggregating rumors and related information, and we didn’t have anyone who was willing to just make a public post about the rumors (I think this would have been a scary and heroic thing to do, I am personally ashamed that I didn’t do it, but I don’t think it’s something that we should expect the average person to do)
I think if we had some kind of e.g. EA newspaper where people try to actively investigate various things that seem concerning, then I think this would have helped a bunch. This kind of thing could even be circulated privately, though a public version seems also good.
I separately also think that we should just much more deeply embed the virtues of honesty and truth-seeking into the core idea of EA. I think it shouldn’t be possible to be seen as “an effective EA” without also being actually good at truth-seeking and helping other people orient to the world.
I think when a billionaire shows up with billions of dollars, or an entrepreneur builds a great company, it should just be a strict requirement that they are also honest and good at truth-seeking in order to gain status and reputation within the community. In the same way, no matter how much money you make, people are not going to think you are a “good scientist” without your actually having discovered new verifiable regularities in the natural world (you might be a “great supporter of science”, but that doesn’t usually mean you would get invited to all the scientific conferences, or get the Nobel Prize, and people would have a healthy understanding of your relationship to the rest of the scientific ecosystem).
Agree with much of what you say here. (Though I don’t think we currently have strong enough evidence to single out specific EA leaders as being especially responsible for the recent tragic events; at least I don’t think I personally have that kind of information.)
As a substitute, or complement, to an investigative EA newspaper, what do you think about an “EA rumours” prediction market?[1] Some attractive features of such a market:
It turns private information held by individual people with privileged access to sources into public information available to the entire EA community, increasing the likelihood that the information will reach those for whom it is most valuable and actionable.
It potentially reduces community drama by turning “hot” debates influenced by tribal allegiances and virtue signaling into “cold” assignments of probability and assessments of evidence.
It makes rumours more accurate, by incentivizing users to estimate their probability correctly.
It makes false rumours less damaging to their targets, by explicitly associating them with a low probability.
I think this market would need judicious moderation to function well and avoid being abused. But overall it seems to me like it might be an idea worth exploring further, and of the sort that could make future events in the same reference class as the FTX debacle less likely to happen.
By ‘market’, I do not necessarily mean a real-money prediction market like Polymarket or PredictIt; it could also be a play-money market like Manifold Markets or a forecasting platform like Metaculus.
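The incentive claim in point 3 rests on proper scoring rules: a forecaster maximizes their expected score only by reporting their true belief, so honest probability estimates are the equilibrium. A toy sketch of the logarithmic scoring rule (illustrative only; the function names are mine, and this is not any platform’s actual payout mechanism):

```python
import math

def log_score(reported_p: float, outcome: bool) -> float:
    """Logarithmic scoring rule: score ln(p) if the event occurs,
    ln(1 - p) if it does not. This rule is strictly proper."""
    return math.log(reported_p if outcome else 1.0 - reported_p)

def expected_score(true_p: float, reported_p: float) -> float:
    """Expected score of reporting `reported_p` when the forecaster's
    genuine belief is `true_p`."""
    return (true_p * log_score(reported_p, True)
            + (1.0 - true_p) * log_score(reported_p, False))

# A forecaster who believes a rumour is 70% likely does best by
# reporting exactly 70%, rather than hedging down or inflating up:
honest = expected_score(0.7, 0.7)
hedged = expected_score(0.7, 0.5)
inflated = expected_score(0.7, 0.9)
assert honest > hedged and honest > inflated
```

This is why a well-run market would make rumours both more accurate (point 3) and less damaging when false (point 4): the same mechanism that rewards honest reporting also pins an explicit, public probability on each claim.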
Yeah, I feel excited about something in this space. Generally I feel like prediction markets have a lot of good things going for them in situations like this, though I do worry that they will somehow just end up gamed when the stakes are high. Like, my guess is Sam could have likely moved the probability of a market here a lot, either with money, or by encouraging other people to move it.
Should EA people just be way more aggressive about spreading the word (within the community, either publicly or privately) about suspicions that particular people in the community have bad character?
(not saying that this is an original suggestion, you basically mention this in your thoughts on what you could have done differently)
Confirming that this account DM’d me with information indicating that they worked at Alameda.
I met Sam in February and wrote a profile of him for Bloomberg. In hindsight, there are a lot of red flags that everyone missed, myself included. Of course, it all looked different when he was on top.
At the time, I tried to research Alameda’s early years and the dispute that led to the big breakup, but didn’t get anywhere. I’m now working on a book—on the off chance that any insiders from Alameda or FTX read this, please DM me here or on Twitter.
I’m unclear how to update on this, but note that Kerry Vaughan was at CEA for 4 years, and a managing director there for one year before, as I understand it, being let go under mysterious circumstances. He’s now the program manager at a known cult that the EA movement has actively distanced itself from. So while his comments are interesting, I wouldn’t treat him as a particularly credible source, and he may have his own axe to grind.
All this conversation about Leverage and Kerry’s motives and character misses the point that he’s talking about events that have little to nothing to do with him. He’s saying that there was a blowup at Alameda early on reflecting badly on SBF that lots of EA leaders knew about and turned a blind eye to. This can be investigated and confirmed or denied without delving into conversations about Leverage or Kerry that are besides the point at hand.
To the extent that Kerry’s allegation involves his own judgment of Sam’s actions as bad or shady, I think it matters that there’s reason not to trust Kerry’s judgment or possibly motives in sharing the information. However we should definitely try to find out what actually happened and determine whether it was truly predictive of worse behavior down the line.
Agreed! IMO it’s good for people to be aware that Kerry has an axe to grind; but the thing to do with that information is to look into the matter further.
I commented above that I think Kerry’s comment is incorrect, so I feel obligated to state that I have no reason to think this is the result of bias. I am inclined to think he’s doing the best he can in an information-scarce environment.
I retract this comment. Kerry has continued repeating the same claim on Twitter without noting that there’s disagreement about its truth. This does not seem like unbiased behavior.
The claim on Twitter is different.
Can you clarify what you think is unfair? Happy to issue a correction.
https://twitter.com/KerryLVaughan/status/1591508739236188160?t=qL-dGKXar3b7EQ4EHs597Q&s=19
Edit: if anyone else wants to take a stab at explaining why the Twitter thread is unfair given this thread feel free. Would want to issue a correction sooner rather than later.
-1 on this comment. In particular, being at CEA for 4 years seems like something which makes criticism more plausible. And it’s not surprising that EA has distanced itself from groups critical of us (while I have some concerns about Leverage, I think there are a bunch of ways that they’ve been treated unfairly).
Hard disagree on Leverage. They’ve absorbed a tonne of philanthropic funding over the years to produce nothing but pseudoscience and multiple allegations of emotional abuse.
I’m not saying Kerry wouldn’t know about this stuff—I think he likely does. I’m saying a) that he was one of the ‘top leaders’ he refers to, so had ample chance to do something about this himself, b) he has a track record of questionable integrity, and c) he has potential motive to undermine the people he’s criticising.
I think this comment is a pretty clear example of one way in which Leverage has been treated unfairly, which is that people lump “not very productive” and “abusive” into a single criticism. The latter is much more serious, but the former is much easier to quickly verify, and so the former ends up lending credibility to the latter even though I personally think we probably have too few groups taking philanthropic funding to do crazy research that may end up looking like pseudoscience.
To be very clear, I’m not claiming that Leverage was not an abusive environment, and I take the allegations you mention very seriously. I’ve just also seen people piling onto Leverage in not-very-careful ways that I’m not very happy about.
I’m not a fan of Leverage, but I agree with Richard here. I think Kerry is better modeled as “normal philosophy-friendly EA” with the modifications “less conflict-averse than the average EA” and “mad at EA (for plenty of good reasons and also plenty of bad reasons, IMO) and therefore pretty axe-grindy”. If you model him with a schema closer to “crazy cultist” than to “bitter ex-EA”, I expect you to make worse predictions.
I’m guessing I have a lower opinion of Leverage than you based on your tone, but +1 on Kerry being at CEA for 4 years making it more important to pay serious attention to what he has to say even if it ultimately doesn’t check out. We need to be very careful to minimize tribalism hurting our epistemics.
For what it’s worth, these different considerations can be true at the same time:
“He may have his own axe to grind.”: that’s probably true, given that he’s been fired by CEA.
“Kerry being at CEA for four years makes it more important to pay serious attention to what he has to say even if it ultimately doesn’t check out.”: it also seems like he may have particularly useful information and contexts.
“He’s now the program manager at a known cult that the EA movement has actively distanced itself from”: it does seem like Leverage is shady and doesn’t have a very good culture and epistemics, which doesn’t reflect greatly on Kerry.
So I would personally be inclined to pay close attention to his criticisms of CEA. At the same time, I would need more “positive” contexts from others to be able to trust what he says.
I agree that these can technically all be true at the same time, but I think the tone/vibe of comments is very important in addition to what they literally say, and the vibe of Arepo’s comment was too tribalistic.
I’d also guess re: (3) that I have less trust in CEA’s epistemics necessarily being that much better than Leverage’s, though I’m uncertain here (edited to add: tbc my best guess is it’s better, but I’m not sure what my prior should be, in a “he said / she said” situation, on who’s telling the truth. My guess is closer to 50⁄50 than 95⁄5 in log odds at least).
I agree that the tone was too tribalistic, but the content is correct.
(Seems a bit like a side-topic, but you can read more about Leverage on this EA Forum post and, even more importantly, in the comments. I hope that’s useful for you! The comments definitely changed my views—negatively—about the utility of Leverage’s outputs and some cultural issues.)
I’ve read it. I’d guess we have similar views on Leverage, but different views on CEA. I think it’s very easy for well-intentioned, generally reasonable people’s epistemics to be corrupted via tribalism, motivated reasoning, etc.
But as I said above I’m unsure.
Edited to add: Either way, might be a distraction to debate this sort of thing further. I’d guess that we both agree in practice that the allegations should be taken seriously and investigated carefully, ideally by independent parties.
Mea culpa for not being clear enough. I don’t think handwavey statements from someone whose credibility I doubt have much evidential value, but I strongly think CEA’s epistemics and involvement should be investigated—possibly including Vaughan’s.
I find it bleakly humorous to be interpreted as tribalistically defending CEA when I’ve written gradually more public criticisms of them and their lack of focus. And honestly, while I don’t understand thinking they’re as bad as Leverage, I think they’ve historically probably been a counterfactual negative for the movement, and I don’t have a good sense of whether things have improved.
Thanks for clarifying. To be clear, I didn’t say I thought they were as bad as Leverage. I said “I have less trust in CEA’s epistemics to necessarily be that much better than Leverage’s , though I’m uncertain here”
I thought CEA started the movement?
As I understood it, CEA was originally just a legal entity to save 80k and GWWC from having to both individually get charitable status, though GWWC had been around in some form since maybe 2007ish, and 80k for a year or two (and GiveWell, which had started about the same time as CEA and arguably has as good a claim to having started the movement, had no formal association with any of these orgs). The emerging movement might have taken its name from the new org, or maybe just started using the phrase in response to the poll result.
At some stage, IIRC, CEA started taking on more responsibilities and distanced itself, and eventually split from its child orgs. From that point on, I feel like they have generally not been well run: the staff seem to have been hired for enthusiasm and allegiance to the cause, and sometimes apparent nepotism (they seem to have hired internally for quite a few positions), rather than competence. As far as I can tell, staff have neither a carrot to motivate them nor a stick: I know of only two examples of CEA employees being pushed out, one of whom was CEO, and those were, as I understand it, for behaviour that was unambiguously termination-worthy (CEA may not want to disclose details of specific individuals being let go, and if that has happened those individuals might understandably not want to talk about it either, but the org doesn’t eg have a clear policy of expecting high standards). Meanwhile they run multiple programs whose nature is constantly changing and which lack meaningful outcome metrics, meaning both that it’s hard to gauge how well they do what they do, and hard for alternative organisations to offer them high-fidelity competition.
(excuse all the self-citations—I don’t know anyone else who’s been publicly writing anything highly critical of CEA since the funds criticism, though I’ve had a number of conversations with people who’re also cynical about the org. I’ve been fairly reluctant to go on record with these views myself, and suspect I’m harming myself in expectation by doing so, since I’m interested in doing future EA-funded work)
To be clear a) I don’t think all CEA staff have been bad—some I think highly of, the vast majority I have no specific opinion of, just that the overall org has generally functioned ineffectively, b) most of the specific actions I have in mind date back at least a couple of years, before Max Dalton became ED, and c) I had a recent conversation with him and raised these concerns, which he seemed somewhat open to. So it may be that they’re in a much better state under him. But I’m also wary of under-new-management-itis, under which a nonprofit org can’t be criticised for a couple of years after a change—which potentially puts the org beyond reproach if it cycles EDs often enough.
But good on you for being brave enough to publicly criticise your funding sources (“I have received EA funding in multiple capacities, and feel quite constrained in my ability to criticise CEA publicly”) or people you like (“I like everyone I’ve interacted with from CEA”).
I really like this comment, and I agree with it.
++++
Why would being dismissed from CEA and being part of Leverage mean he has an axe to grind regarding SBF?
Regarding ‘top EA leaders’ knowing about it (see further in the thread).
If you’d like to investigate whether Leverage was a cult, there are now several additional sources of information available.
One source is Cathleen’s post, which is detailed, extensive, and written directly by a former employee. A board member also ran their own investigation into what Leverage could have done better between 2012 and 2019, interviewing former members of Leverage staff.
You can also view Leverage’s website to learn more about what we’ve been working on post-2019. The fact that I work at Leverage is best explained by my having a very different view of the organization’s history and current work than you do.
In any case, I don’t see why disagreements about the value of Leverage’s current or past work have anything to do with the specific claims I’ve made about what happened at Alameda in 2018.
I’d also recommend reading Zoe Curzi’s essay about her own (traumatic) experience at Leverage, the publishing of which was publicly supported by Leverage founder Geoff Anders.
At least judging by Geoff Anders’ friends list on Facebook, some very prominent EAs haven’t really.
Not everybody maintains their FB friends list carefully or at all, really. I’ve seen way nuttier people on friends lists.
I heard the same claim, from a different source: that SBF did something unethical at Alameda Research prior to founding FTX, that some EAs had left Alameda saying that SBF was unethical and no one should work with him, and that there were privately circulated warnings to this effect. (The person I heard this from hasn’t spoken publicly about it yet as far as I know. They are someone with no previous or current involvement with FTX or Alameda Research, who I think is reporting honestly and is well positioned to have heard such things.)
(EDIT: others along the rumor-path via which I heard this have now spoken on this thread, in greater detail than I have; so this comment is a duplicate report and should not be counted.)
+ 1 for way more investigations and background checks for major donations, megaprojects, and association with EA.
I think this suggests that the EA orgs which had close ties to FTX and SBF should have investigations performed by outside parties. If this is true it makes the situation even worse than it appears at the moment since it could have been prevented by having higher ethical standards.
Edit: I now see one part of this has already been stated, but I still think the part about some EAs still being ok with Geoff Anders is important.
Just want to point out that this is incredibly ironic, as Vaughan is part of Leverage Research, a known cult, which has still not been entirely disavowed by some prominent EAs, and whose head Geoff Anders still appears to be on good terms with some of them.
Thank you so much for your time, dedication, and efforts.
It seems like, for many of us, difficult times lie ahead. Let us not forget the power of our community—a community of brilliant, kind-hearted, caring people trying to do good better together.
This is a crisis—but we have the ability to overcome it.
I was really looking forward to maybe implementing impact markets in collaboration with Future Fund plus FTX proper if you and they wanted, and feel numb with regard to this shocking turn. I really believed FTX had some shot at ‘being the best financial hub in the world’, SBF ‘becoming a trillionaire’, and this longshot notion I had of impact certificates being integrated into the exchange, funding billions of dollars of EA causes through it in the best world. This felt so cool and far out to imagine. I woke up two days ago and this dream is now ash. I have spiritually entangled myself with this disaster.
I don’t want to be the first commenter to be that guy, and forgive me if I’m poking a wound, but when you have the time and slack, can you please explain to us to what extent you grilled FTX leadership about the integrity of the sources of the money they were giving you? Surely you had an inside-view model of how risky this was if it blew up? If it’s true that SBF has a history of acting unethically (rumors, I don’t know), isn’t that something to have thoroughly questioned and spoken against? If there was anyone non-FTX who could have pressured them to act ethically, it would have been you. As an outsider it felt like y’all were in a close, highly trusting relationship with each other going back a decade.
In any case, thank you for what you’ve done.
Sven Rone should’ve won a prize in the Red Teaming contest[1]:
The Effective Altruism movement is not above conflicts of interest
[published Sep 1st 2022]
(Note that this issue was commented on here a month ago.) This whole thing is now starting to look like the classic “ends justify the means” criticism of Utilitarianism writ large :(
although it looks like it wasn’t actually entered? Edit: it was, but not posted as a top-level post on the EA Forum (see comments below).
I wrote that comment from over a month ago. And I actually followed it up with a more scathing comment that got downvoted a lot, and that I deleted out of a bit of cowardice, I suppose. But here’s the text:
Consider this bit from the origin story of FTX:
Binance, you say? This Binance?
Or consider FTX’s hiring of Daniel Friedberg as a chief compliance officer. This article claims that he had been involved in previous cheating/fraud at other businesses:
Then there are all the recent examples of FTX trying to buy up other crypto players. For example, in July, FTX signed a deal to buy BlockFi for up to $240 million, and to give it $400 million in revolving credit. BlockFi is most famous for having agreed to pay $100 million in penalties for its securities fraud. It’s not clear why FTX would want to spend this amount of money on buying a fraudulent firm.
Just last week, there was a story that FTX is thinking about buying Celsius, another fraudulent firm.
Another story from July had the remarkable claim that SBF is even thinking of putting his own cash into bailing out other crypto firms:
Why are FTX, and perhaps SBF himself, putting so much money into buying up other people’s scams? I would hope it’s because they intend to reform the crypto industry and put it on more of a moral footing, although that would reduce the market size by an order of magnitude or two.
***
At least, SBF and FTX ought to provide more transparency into where exactly all the wealth came from, and what (if anything) they are actively doing to prevent crypto frauds/scams. And one might argue that FTX Foundation has a particular moral duty to establish a fund to help out all of the people whose lives were ruined by falling for crypto’s many Ponzi schemes and other assorted scams.
Wow, I didn’t see it at the time but this was really well written and documented. I’m sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.
Moreover, Sven Rone is a pseudonym. The author used a pen name as their views were unpopular and underappreciated at the time; they likely feared career repercussions if they went public with them. It’s unfortunate that this was the environment they found themselves in.
Seconded. This whole saga has really made me sour on some already mixed views on EA epistemics.
I find myself having a mixed opinion of how EA responded. It wasn’t outright terrible epistemics, unlike most of the world reacting to a similar event, but there were real failures of epistemics.
On the other hand, there were also successes in EA epistemics.
I think the post ended up around 0 or 1 karma, is that right? (I mean before people changed their voting based on hindsight!) I think it’s important to distinguish between “got downvoted a lot but ended up at neutral karma” vs. “got downvoted double digits into no longer being visible.” The former reflects somewhat poorly on EA, the latter very poorly.
I think the most informative signal here is not the exact karma that comment ended up with but rather that the author ended up deleting it despite believing that what he was saying was potentially important and not receiving any reasons to think he was wrong. A culture where people feel compelled to silence themselves is worse than one where some comments are wrongly downvoted without much consequence to the author.
I think the most important data points here are any comments that were left, and the net karma of the comment. People have in fact been known to overreact, or react in idiosyncratic ways, in forum discussions; I haven’t seen the thread in question, but if the responses were friendly and the comment got ~0 net karma, then that would be a large update for me.
I definitely took “that got downvoted a lot” to mean that the comment got a lot of net downvotes, not just that people offset its upvotes to keep it around a neutral 0. I think it’s pretty bad to describe vote patterns that misleadingly, if it was hovering around 0.
Good point. :S
Are we talking about this deleted comment? It has 6 overall karma in 9 votes, and −3 agreement in 5 votes.
No, I was talking about Stuart Buck’s initial comment in that same thread, which is still up and now has high upvotes.
But Stuart also mentioned he deleted a second comment after it got downvoted too, so that must be the one you’re linking to. (We also don’t know if some people retroactively upvoted the deleted comment, it’s at +6 now but could’ve been negative at the time of deletion. I think I’m still able to vote on the deleted comment – though maybe that’s just because I had already voted on it before it got deleted [strong upvote and weak agree vote]).
Either way it seems highly unlikely that the deleted comment I linked to had lots of negative votes. It had a few disagree votes but very likely not more than 1-2 karma downvotes.
I like how Hacker News hides comment scores. Seems to me that seeing a comment’s score before reading it makes it harder to form an independent impression.
I fairly frequently find myself thinking something like: “this comment seems fine/interesting and yet it’s got a bunch of downvotes; the downvoters must know something I don’t, so I shouldn’t upvote”. If others also reason this way, the net effect is herd behavior? What if I only saw a comment’s score after voting/opting not to vote?
Maybe quadratic voting could help, by encouraging everyone to focus their voting on self-perceived areas of expertise? Commenters should be trying to impress a narrow & sophisticated audience instead of a broad & shallow one?
EDIT: Another thought: If there was a way I could see my recent votes, I could go back and reflect on them to ensure I’m voting in a consistent manner across threads
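For what it’s worth, the quadratic voting idea above can be sketched in a few lines. This is a hypothetical toy model (a per-user credit budget where casting n votes on one item costs n² credits), not a description of how any forum actually works; all names here are made up for illustration:

```python
# Toy sketch of quadratic voting for forum karma (hypothetical).
# Casting n votes on a single item costs n^2 credits, so spreading
# votes widely is cheap but piling many onto one item is expensive,
# which pushes voters toward their areas of strongest conviction.

def vote_cost(n_votes: int) -> int:
    """Credits spent to cast n votes on a single item."""
    return n_votes ** 2

def max_votes(budget: int) -> int:
    """Most votes a user can place on one item within a credit budget."""
    n = 0
    while vote_cost(n + 1) <= budget:
        n += 1
    return n

# With 100 credits you can cast at most 10 votes on one item (10^2 = 100),
# or 1 vote each on 100 different items.
assert vote_cost(3) == 9
assert max_votes(100) == 10
```

The design intent is that the marginal cost of each additional vote rises, so a broad-but-shallow audience can’t cheaply swamp a narrow-but-expert one.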
I think that what FTX is accused of in this comment is legitimately way more something where a charitable recipient is not morally obliged to demand this level of careful checking of everything, because our civilization is just not actually able to support this level of competency pornography.
Stealing your customers’ funds is a very different matter from “some of the people who use our services are criminals”. Why, MIRI has in the past accepted matching funds from Google, which I’m sure profits a whole lot off criminals using their services! And some of those criminals may even be bad people!
But you can’t, actually, run a post-agricultural civilization on the principle of everybody who engages in every transaction checking out the full moral character of everybody who transacts with them. If you did try to build clever infrastructure for that, its first use on the margin would be by the right to hunt down sex workers (as already occurs with Visa) and by the left to hunt down people who said bad things on Twitter.
In a hunter-gatherer tribe it maybe makes sense to demand that people not transact with that bad guy over there; it scales as far as it needs to scale. And MIRI would not take money from somebody what we knew had stolen in charity’s name. But to figure it all out—if you want to read about Civilizations that have the basic infrastructure and competence to run those kind of traces, go read science fiction; here on Earth you’ve got VC firms trying to run six months of due diligence and then they invest in FTX.
IMO the amount of diligence someone ought to perform on their counterparties’ character is different in different circumstances. “This person is one of hundreds of people I transact with every week” carries different obligations than “This person is one of the four big donors who fund my organization” carries different obligations than “This person has been my only source of income for the past two years”. Different EAs were at different points along this spectrum.
I generally agree with you, but in this case SBF
1) hired a high-level person with a long history of fraud (you don’t see Asana or Stripe doing this); and
2) described his own business as a Ponzi scheme (see https://www.bloomberg.com/news/articles/2022-04-25/sam-bankman-fried-described-yield-farming-and-left-matt-levine-stunned ).
It was obvious that he was up to no good.
“But Sequoia”—I’m not convinced that they did any due diligence, judging by what they published on their own website: https://twitter.com/zebulgar/status/1590394857474109441 It’s not the only occasion when it looks to me like “top” VC firms leapt into investments out of FOMO, with zero effort at due diligence: https://medium.com/swlh/why-are-investors-eager-to-lose-money-on-health-tech-f8c678ccc417
Quoting from the article you linked about the involvement of Daniel Friedberg, FTX’s Chief Regulatory Officer, in a previous scandal:
Sorry, did this text get heavily downvoted? If so, we should be ashamed.
“They tell you to do your thing but they don’t mean it. They don’t want you to do your own thing, not unless it happens to be their thing, too. It’s a laugh, Goober, a fake. Don’t disturb the universe, Goober, no matter what the posters say.”—Robert Cormier, the Chocolate War
Yes, we should. People hesitate to bring issues up with authorities or communities for fear of being punished. As groups collectivize and become increasingly memetically homogeneous, a shift that coincides with the solidification of power/influence/financial structures and hierarchies, dissent of any form becomes less and less tolerated. It becomes safer and easier to criticize EA as an outsider than as a member who simultaneously wants to grow in EA, be well received by potential EA organization employers, and rise up the oft-unstated hierarchies that developed as EA blossomed.
Until this debacle, SBF was lionized beyond comparison by the major community organizations. And moreover, he was closely associated with EA giants via the foundation/future fund and other projects. He had excellent PR presence due to the constant EA affiliated media attention. He was 80k’s paragon of earning to give.
That’s not to say figures like him were untouchable (nothing in EA is untouchable, fortunately), but criticizing the most popular embodiment of success would result in online backlash at best or, at worst, damage to the critic’s career capital. In a situation similar to Stuart’s, that is precisely why Sven’s essay on conflicts of interest in EA was anonymous. It’s also why it didn’t even get an honorable mention in the essay competition. Even if the criticisms themselves were valid and justified, the PR risks of promoting dissent made sure it wasn’t given a prize. Demands for greater transparency or accountability from EA vanguards in the wake of recent developments may also be viewed instinctively or intuitively as threats to harmony.
Not everyone enjoys having beloved paragons and prophets criticized. Not everyone likes having their faith or trust in institutions shattered, let alone challenged. Not everyone maintains a cynical, skeptical attitude towards those in positions of authority. Newcomers certainly aren’t prepared for such developments during EA onboarding, perhaps because events like these were never expected to come up in the first place.
It remains a problem the community has faced since day one, though much of it is attributable to hierarchical and tribalistic human psychology rather than to EA itself. While EA has better epistemics and remains more open to criticism than the average ideological movement, harshness or cynical sternness used to be (in EA’s early days) much more commonplace and welcomed than it is now. As EA has grown and become more of a community, intra-group harmonic cohesion has become increasingly prized and promoted. Those who elicit controversy through intellectual dissent (rather than conforming) are more likely to be downvoted.
Spouting off this stuff isn’t productive on my end. I don’t have a solution, but there need to be better ways to increase receptiveness to contrarian and unpopular takes, and to minimize unjustified repercussions for dissenters. The harshest or most skeptical among EAs should not be dismissed as impediments to progress. I have faith that EA has the capacity to ameliorate this.
By the way, it looks like the comment is now heavily upvoted. I’ve seen this happen quite a few times, so it seems like it might be good to withhold judgment about the net votes for a day or two. But of course it could be that it became highly upvoted because of reactions like this, so I’m not sure what the best course of action is.
I don’t follow crypto or its space, but it seems like a bad habit or norm to downvote pieces that criticize EA’s questionable relationship to crypto.
Cryptocurrency doesn’t actually work and is only there for scams and fraud. Not surprising that FTX collapsed.
I think you may be getting a lot of disagree-votes because I don’t think crypto was the issue here. People who just have USD sitting in FTX right now lost their money too.
FTX shouldn’t have been risky. It wasn’t a DAO, or based entirely off some token or chain, it was an exchange. It should have just been connecting people who wanted to buy crypto with people who wanted to sell crypto, and taking a fee for doing this. The exchange itself shouldn’t be taking any risk.
The reason seems at least in part to do with leveraged transactions: allowing customers to buy more crypto by supplementing their purchase with a loan. But we’ve let leveraged transactions happen with stock for a hundred years. This looks a lot more like garden-variety financial crime than some problem with crypto.
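To illustrate the margin mechanism with toy numbers (purely illustrative; these figures are not a claim about FTX’s actual terms): with k-times leverage, a price drop of 1/k wipes out the buyer’s collateral entirely, which is why exchanges normally liquidate leveraged positions well before that point.

```python
# Rough arithmetic for a leveraged (margin) purchase.
# Collateral of C at leverage k controls a position of C * k,
# so the buyer's equity moves k times as fast as the price.

def equity_after_move(collateral: float, leverage: float, price_change: float) -> float:
    """Buyer's remaining equity after a fractional price move (e.g. -0.10 = -10%)."""
    position = collateral * leverage
    return collateral + position * price_change

# $1,000 of collateral at 10x leverage controls a $10,000 position;
# a 10% price drop erases the collateral entirely.
assert equity_after_move(1000, 10, -0.10) == 0.0
```

The risk to the exchange arises when prices gap down faster than liquidation can happen, leaving the loan undercollateralized; that is a normal brokerage risk, distinct from using customer deposits as the loan book.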
Here’s a quote from former US Treasury Secretary Larry Summers in a recent Bloomberg interview that backs up some of the claims in this comment:
Sorry for misfiring here, I’ll retract my comment.
The relation to crypto is that the bulk of crypto is poorly regulated. Some of that is solvable—well regulated exchanges should be possible. The extreme volatility also increases the temptation toward fraud. So the fraud risk is higher than in a well-regulated industry.
I’d submit that a well-regulated and managed exchange is going to find it much harder to achieve a stratospheric valuation, and other parts of crypto are harder to regulate well. So some skepticism toward huge crypto-linked donors is warranted.
More crypto regulation is coming, and many crypto protocols have worked hard already to be regulatory-compliant. But regulation won’t be uniform across jurisdictions; there will always be loopholes that allow regulatory arbitrage.
Some exchanges, such as Coinbase and Kraken, are based and regulated within the US, and are subject to much stricter oversight than FTX—which seems to have been deliberately based in Hong Kong and then the Bahamas precisely in order to avoid US regulatory oversight. (Arguably, this should have been a red flag in terms of EA’s relationship with FTX.)
The US, UK, or EU can regulate all they want, but crypto finance is a global business, and there are plenty of less-regulated havens willing to host crypto businesses.
Hopefully crypto investors, traders, and users will become savvier about checking where businesses are operating, and what regulatory scrutiny they’re subject to.
Agreed on that. My point was that it would be a lot harder for an individual to get super-rich quick in a regulated market. No sane regulator is going to allow a regulated party to risk customer assets for the party’s benefit, and few will allow crazy leverage. And the whole thing will require significantly more of a buffer in fiat currency, again limiting any single person’s ability to get megarich.
In short, I think there are few ways for a well-regulated exchange to be stratospherically profitable. So people should not expect the rise of new crypto megadonors who hail from regulated backgrounds.
I would agree with this. Separate from the object-level causes of the current crisis, crypto as an industry has accepted and normalized a lack of accountability that other industries haven’t. And I agree that lack of regulation and high volatility make fraud more likely.
I would want to avoid purely focusing on crypto, because I think the meta-lesson I might take away is less “crypto bad” and more “make sure donors and influential community members are accountable,” whether that be to regulators, independent audits, or otherwise. (And accountable in a real due diligence sense, because it’s easy for that word to just be an applause light.) But yes, skepticism of crypto-linked donors would be justified under this framework.
I have no idea why this comment is no longer endorsed by its author because it’s entirely correct. Not only is crypto a great way to scam people because transactions can’t be reversed & there’s virtually no regulation for most of the space, the fact that it’s so hard to make money in crypto across an entire cycle means that entities have a huge incentive to resort to scamming.
I can tell you why I downvoted it.
False, it works just fine. It’s a token that can’t be duplicated and people can send to each other without any centralized authority.
There are indeed a lot of those, but scams and fraud were very clearly not the intention of its creators. Realistically they were cryptography nerds who wanted to make something cool, or libertarians with overly-idealistic visions of the future.
Clear hindsight bias. This person should have made some money betting against FTX before it collapsed and then I’d take them more seriously.
Basically, the comment is just your standard “cryptocurrency bad” take, without any attempt at justifying their claims or even saying much of anything other than expressing in an inflammatory way that they don’t like cryptocurrency.
“This person should have made some money betting against FTX before it collapsed and then I’d take them more seriously.”
this is naive EMH fundamentalism
not everything can be shorted, not everything can be shorted easily, not everything should be shorted, markets can be manipulated. Especially the crypto market. It both can be the case that people 100% think X is a fraud, and X collapses, and shorting X would have been a losing trade over most timeframes. “Never short” is an oversimplification but honestly not a bad one.
Most of that isn’t even clearly bad, and I find it hard to see good faith here.
Your criticism of Binance amounts to “it’s cryptocurrency”. Everyone knows crypto can be used to facilitate money laundering; this was, for Bitcoin, basically the whole point. Similarly the criticism of Ponzi schemes; there were literally dozens of ICOs for things that were overtly labeled as Ponzis—Ponzicoin was one of the more successful ones, because it had a good name. Many people walked into this with eyes open; many others didn’t, but they were warned, they just didn’t heed the warnings. Should we also refuse to take money from anyone who bets against r/wallstreetbets and Robinhood? Casinos? Anyone who runs a platform for sports bets? Prediction markets? Your logic would condemn them all.
FTX would prefer that the crypto sector stay healthy, and backstopping companies whose schemes were failing serves that goal. That is an entirely sufficient explanation and one with no clear ethical issues or moral hazard.
Even in retrospect, I think this was bad criticism and it was correct to downvote it.
My criticism of Binance was not “it’s cryptocurrency.” My criticism of Binance was that at the very time SBF allied with Binance, it was a “hub for hackers, fraudsters and drug traffickers.” Apparently your defense of SBF is that “everyone knows” crypto is good for little else . . . but perhaps if someone enters a field that is mostly or entirely occupied by criminal activity, that isn’t actually an excuse?
As for backstopping other scams and frauds, that isn’t a way to make sure that the “crypto sector stays healthy” (barring very unusual definitions of the word “healthy”), and in actuality, we’re now seeing evidence that FTX was just trying to extract assets from other companies in a desperate attempt to shore up their own malfeasance and fraud. https://twitter.com/AutismCapital/status/1591569275642589184
Yeah, still not seeing much good faith. You’re still ahead of AutismCapital, though, which is 100% bad faith 100% of the time. If you believe a word it says I have a bridge to sell you.
Is this Sam in disguise? You’re literally the only person in existence who seems to think it was somehow unfair to be suspicious (and correctly so!) of SBF for having hired a chief compliance officer with a long history of fraud, and of his pattern of trying to buy up other people’s frauds/scams.
The only flaw in my earlier comment is that I was too charitable towards SBF in suggesting that there might be some plausible excuse for the multiple red flags I noticed.
Thanks for this! I echo Lizka’s comment about linkposting.
In light of the recent events I’m struggling a bit with taking my hindsight-bias shades off, and while I scored it reasonably highly, I don’t think I can fairly engage with whether it should have received a prize over other entries even if I had the capacity to (let alone speak for other panelists). I do remember including it in the comment mainly because I thought it was a risk that didn’t receive enough attention and was worth highlighting (though I have a pretty limited understanding of the crypto space and had ~0 clue that things would happen in the way they did).
I think it’s worth noting that there has been at least one other post on the forum that engaged with this specifically, but unfortunately didn’t receive much attention. (Edit: another one here)
Ultimately though, I think it’s more important to think about what actionable and constructive steps the EA community can take going forward. I think there are a lot of unanswered questions wrt accountability from EA leaders in terms of due diligence, what was known or could have been suspected prior to Nov 9th this year, and what systems or checks/balances were in place etc that need to be answered, so the community can work out the best next steps to minimise the likelihood of something like this happening again.
I also think there are questions around how these kinds of decisions are made when benefits affect one part of the EA community but the risks are pertinent to all, and how to either diversify these risks, or make decision-making more inclusive of more stakeholders, keeping in mind the best interests of the EA movement as a whole.
This is something I’m considering working on at the moment and will try and push for—do feel free to DM me if you have thoughts, ideas, or information.
(Commenting in personal capacity etc)
Strongly disagree. That criticism is mostly orthogonal to the actual problems that surfaced. Conflicts of interest were not the problem here.
I’d regard incentive to discount highly immoral business practices (e.g. what happened with Alameda in 2018) as stemming from a conflict of interest (i.e. interest 1: promote integrity in EA; interest 2: get lots of money from SBF for EA. These were in conflict!)
Again, that’s orthogonal to the actual problems that surfaced.
I wouldn’t say orthogonal, more upstream. If SBF had been shunned from the community in 2018, would we be in this situation now? Sure, he might still have committed massive fraud with the ends of gaining wealth and influence, but the focus would be on the Democrats, or whatever other group became his main affiliation.
No, you’re thinking about it entirely wrong. If everyone who did something analogous to Alameda 2018 was shunned, there probably wouldn’t be any billionaire EA donors at all. It was probably worse than most startups, but not remarkably worse. It was definitely not a reliable indicator that a fraud or scandal was coming down the road.
Dustin Moskovitz and Jaan Tallinn were already EA ~billionaire donors well before 2018. They haven’t done anything analogous to what SBF/FTX/Alameda did. What examples are you thinking of?
Those two are perfectly good examples. They did. Every successful startup does something approximately that bad, on the way to the top.
It seems it was entered, according to the (second) comment from Bruce here: Winners Red Teaming
Thanks (link to the comment). I think those entries really should’ve been put on the EA Forum as posts to be interacted with (like with the Future Fund AI Worldview Prize[1])
Which I imagine is no longer happening :(
Yeah, I can confirm that we evaluated that submission.
Re: putting them on the Forum — we didn’t have the capacity to do that (and I’m not sure it would have been helpful to do that for all the submissions), but in general, I really encourage people to link-post relevant content to the EA Forum. So, you could link-post this (or similar content in the future).
[I should note that I have low capacity right now and might not reply to this thread. Apologies in advance!]
FTX had received several billion dollars in funding from major investors. One was a province pension fund, so it wasn’t just crypto folks. That generally involves having the investors’ accountants do substantial due diligence on the target firm’s financials. That tells me that either the books were fairly clean at the time of investment or they were cooked in a way that even the due diligence specialists didn’t detect. It’s not clear to me how the Future Fund people, who to my knowledge are not forensic accountants or crypto experts, would have had a better ability to pick up on funny business. So I don’t see why it would be unreasonable for them to have relied on third-party expert vetting.
From what I understand (please correct me if I’m wrong), FTX didn’t have a CFO, its COO was a friend with no experience, and it didn’t have a proper board of directors. Clearly, that flimsy corporate governance would not pass a standard due diligence test.
EDIT: This flow chart of shells nested in shells, like Russian dolls, speaks to why the company’s governance should have been a red flag.
https://i.redd.it/078p4g7m6cz91.jpg
I don’t think a highly branched company structure is a red flag: my understanding is that to operate a financial business legally across many jurisdictions you generally need to have subsidiaries in each jurisdiction. Ex: https://wise.com/help/articles/2974131/what-are-the-wise-group-entities
In the autopsy, the biggest red flag will probably be the lack of appropriate internal controls. One should not be able to move that kind of money without vetting by staff with appropriate background and independence, but no ownership interest. Based on the reported en masse resignation of the bulk of legal and compliance staff, it seems that it was technically possible to transfer billions in customer assets to the CEO’s company without legal/compliance involvement.
I think the class of issues that would make it inappropriate to accept donations is much narrower than the issues that would and should make a public investor (like a province pension fund) decline to invest.
Few private businesses are going to let an outsider come in on a regular basis, conduct a hard look at sensitive internal documents, and potentially publish derogatory information to the public. Even for investors, this kind of stuff is generally done under a heavy NDA and for good reason. That would make it extremely difficult to do this on a regular basis—so any scrutiny would at best catch fraud that existed at the time of scrutiny.
I wouldn’t be very confident in the level of due diligence undertaken by supposedly sophisticated investors:
https://twitter.com/zebulgar/status/1590394857474109441
This just isn’t plausible on reasonable priors. You need to assume that multiple investment firms working in different sectors, whose survival in a highly competitive environment in large part depends on being skilled at scrutinizing a company’s financials, would miss warning signs that should have been apparent to folks with no relevant domain expertise. See also Eliezer’s Twitter thread.
ETA: Alexander:
I would disagree: there are numerous examples, such as Theranos and WeWork, which show that sophisticated investors do not necessarily scrutinize potential investments thoroughly. So I don’t think assuming they do is a good prior. I think this is actually a reason these problems happen, since everyone else assumes that Respectable Company/Person X has scrutinized it.
I am making a comparative, not an absolute, claim: however bad the professionals may be, it is unreasonable to expect outsiders to do better.
I agree with the point that in general one should expect less from “unsophisticated” investors/parties than from sophisticated ones. I do not disagree with that.
I was disagreeing with “This just isn’t plausible on reasonable priors,” which seemed to mean that you disagreed with Stuart’s comment.
But I also don’t think VC scrutiny is necessarily a high bar in general in the absolute sense, and Stuart has posted some warning signs in other comments here, such as the hiring of Friedberg. Then, considering how important FTX and SBF were to the EA community, it could have been investigated more, i.e. the low VC-scrutiny bar could have been surpassed by hiring experts or something similar. To a VC firm this is just another losing bet among the many they expect to make. This is why I don’t think the comparison with VC firms is very apt.
Stuart’s comment was in reply to the claim that “It’s not clear to me how the Future Fund people, who to my knowledge are not forensic accountants or crypto experts, would have had a better ability to pick up on funny business.” I disagreed with Stuart’s comment in the sense that I disputed the reasonableness of expecting unsophisticated outsiders to do better because sophisticated investors sometimes perform poorly. I did not mean to dispute that sophisticated investors sometimes perform poorly; indeed, there’s plenty of evidence of that, including the evidence you provide in your comment.
Yeah that makes sense, I think I overinterpreted your comments.
In retrospect, I think my original comment was insufficiently clear. Anyway, thanks for the dialogue.
And that’s what Sequoia proudly publicly posted themselves
Or the best auditors are inadequate, and overlooked fairly obvious flaws for some reason.
Please feel free to “be that guy” as hard as possible when we are talking about massive financial fraud.
This sounds a bit hindsight-bias-y to me; we know to poke at this specific topic now because we know what happened. SBF claims to not have known himself that this was happening, which I take to mean that either this info was super siloed or buried somehow, or that Sam is lying. (And is relying on few-to-no people knowing the truth, or someone would immediately call him out on the lie.)
The idea that SBF didn’t know what was happening is farcical. You don’t unknowingly loan out $10bn of customer funds, lose it on bad bets, and then try to cover up your insolvency. I think it’s healthy to wait for a clearer picture of what happened before making any summary judgement, but we know enough to say that SBF was not an honest actor.
To be honest, I’m at a point now where I’m putting significant weight on lying. Some evidence here that FTX bailed out Alameda for ~$4B in FTT on Sep 28th. There are the blockchain transactions (disclaimed by SBF at the time), and the resignation of a high-profile figure (President of FTX.US) the day before. (Note that whilst this doesn’t look good, it’s still inconclusive. I’m sure the truth will come out eventually.)
I agree that there’s a lot of hindsight bias here, but I don’t think that tweet tells us much.
My question for Dony is: what questions could we have asked FTX that would have helped? I’m pretty sure I wouldn’t have detected any problems by grilling FTX. Maybe I’d have gotten some suspicions by grilling people who’d previously worked with SBF, but I can’t think of what would have prompted me to do that.
There were IMO some orange flags (such as the connection to the questionable lawyer who also works with Tether), but admittedly it’s difficult to notice such things when there’s an aura of success around someone. I think it isn’t just hindsight, though. I think people need to get a lot better at being cynical, because it’s important. For instance, it was odd how FTX positioned itself as the savior of crypto by proposing to buy out entities like Voyager and BlockFi, and then it comes out that Alameda owes them money. They said they could “pay anytime,” but it still looked weird.
Hope you’re feeling okay Dony.
[on phone] Thank you so much for all of your hard work managing the fund. I really appreciated it and I think that it did a lot of good. I doubt that you could have ever have reasonably expected this outcome so I don’t hold you responsible for it.
Reading this announcement was surprisingly emotional for me. It made me realise how many exceptionally good people who I really admire are going to be deeply impacted by all of this. That’s really sad in addition to all the other stuff to be sad about. I probably don’t have much to offer other than my thoughts and sympathy but please let me know if I can help.
I suppose that I should disclose that I recently received a regrant from FTX which I will abstain from drawing on for the moment. I don’t think that this has much, if any, relevance to my sentiments however.
I would not draw on that grant for quite some time, if ever: you should be worried about clawbacks.
I have no idea under what circumstances clawbacks can happen. If you have good reasons to believe this is plausible, then it seems worth it to write a top level post on it.
https://forum.effectivealtruism.org/posts/BesfLENShzSMeb7Xi/community-support-given-ftx-situation?commentId=y7hEdxGhjsYzpg6p3 OpenPhilantropy expects to put out an explainer about clawbacks tomorrow
The comment below saying this is like Bernie Madoff is not right as far as I can see. This is a different situation, with different facts, including that we have, as yet, no idea what those facts are! Your situation will also be individual: taking the funds as a limited company is different from taking them individually, for example, most likely with different effects. It is also entirely unknown what is happening. Nothing has been made clear officially, no one knows what’s going on, and, importantly, you had nothing to do with any of the conduct that is being potentially alleged (not yet actually alleged by any authority).
I’m not giving legal advice here. I’m just saying that staying calm is the right response, and that googling Bernie Madoff (as suggested below) most likely won’t be of any help.
Google exists, use it, don’t be lazy. If you search “bernie madoff restitution” that should help.
I commend you on your moral leadership and I join everyone else in the comments in expressing gratitude for the tremendous good you’ve done so far. However, I’m curious about your decision to resign. I get the moral justification, but surely there are many grantees with many questions who’d be able to get better answers were you still within Future Fund. Something as simple as access to documents or previous emails would enable you to better support grantees who are likely in significant distress. Why did you see it as imperative to resign effective immediately? Why not at the very least see out your notice period?
How does it take moral leadership to distance yourself from and condemn massive fraud? Even entirely selfish actors would do the same.
I’m curious about this as well. Does leaving immediately not impede the chances of getting a better (I’d never dare say “full”) picture of what went down? Additionally, in terms of accountability, I guess now we’ll never know or have records of (from emails etc.) who knew what and when.
I don’t think staying on would add to what the insolvency trustee, regulatory authorities, and likely criminal prosecutors will uncover. The court has already appointed a liquidation trustee whose mission is preserving assets and does not include working with EA. It’s unclear to me whether the trustee is in control of the FTX Foundation now, but the statement did say related entities. The FTX principals are doubtless preoccupied and are presumably attuned enough to legal exposure not to be having unnecessary conversations.
Hey team—thank you for all the work you did. The Future Fund has been tremendously inspiring to see. I’ll reach out to you about how we (myself or Protocol Labs) might be able to help.
Pooling the expertise of the Future Fund team and Protocol Labs would be amazing! <3
Thanks very much for posting this update!
My main question re the Future Fund at the moment is: why does it seem like there weren’t any ring-fenced funds under the legal ownership of the Future Fund or the FTX Foundation? Are there any? Were there any when it was founded last year (i.e. presumably when FTX/Alameda was still solvent)? If not, why not? Did this not raise suspicions amongst any of you?

I can imagine maybe SBF saying something like “the max-EV thing to do is keeping all the funds in the for-profit companies to maximise their growth”, and you going along with it because you trusted him (or you just independently agreed and didn’t put any significant weight on FTX/Alameda collapsing, or even just becoming less rich). Obviously an error in hindsight. Or maybe you kept asking about getting (more) ring-fenced funds and kept getting fobbed off? That should’ve raised alarm bells if so!

Sorry if this is a bit ranty and speculative, or too soon, or too accusatory, but I’m grasping for answers here. I’m grateful for everything you’ve done for the world and EA in your careers, but can’t help feeling that you might’ve messed up a bit here.
I asked some further questions in this direction here.
https://www.nytimes.com/2022/11/13/business/ftx-effective-altruism.html
Thanks for linking. That should’ve raised alarm bells, in hindsight. Could he not at least have donated illiquid assets to the Foundation, for them to liquidate as they saw fit (and put the Foundation under independent control)? Although I guess that still might not have helped much in this case, with FTT and FTX stock collapsing.
I think this (the fact that there is no endowment) was (or at least should have been) pretty well-known in the EA community from the point in time that the FTX Future Fund started to pay grants, as these came from all kinds of sources, but not from an endowed foundation. And it obviously would have been known to the people working for FTX Foundation from when they started working there.
(And I would guess one reason that it didn’t raise more alarm bells for lots of people in the EA community that learned about this, is probably that they put high trust in the people working for FTX Foundation.)
How much did the Future Fund actually pay out? The website lists $160 million in committed grants.
I agree this would be very useful information. In theory, the FTX Future Fund team should know this information but they probably are not allowed to share it.
Of course, someone could try to collect this information by contacting all named FTX Future Fund grantees and it might be worth the effort to try to do this. (Though it’s unclear who might be best suited to do that, given that they’d have to be trusted enough by all grantees for them to share their individual details with them.) Maybe the largest recipients (I think these are CEA and Longview) could start by stating how much they received.
Made this into a post: Why didn’t the FTX Foundation secure its bag?
Can anyone find the original source for the “interview last month”? Clicking that link from the link above takes me to https://www.nytimes.com/2022/10/08/business/effective-altruism-elon-musk.html (a) which doesn’t contain the quote.
Please let us know if there is anything we at GoodX can do to help. Our main project is to build an impact marketplace, but ultimately we want to get resources to where they are needed (as efficiently as possible).
(E.g., it wouldn’t be my first time running an emergency fundraiser to bail out customers of a failed venture.)
Strikes me as…premature? We’ll have a lot more clarity in the coming days, and resigning + questioning the ethics at FTX when we still fundamentally don’t know what happened doesn’t seem particularly productive.
If FTX just took risks and lost, this will look very dumb in hindsight. And if there turn out to be lots of unethical calls, we’ll have more than enough time to criticize them all to our hearts’ content. But at least we’ll have the facts.
Looking dumb is an acceptable risk. If the team prematurely resigned and there is still usable money . . . the usable money is presumably locked in the FTX Foundation and in DAFs, it is not lost.
Premature send, ETA: As far as “questioning the ethics at FTX,” it would be very easy for FTX to have denied raiding customer funds if they didn’t do it as reported. It’s appropriate to draw the obvious inference that they did, and that alone is more than enough to “question[] the ethics at FTX” which is a pretty mild response to the news in my book.
The PR attention is at its height this week, the risk of “looking dumb” (which I think is very unlikely) is outweighed by the need to engage in damage control. No one will be listening if EA waits a few weeks to start distancing itself....
From The Snowball, dealing with Warren Buffett’s son’s stint as a director and PR person for ADM:
The facts are plenty clear (with respect to the type of criminal activity taking place, if not the specifics or quantum) if you do some digging on Twitter. Crypto-forensics experts have been having a field day, and SBF himself has, surprisingly, continued to dig his grave deeper.
I would highly, highly recommend that people just wait up to 72 hours for more information, rather than digging through Twitter or Reddit threads.
Edit: This is not to imply that I have secret information—just that this is unfolding very quickly and I expect to learn a lot more in the coming days.
Why? CoinDesk’s leak, which set off the death spiral, is clear enough. Multiple investors that SBF tried to get bail-out funds from have told the WSJ and FT that SBF admitted to loaning out customer funds to Alameda. Binance pulled out of the deal for a reason. There is plenty of data online about FTX’s movements on the blockchain. And, of course, there’s the obvious fact that SBF is now very publicly looking for $8bn of funding to cover FTX’s liabilities.
Feels like you are implying you have secret info, but it just seems extremely unlikely to me that this was anything other than huge mismanagement of customer funds against their wishes.
What odds are you willing to bet that we will see it differently in 72 hours?
I don’t think the bet suggestions (not just from you—there were a bunch in others’ comments on your own post) are helping make the situation any less tense.
Edit: I also think the interpretation of “implying to have secret information” rather than “trying to de-escalate” is not really grounded, and results in your comment being combative in my eyes.
I think bets with real stakes can be a good de-escalation procedure! It’s easy to fire increasingly heated claims back and forth while there’s no concrete consequences, but when there’s money on the line you have to back off and figure out what you actually believe, and then also once the bet is made there is less incentive to keep arguing while you wait for resolution.
Didn’t mean to imply secret info, edited the comment above.
That said, seeing most of their legal and compliance teams quit gives me much more serious reservations about illegal or unethical behavior.
Edit: I think I retract this second part—I don’t know if everyone’s quitting now that they can’t pay salaries, or just the legal/compliance teams.
I’ve made this into a post on the forum, because I’m afraid it’ll get buried in the comments here. Please comment on the forum post instead.
https://forum.effectivealtruism.org/posts/9YodZj6J6iv3xua4f/another-ftx-post-suggestions-for-change
I suggested that we would have trouble with FTX and funding around 6 months ago.
It was quite obvious that this would happen—although the specific details with Alameda were not obvious. Stuart Buck was the only one who took me seriously at the time.
Below are some suggestions for change.
1. The new “support” button is great, but I think the EA Forum should have a way to *sort* by controversiality, and the forum algorithm should occasionally (some ϵ% of the time) punt controversial posts back up to the front page. If you’re like me, you read the forum sorted by Magic (New and Upvoted), but this promotes herd mentality. The red-teaming and self-criticism are excellent, but if the only way we aggregate how “good” red-teaming is is by upvotes, that is flawed. Perhaps the best way to know that criticism has touched a nerve is to compute a fraction: how many members of the community disagree vs how many agree. (Or, even better, if you are in an organization, use a weighted fraction, where you put lower weight on people in the organization who are in positions of power; obviously difficult to implement in practice.)
2. More of you should consider anonymous posts. This is EA forum. I cannot believe that some of you delete your posts simply because it ends up being downvoted. Especially if you’re working higher up in an EA org, you ought to be actively voicing your dissent and helping to monitor EA.
For example, this is not good:
What makes EA, *EA*, what makes EA antifragile, is its ruthless transparency. If we are self-censoring because we have *already concluded something is super effective*, then there is no point in EA. Go do your own thing with your own money. Become Bill Gates. But don’t associate with EA.
3. Finances should be partially anonymized. If an EA org receives money above a certain threshold from an individual contribution, we should be transparent in saying that we will reject that money if it is not donated anonymously. You may protest that this would decrease the number of donations by rich billionaires. But look at it this way: if they donate to EA, it’s because they believe that EA can spend the money better. Thus, they should be willing to donate anonymously, so as not to affect how EA spends it. If they don’t donate to EA, then they can establish a different philanthropic organization and hire EA-adjacent staff, making for more competition.
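The weighted-fraction idea in suggestion 1 could be sketched roughly as follows. This is a minimal illustration, not a proposal for the actual forum codebase: the vote-tuple shape, the 0.5 power-weight, and the ϵ resurfacing probability are all assumptions made up for the example.

```python
import random

def controversiality(votes):
    """votes: iterable of (voter_id, agrees, in_power) tuples.
    Returns a score in [0, 1]; 1.0 means the community is evenly split."""
    def weight(in_power):
        # Down-weight votes from people in positions of power (0.5 is illustrative).
        return 0.5 if in_power else 1.0
    agree = sum(weight(p) for _, a, p in votes if a)
    disagree = sum(weight(p) for _, a, p in votes if not a)
    total = agree + disagree
    if total == 0:
        return 0.0
    frac = disagree / total
    # Score peaks at 1.0 when the weighted disagree fraction is exactly 0.5.
    return 1.0 - abs(frac - 0.5) * 2.0

def maybe_resurface(posts, epsilon=0.05, rng=random):
    """With probability epsilon, return the most controversial post to punt
    back to the front page; otherwise return None."""
    if posts and rng.random() < epsilon:
        return max(posts, key=lambda post: controversiality(post["votes"]))
    return None
```

A post with one weighted agree and one weighted disagree scores 1.0; a post everyone agrees with scores 0.0, so sorting by this score surfaces exactly the posts where the community is split.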
Being honest, I do genuinely think that climate change is less important than runaway AI, primarily because of both option value issues and the stakes of the problem. One is a big problem that could hurt or kill millions, while AI could kill billions.
But I’m concerned that they couldn’t simply state why they believe AI is more important than climate change rather than do this over-complicated scheme.
Disagree, this would make transparency worse without providing much benefit.
Disagree here because I don’t want to see an EA forum that values controversial posts.
Hi, thanks for replying! I’ve made this into an EA forum post, instead because I’m afraid it’ll get buried in the comments here. https://forum.effectivealtruism.org/posts/9YodZj6J6iv3xua4f/another-ftx-post-suggestions-for-change
Question just to double-check: are posts no longer going to be evaluated for the AI Worldview Prize? Given that is, that the FTX Future team has resigned.
I think it would be good if others stepped in to help see it through (perhaps offering smaller prizes), given how critical the answers are to determining EA resource allocation. Have asked Holden re OpenPhil fulfilling this role.
Why do you think it’s any more important than the FTX Fund’s other obligations? If there’s to be a settlement matching partial assets to all of the fund’s liabilities, it should be done in an open and fair way. Maybe the assets are 0, in which case that becomes moot. My own view is that there are many other projects of equal or greater merit with funding commitments from the FTX Fund.
That’s reasonable. I guess from my perspective, I think the top EA grantmakers need persuading that p(doom|AGI) is significantly greater than 35%. If OpenPhil already think this, then that’s great, but if they don’t (and their probabilites are similar to the Future Fund’s), then the Worldview prize is very important. Even if your probabilities are the same, or much lower, it’s still very high Value of Information imo.
In the survey I did last year, four Open Phil staff respectively gave probability 0.5, 0.5, 0.35, and 0.06 to “the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended”.
That’s just four people, and isn’t necessarily representative of the rest of longtermist Open Phil, but it at least shows that “higher than 35%” isn’t an unrepresented view there.
Interesting, thanks. What about short timelines? (p(AGI by 2043) in Future Fund Worldview Prize terms)
Ajeya Cotra’s median guess is that AGI is 18 years away; the last time I talked to a MIRI person, their median guess was 14 years. So the Cotra and MIRI camps seem super close to me in timelines (though you can find plenty of individuals whose median year is not in the 2036-2040 range).
If you look at (e.g.) animal welfare EAs vs. AI risk EAs, I expect a much larger gap in timeline beliefs.
One could also argue for prioritizing funding for work that has already been done over work that has been approved but not yet done. If someone was going to receive a grant to do certain work and has had it pulled, that is unfair and a loss to them . . . but it’s not as bad (or as damaging to the community / future incentives) as denying people payment for work they have already done.
How this logic translates to a prize program is murky. But unless you believe that the prize’s existence did not cause people to work more (i.e., that the prize program was completely ineffective), its cancellation would mean people are not going to be paid for work already performed.
Of course, it might be possible to honor the commitment made for that work in some fashion that doesn’t involve awarding full prizes.
Potential Help for FF Grantees. I work at a major philanthropic organization, Stand Together, on technology and innovation related efforts. I was a big fan of Future Fund’s ambition and methods, even where I didn’t share your priors.
At Stand Together, we work on a wide range of issues, all seeking to break the barriers that prevent individuals from reaching their true potential. On technology, we think technological innovation has been the primary driver of widespread human prosperity and we are looking to promote both a culture that embraces innovation rather than fears it and a regulatory environment that enables it.
If you are a Future Fund grantee interested in alternative funding and any of the above seems to line up with your work, please reach out: nchilson@standtogether.org.
And best of luck to everyone.
Wishing much strength to everyone affected by this. Let’s support each other and get through this together.
As a non-EA, reading this thread, on balance, makes me really happy. You guys just have some good old-fashioned cleansing to do and you’ll be fine.
FWIW, everyone who’s had any dealings with the Alameda crew knew that they were the worst kind of trash—we just thought that meant they have so much money that surely they don’t need to steal ours.
cheers.
It seems like there are quite a lot of people/orgs who made plans based on promised money that now seems unlikely to arrive. Is there a lesson that can be learned about how to reduce risk in grant awarding e.g. by waiting until funds are securely in the foundation’s hands? Or is there no way to avoid this risk given potential clawbacks, even in cases of bankruptcy that don’t involve any fraud?
Thank you for your good work over the last months, and thank you for your commitment to integrity in these hard times. I’m sure this must also be hard for you on a personal level, so I hope you’re able to find consolation in all the good that will be created from the projects you helped off the ground, and that you still find a home in the EA community.
I trust you guys to decide that this is the right time to resign, but I do hope that as a community we are able to hold the value of our friendships together with the importance of holding people who made mistakes to account, without either one negating the other. We don’t yet know what kind of ethical errors Sam made, but the larger those mistakes are, the more important it is that we offer friendship of a kind that is compatible with holding people to account.
In his post announcing the newfound wealth of the EA movement stemming from FTX, Will included this argument for why charitable enterprises are more dangerous than for-profit companies:
At the time I remarked at how wrongheaded this seemed to me. Of course for-profit companies can do a large amount of harm! In fact, because for-profit companies have the ability to use their profits to increase their scale, they have the potential to do immense harm.
Hopefully, the FTX fallout makes the original point I was trying to make abundantly clear and encourages some deeper reflection in this community about how the “earning” part of earning to give has the potential to cause great harm.
This feels like a weird interpretation of Will’s comment, which doesn’t (in my view) imply that for-profit companies can’t do a lot of harm, but rather that if you start a company with the sole goal of making a profit, usually the worst outcome (with regards to your goal of making a profit) is that you go bankrupt.
As FTX just spectacularly demonstrated, Will was wrong. This is because even though FTX was ostensibly started with the sole goal of making a profit, it turns out there were other important implicit goals like “don’t steal billions of dollars from thousands of people”, implicit goals like that always exist, and failure to meet those implicit goals is very bad.
This sounds like a human form of alignment failure, specifically, the What Failure Looks Like story part I.
Here’s a link to it:
https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like
Should have called it, but I’ll do it now: It’s a double standard applied, so the comparison is not what you think.
Exactly, just as charities might unintentionally do harm, so can for-profit entities. Will’s statement erred in assuming financial viability for companies is the only dimension on which they can be assessed.
How much money was committed in grants that will now not be paid out?
Additionally it would be useful to know the distribution among cause areas for this money.
While some people are focused on figuring out what went wrong with FTX and why, the rest of us need to focus on mitigating the immediate damage from broken funding promises. It would be helpful to know the total scale of this situation.
Is there any reason why, when you commit to a grant, you cannot set aside the money as gold or index funds or some other reliable asset, and instead have to rely on a single company’s ability to pay in the future?
If you guys need help with ideas on improving oversight or adding internal audit reviews for future EA projects, let me know. It’s always the lack of governance measures that leads to these unfortunate events.
Recently, I listened to a podcast with Douglas Rushkoff about his new book describing recent encounters with billionaires. Combined with this incident, how does the EA movement take into account harm that may be caused by those in high-wealth-generating endeavors? Is that a factor in the total-good calculation of EA? Would philanthropy, society, and Sam Bankman-Fried have been better off if he had pursued his original interest in animal welfare instead of going into finance to fund the EA movement?
i don’t believe you.
Hey gang, I’m realising we need to effectively donate to all the people that SBF ripped off. Thinking about this logically, we now have major attention on EA, more than ever. If SBF has effectively crippled EA’s reputation, we need to strike back and shift the narrative; otherwise this movement might be dead overnight. Think about how severely this will limit the amount of $ donated effectively in future.
To be clear, this is an account that joined from Twitter to post this comment (link).
Oh wow, I didn’t understand this comment the first time I saw it and didn’t bother to click. In case there’s anyone else like me, the tweet says “Doing my part to get us refunded” with a screenshot of their comment.
Yep; I interpret this comment as an attempt to manipulate EAs, not as an honest good-faith proposal.
Even if Mitch is a committed EA (which seems unlikely), the comment fails to mention the obvious conflict of interest.
The losses are likely in the billions. Even assuming we could come up with (say) $100MM, that might move the needle from a reported 9.4B shortfall to . . . 9.3B. That’s not going to change the narrative in any meaningful way.
One could argue for the EA community returning every dollar ever received from FTX-aligned sources . . . but I’m just not sure how much even that would move the needle of public opinion. Especially because the bad PR will be heaviest in the next few weeks to months, and it would seem exceedingly difficult to come up with that amount of money that fast.
A couple of hours ago, I tweeted:
Reimbursing people for the money spent within the EA ecosystem (if the above conditions hold) might take years, but it strikes me as doable, and much more obviously “our job” than trying to undo all harm FTX caused.
That said:
This strikes me as unreasonable and panicky, and also as weirdly manipulative. The reason we should help people we harmed is because it’s the right thing to do, not because we need to “shift the narrative”.
I have the same objection here. If the facts shake out such that FTX indeed effectively stole money from people, and gave that money to EA projects, and no other channels suffice to reimburse the people who were stolen from, then as a community we should make it a priority to pay them back over time, up to the amount that was stolen from them and used by us.
I don’t know how likely it is that all those conditions will hold, but if they do hold, we should respond even if it’s useless for PR purposes and happens way too late to go viral on Twitter or whatever—because it’s the right thing to do. The reason to go fast would be because the victims will be more harmed if they’re parted from their money longer, not because going fast has better optics.
It does look increasingly to me like MIRI maybe couldn’t have plausibly benefited much from FTX, because the process whereby SBF selected EAs to work with was such as to exclude Bay Area people who knew about the early Alameda breakup, and this caused FTX money to go to Oxford-respectable longtermism and not to places like MIRI, not even dropping a mil there for old time’s sake while spending $135m on a stadium. I pinged SBF a couple of times to see if he wanted to have a conversation at some point; he never responded. I think there’s a legit sense in which SBF wasn’t one of ours, and it’s not clear to me that MIRI ought to be so easily lumped in with the groups that did potentially benefit.
This isn’t your fault, but you almost certainly “benefitted”—any (increased) funding from other EA funders is a counterfactual result of FTX generously funding other groups that otherwise would have competed for funds. And many regrantors certainly helped MIRI more indirectly by funding things you would have wanted that helped MIRI’s agenda in various ways.
This is not really a disagreement but rather nitpicking, but I noticed that according to https://intelligence.org/topcontributors/ MIRI did receive a donation from Alameda Research. Not a large one, but some money from the SBF ecosystem arrived at MIRI, apparently. But this does not really contradict the speculations you make about SBF avoiding certain Bay Area people.
Yep, well-picked nit, I was just told about that myself. Perfectly good substantive disagreement with the original thesis, imo, you don’t need to downplay it that much.
It also makes sense that the money would’ve come from the Alameda side (maybe in 2020 or early 2021 according to Wayback, somebody said) rather than the FTX side. Alameda would have had the Bay Areans, while FTX’s philanthropic side was constructed (exclusively?) out of Oxfordians.
Separate from the “Alameda did donate” point, I wouldn’t have predicted SBF to be excited about MIRI, because I modeled SBF and Future Fund as having very Will-MacAskill-y views of AI risk. (And of a constellation of other claims that are entangled with MIRI’s coolness, like “is causal decision theory good?”) That’s a very un-MIRI-ish set of heuristics and beliefs about the world, compared to the modal longtermist EA’s heuristics and beliefs.
I strongly think that those views are wrong, but they provide an alternative explanation for SBF liking Oxford more than the Bay, and I feel more wary of saying “the Bay shouldn’t help pay back money stolen by SBF and given to EA” insofar as SBF merely factually disagreed with rationalists a lot. (Also, I guess I lean toward the Bay helping out regardless.)
Apparently there was a $132K Alameda donation to MIRI in 2020 or early 2021. Didn’t actually know that.
Well, obviously they donated less to MIRI after they turned evil, and the stopping of MIRI donations was a huge red flag that we all should have noticed. Sage nod.
For the sake of completeness, here’s a thread with all the financial interactions between MIRI and FTX/Alameda/SBF: https://twitter.com/robbensinger/status/1595893840484843521
It’s worth pointing out that EA as a community does not directly generate revenue. If EA were to pay back funds already spent then those funds must come from new or existing donors, and that may be a particularly tough sell.
In addition, I think most of us are familiar with the Copenhagen Interpretation of Ethics. While returning donations from unethical sources would likely reflect well upon EA, soliciting new donations to pay back past donations may be perceived differently, especially in light of existing questions surrounding the knowledge and/or involvement of EA leadership.
I agree re soliciting money from non-EAs. But if the conditions hold for my proposal upthread, and the logic checks out upon further scrutiny, then I like the idea of committed large EA donors making this a priority.
And if they don’t cover the gap, I like the idea of committed small EA donors doing what we can here—not as the only thing we fund, but as part of our portfolio until the stolen money (if it was stolen) is returned.
I don’t have a lot to donate, but I would donate to a fund like that, if those conditions held. It seems to me like a “good citizenship” donation, and a donation in favor of integrity in a community I care about.
I don’t feel like I’m atoning for something bad I did, or for some sort of Communal Guilt—I don’t really see how that concept makes sense. But I do feel like I’d be contributing to EA being an honorable and principled thing, and also drawing a clear line in the sand that disincentivizes future “do evil things to give to EA” actions.
Seeking clarity: are you suggesting that the most appropriate course of action would be to return the present net balance of funds, but not specifically plan to use future funds to offset losses? And is that because doing so would raise the difficulty bar on acquiring new funds (“Why should I donate money to pay for someone else’s unethical actions?”), or because repaying already-spent funds, rather than spending future donations on more direct causes, would be of lower utility overall?
Suppose that Alice is a committed EA in good standing with tons of social ties to other EAs, and Alice can achieve a lot more stuff in the world because her EA associations help establish her benevolence, integrity, etc.
Alice then goes on a burglary spree and steals $100 each from Bob, Carol, and Dan.
Alice burns Bob’s $100.
Alice donates Carol’s $100 to FHI.
Alice holds on to Dan’s $100, and ends up giving it back to Dan (or a court returns the money).
FHI spends $70 of the stolen money, before learning that it was stolen. Alice gets arrested, etc.
My proposal in this hypothetical is:
FHI keeps the $30 (if this is compatible with the law of the land), and doesn’t give $70 back either. (Because I think the generalized norm “the onus is on the donation recipient to give everything back” would destroy a lot of projects and hurt a lot of people who are innocent bystanders that had no reason to believe the money was stolen.)
The larger EA community makes it a priority to internally raise $100 and give it to Carol.
(IRL, we could do this immediately because $100 is a small sum. If the sum is a lot larger, then we should try to pay it off over a few years, in a way that’s not thrashing a lot of EA careers and projects unnecessarily / keeps funding flows as stable as possible.)
EAs don’t make it a collective priority to raise money for Bob (or for Dan).
This is the norm I’m proposing in this super-simplified hypothetical. I’m pretty uncertain about how relevantly analogous it is to the FTX situation, however. So I’m separately interested in hearing disagreement with the proposed norm (if anyone disagrees), and disagreement with whether it applies here.
Why not give back unspent funds though?
Comparing two scenarios:
1. I snap my fingers to instantly teleport all unspent money in this category (totaling $n) back to its owners.
2. I snap my fingers to instantly start a conference call with the thirty largest EA donors that’s guaranteed to result in commitments to pay $n back to the same people ASAP.
I’d expect 1 to cause a lot of long-term damage to a big chunk of the world’s quality-adjusted philanthropic efforts. I don’t know what spread of things has received FTX money, but I’m imagining orgs and projects shuttering (orgs and projects that would have been fine if they’d had a year or two to secure new funding, rather than a few days or weeks), individuals suddenly scrambling to make rent or try to apply to other grant processes, and a cascade of EAs (and EA allies) being unable to keep formal and informal commitments they’ve already made (e.g., to employees).
This seems like a weird form of destruction to inflict on a bunch of bystanders who didn’t know anything about FTX’s dysfunctions. Money is fungible! If we think this is important, then it seems perverse to make things right in a way that messes up a bunch of other people’s lives or careers, when we have other options as a community.
Obviously not every project will be affected in a big way by giving back the money. But I’m still wary of orgs doing stuff that might contribute to a virtue-signaling equilibrium that destroys a lot of value (“I gave back X, which makes my friend look like an asshole if she doesn’t give back Y, which makes her friends look like assholes if...”), if there are in fact any better options.
And I feel broadly icky about putting the moral onus on recipient orgs and individuals to make things right (when they bear no more blame than any of the rest of us), as opposed to taking ownership of the responsibility at a community level. (Not the full responsibility for everything bad FTX-ish folks have ever done; but community-level responsibility for any wrongfully acquired funds the community has used.)
The nature of insolvency and fraud is that someone ends up getting hit with pain. The question is who should have to bear it. I don’t think it is correct to say that the community was the beneficiary of any ill-gotten funds; the true beneficiaries were the public. “The community” is even further removed from FTX than grantees, and the people who won’t be getting bednets (or whatever) because we diverted funds from effective charities are at an even further remove. I do not like the idea of diverting the pain from FTX depositors to bednet recipients unless that is morally obligatory. Without suggesting that the depositors are to blame, they are both better off and were more connected to the incident than the bednet recipients.
And I don’t see a moral obligation to return spent funds; the grantees were contracted to perform certain work that FTX-aligned people wanted done and they did it. We didn’t expect ordinary employees at Enron to give back their earned wages because their payor was a massive fraud.
However, the Enron employees were not morally entitled to unearned wages when that money could have gone to fraud victims. So too here. Because I think there generally is a moral obligation to return unused funds if those were the product of fraud, I am more inclined toward the idea of the community replacing those funds on behalf of grantees. But I would conceptually frame that a bit differently: we would be providing a grant to former FTX grantees to allow them to meet their obligation to refund unspent grant funds, so that their charitable work can continue. Maybe it’s just a matter of optics, but that feels subtly different than seeking monies to clean up the messes created by FTX.
The former, with the latter as its first-order cause.
I think the most appropriate course of action won’t be discernable for a while. But returning the present net balance of funds is clearly the correct move, while the ethical decision regarding returning past funds is ambiguous.
Someone raised the point that if EAs try to offset the harm SBF caused, this creates a moral hazard of the form “people may be more willing to cause harm in the name of EA in the future, expecting other EAs to offset that harm”.
I think that’s a stronger objection to “offset all of SBF’s harms” (which I don’t endorse anyway) than to “collectively give back the amount EA received”, but maybe it will shift my view once I’ve chewed on it a bit more. At a glance, I don’t think I’d expect this concern to be a dominant factor?
I disagree with paying back being obviously the right thing to do. The implications of “pulling back” money whenever something large shady appears would be difficult to handle, and it would be costly. (If you are arguing that the current case is special and in future cases of alleged / proven financial crime we should evaluate case by case then I am very interested in what the specific argument is.)
I would look into options for vetting integrity of big donors in the future as the right thing to do though.
I agree with this. Actually, I think we could go further and initiate some form of productive public dialog with the wider world on this question. “Do you think that we ought to take money in the EA ecosystem and pay it back to people [potentially] defrauded by FTX, or should we put this money into the charities for which it was intended?”
That seems like responsible stewardship, and I’d expect people’s opinions would vary widely.
The question would be how we’d make such decisions, how we’d hold this dialog, and how much time and energy we’d want to put into that endeavor. One way might be to solicit input from groups that we think ought to have a say: charities we donate to, ethical thinkers, community leaders, and people who lost money in the FTX meltdown, to name a few. We could potentially make the decision by running some sort of vote, which could be as sophisticated as we like. We could vote on whether to return the money, but also how much of it should be returned.
Just brainstorming here, I don’t expect that these are the ideal way to deal with this. Just a starting point.
I say give the money back (at the EA-community level, not the individual org/project level), and let the theft victim decide if they want to redonate their money to charity. It’s their money, after all!
(If the thing that happened is basically theft, and if they don’t get their money back by some other channel. I’d be interested to hear counter-arguments on either of those fronts, or on the general policy I’m suggesting.)
Public discussions sound great too, but we can invite a public conversation without using that as a reason to put off making decisions about this.
(I do think we should think hard about the relevant factors here, before acting. There are a ton of things I expect to be confusing here; maybe the whole idea just doesn’t make sense because of some subtlety about comparing the counterfactuals. But I’m guessing a large public conversation wouldn’t give us much additional insight here, and would be useful for reasons other than improving the quality of our decision.)
Hi effectenator. I do appreciate the concern for EA’s reputation. (And, charitably, I assume you’re also factoring in that this might just be the right thing to do anyway regardless of the effects on EA’s image.)
But I also wonder: Do you think non-complicit FTX employees/freelancers should donate their salaries to these people too?
My guess is that the large majority of people would answer “No.” But I’m not sure that this case is all that different from grants. Future Fund wasn’t distributing gifts; it was distributing money to fund work. In fact, when I received an EA grant (not from FTX), I assumed that legally it would be classified as self-employment income and indeed that also turned out to be the main recommendation of the people giving me the grant, so it’s possible that in many cases these are in fact legally the same thing.
I think there is still a weak case for people paying back (especially unspent) salaries / severance pay / fees / grants—perhaps staff and grantees had a moral obligation to have done more due diligence before accepting money from FTX, and this is plausibly a much stronger consideration than the “prudential” obligations of people to do sufficient due diligence before trusting FTX with their money.
But still, in all of these scenarios you end up with a situation where many people who owned $XXX a week ago and were making financial decisions accordingly, now no longer have it and are potentially in financial difficulty. You’d just be changing who the victims are.
I think your metaphor is a good one, but it doesn’t entirely get you where you want to go.
Insolvency is all about distributing pain. I agree that non-executive employees (including indirect employees through grants) have a high priority moral claim for work already performed and should not be clawed back. There are also practical utilitarian reasons for this rule more generally.
However, I don’t think they have this moral superpriority status for work performed after the insolvency became known. Taking away the expectation for future compensation for future work that does not benefit those with high-priority moral claims on the insolvent entity is better than the alternative of foisting more losses on depositors than absolutely necessary. Depositors have actual losses, not losses of expectancy, and there are utilitarian reasons to grant them special protections over ordinary claimants.
Moreover, most individuals funded by grant work can find alternative employment in a fairly short period of time. So they are in a much better place to mitigate losses than depositors, which is another reason I think their claims are weaker.
Yes I think I agree on both counts.
[Edited to add: Maybe after including severance pay or equivalent? Sudden loss of income seems much worse than sudden loss of savings.]
EA is not an insurance fund. Beyond simply not being feasible, paying back all lost funds would create a significant moral hazard and attract bad actors who wish to appear trustworthy to the public.
Dear God no.
If a con artist happens to give money to a beggar, you don’t go and try to get the money back off the beggar, it’s just gone.
It’s rough, but rough things happen in this world. I’m not saying anyone deserved to lose their money, but it’s not like people were unaware that investing in crypto is inherently risky.
This raises a good point. Charities such as AMF or GiveDirectly giving back money would be harming people who are mostly (much) worse off than the average FTX customer who’s lost money.
However, most of the FTX charitable donations seem to have been (via the Future Fund) to longtermist orgs that pay people in rich countries high salaries. It’s not unreasonable to suggest that these orgs/people take significant pay cuts to give money back, especially if they can still do their work living more frugally.
However, the difficulty lies with the amounts. The Future Fund has given out ~$200M. If the money lost by FTX customers amounts to ~$10B, that’s not going to make a dent. You can either pay everyone back 2%, which might be appreciated but mostly feel like nothing (or perhaps even a slap in the face?), or you could preferentially pay certain people back 100%, but then how do you choose who is in and who is out? (A lottery? Pay back smaller balances first? Means testing to pay back the poorest first?) Or should EA/Longtermism keep paying money back until everyone is whole? Seems like it would take years at best. It’s a mess :(
A lottery seems most reasonable to me. E.g.: keep selecting random individuals who haven’t gotten their money back; pay them (maybe up to some high cap?); repeat until you’ve paid out a total of ~$200M.
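The lottery mechanism described above could be sketched roughly as follows. This is only an illustration of the idea, not a spec anyone proposed: the function name `lottery_payout`, the optional per-person cap, and the choice to let the last selected claimant receive a partial payment when the fund runs out are all my assumptions.

```python
import random

def lottery_payout(balances, fund, cap=None):
    """Randomly order claimants and repay each in full (up to an
    optional per-person cap) until the fund is exhausted.

    balances: dict mapping claimant -> amount lost
    fund: total amount available to pay out (e.g. ~$200M)
    cap: optional maximum payout per claimant

    The last claimant selected may get only a partial payment
    if the fund runs out mid-way.
    """
    payouts = {}
    remaining = fund
    order = list(balances)
    random.shuffle(order)  # the lottery: a uniformly random ordering
    for person in order:
        if remaining <= 0:
            break
        owed = balances[person] if cap is None else min(balances[person], cap)
        paid = min(owed, remaining)
        payouts[person] = paid
        remaining -= paid
    return payouts
```

With a ~$200M fund against ~$10B of losses, a scheme like this would make roughly 2% of claimants (by value) whole rather than giving everyone 2%, which is exactly the trade-off discussed above.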
I’m not sure if it’s that simple. It seems quite likely that with a lottery there would be at least a few cases of relatively rich people (say people who had >$200k and lost $100k of it on FTX) getting paid back. This would inevitably lead to (justified) anger and complaints from people who were left much worse off (e.g. those who only had $10k and lost all of it). Maybe it needs to be means tested for hardship? It would take a lot more work, but perhaps there could be an application process and vetting by an independent (from EA) panel? That would still be open to manipulation though I guess (e.g. people shouting the loudest on social media).
I’d vote for an algorithm that is transparent and has limited if any discretion. Maybe the end result would be somewhat suboptimal to discretionary independent-panel vetting, but it would be critical that the process be seen as fair.
Plus, a complicated procedure will eat up a lot of the funds intended for victims, unless you think the EA community should both return any FTX money and pay for an involved claims process. That goes beyond mere disgorgement.
Nice try—I like your on-the-nose username
Is stealing billions of deposits from your customers “effective altruism”? You people claiming to be “effective altruists” always end up being the most messed up and predatory. No honour. How can you trust the kind of people who happily let their wife get screwed by other men because it “increases utility”? Spiritually sick people.
This comment violates the Forum’s norms by being extremely rude, aggressive, and off-topic. It is inappropriate for the Forum. We’re banning michaelroberts for one month, and will expect the user to follow our norms to a high standard should they return (or they will be banned permanently — this is a bad start to their engagement with the Forum).
Did that hit too close to home? Judging from the number of votes, it looks like it did. It should. Sort yourselves out, you sick people.
I’m not sure why I don’t see any answers to your questions. I would say (1) the community consensus is very clear that stealing billions is not “effective altruism”; (2) most EAs think that people with non-traditional sexual relationships can still be good people, which is not an opinion limited to EAs; and (3) we’re not generally messed up, predatory or dishonorable. But don’t take my word for it—try giving thousands of dollars to save children from malaria, or stop global warming, as I have, and see how predatory and dishonorable it makes you. I think you’ll find it actually doesn’t make you any worse than you were before.