Update Apr. 4: I’ve now spoken with another EA who was involved in EA’s response to the FTX implosion. To summarize what they said to me:
They thought that the lack of an investigation was primarily due to general time constraints and various exogenous logistical difficulties. At the time, they thought that setting up a team who could overcome the various difficulties would be extremely hard for mundane reasons such as:
thorough, even-handed investigations into sensitive topics are very hard to do (especially if you start out low-context);
this is especially true when they are vaguely scoped and potentially involve a large number of people across a number of different organizations;
“professional investigators” (like law firms) aren’t very well-suited to do the kind of investigation that would actually be helpful;
legal counsels were generally strongly advising people against talking about FTX stuff in general;
various old confidentiality agreements would make it difficult to discuss what happened in some relevant instances (e.g. meetings that had Chatham House Rules);
it would be hard to guarantee confidentiality in the investigation when info might be subpoenaed or something like that;
and a general plethora of individually-surmountable but collectively-highly-challenging obstacles.
They flagged that at the time, most people involved were already in an exceptionally busy and difficult time, and so had less bandwidth for additional projects than usual.
A caveat here is that the EV board did block some people from speaking publicly during the initial investigation into EV’s legal situation. That investigation ended back in the summer of 2023.
Julia Wise and Ozzie Gooen wrote on the EA Forum that this is a potentially useful project for someone to take on; as far as the person I spoke to knew, this isn’t something any EA leadership did or would try to stop. Their impression was that Julia and Ozzie did indeed try to investigate what reforms should happen, though they didn’t follow that situation closely.
The person I spoke to didn’t want to put words in the mouth of EA leaders, and their information is mostly from ~1 year ago and might be out of date. But to the extent some people aren’t currently champing at the bit to make this happen, their impression (with respect to the EA leaders they have interacted with relatively extensively) is that this has little to do with a desire to protect the reputation of EA or of individual EAs.
Rather, their impression is that for a lot of top EA leaders, this whole thing is a lot less interesting, because those EAs think they know what happened (and that it’s not that interesting). So the choice is like “should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?” And maybe they are underrating how interesting others would find it, but that made the whole idea not so important-seeming (at least in the early days after FTX’s collapse, relative to all the other urgent and confusing things swirling around in the wake of the collapse) from their perspective.
I vouch for this person as generally honest and well-intentioned. I update from the above that community leaders are probably less resistant to doing some kind of fact-finding inquiry than I thought. I’m hoping that this take is correct, since it suggests to me that it might not be too hard to get an SBF postmortem to happen now that the trial and the EV legal investigation are both over (and now that we’re all talking about the subject in the first place).
If the take above isn’t correct, then hopefully my sharing it will cause others to chime in with further objections, and I can zigzag my way to understanding what actually happened!
I shared the above summary with Oliver Habryka, and he said:
Hmm, I definitely don’t buy the “this has little to do with EA leadership desire to protect their reputation”. A lot of the reason for the high standards is for PR reasons.
I think people are like “Oh man, doing a good job here seems really hard, since doing it badly seems like it would be really costly reputation-wise. But if someone did want to put in the heroic effort to do a good enough job to not have many downsides, then yeah, I would be in favor of that. But that seems so hard to do that I don’t really expect it to happen.”
Like, the primary thing that seems to me the mistake is the standard to which any such investigation is being held before people consider it net-positive.
I’ll also share Ozzie Gooen’s Twitter take from a few days ago:
My quick guess (likely to be wrong!)
There are really only a few people “on top” of EA.
This would have essentially been “these top people investigating each other, and documenting that for others in the community”
These people don’t care that much about being trusted in the community. They fund the community and have power
the EA community really doesn’t have much power over the funders/leaders.
These people generally feel like they understand the problem well enough.
And, some corrections to my earlier posts about this:
I said that “there was a narrow investigation into legal risk to Effective Ventures last year”, which I think may have overstated the narrowness of the investigation a bit. My understanding is that the investigation’s main goal was to reduce EV’s legal exposure, but to that end the investigation covered a somewhat wider range of topics (possibly including things like COI policies), including things that might touch on broader EA mistakes and possible improvements. But it’s hard to be sure about any of this because details of the investigation’s scope and outcomes weren’t shared, and it doesn’t sound like they will be.
I said that Julia Wise had “been calling for the existence of such an investigation”; Julia clarifies on social media, “I would say I listed it as a possible project rather than calling for it exactly.”
Specifically, Julia Wise, Ozzie Gooen, and Sam Donald co-wrote a November 2023 blog post that listed “comprehensive investigation into FTX<>EA connections / problems” as one of four “projects and programs we’d like to see”, saying “these projects are promising, but they’re sizable or ongoing projects that we don’t have the capacity to carry out”. They also included this idea in a list of Further Possible Projects on EA Reform.
(I’m going to wrap up a few disparate threads together here, and this will probably be my last comment on this post, ~modulo a reply for clarification’s sake. Happy to discuss further with you Rob or anyone via DMs/Forum Dialogue/whatever)
(to Rob & Oli—there is a lot of inferential distance between us and that’s ok, the world is wide enough to handle that! I don’t mean to come off as rude/hostile and apologies if I did get the tone wrong)
Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet… I can’t help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don’t like the EA Forum? Because they’re scared of retaliation? It feels like this would be useful and important information for the community to know.
I’m also not sure what to make of Habryka’s response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but his approach does seem to me to come off as a “social experiment in radical honesty and perfect transparency”, which is a vibe I often get from the Lightcone-adjacent world. And like, with all due respect, I’m not really interested in that whole scene. I’m more interested in questions like:
1. Were any senior EAs directly involved in the criminal actions at FTX/Alameda?
2. What warnings were given about SBF to senior EAs before the FTX blowup, particularly around the 2018 Alameda blowup, as recounted here?
   a. If these warnings were ignored, what prevented people from deducing that SBF was a bad actor?[1]
   b. Critically, if these warnings were accepted as true, who decided to keep this a secret, to suppress it from the community at large, and not to act on it?
3. Why did SBF end up with such a dangerous set of beliefs about the world? (I think they’re best described as ‘risky beneficentrism’ - see my comment here and Ryan’s original post here)
4. Why have the results of these investigations, or some legally-cleared version, not been shared with the community at large?
5. Do senior EAs have any plan to respond to the hit to EA morale as a result of FTX and the aftermath, along with the intensely negative social reaction to EA, apart from ‘quietly hope it goes away’?
Writing it down, 2.b. strikes me as what I mean by ‘naive consequentialism’ if it happened. People had information that SBF was a bad character who had done harm, but calculated (or assumed) that he’d do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn’t really seem pertinent to me here, as interesting as the philosophical discussion can be.
tl;dr—I think there’s a difference between a discussion about what norms EA ‘should’ have, or senior EAs should act by, especially in the post-FTX and influencing-AI-policy world, and the ‘minimal viable information-sharing’ that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming that, and I applaud you/Oli pushing for it, but sometimes I wish you would both also be less vague. Some of us don’t have the EA history and context that you both do!
epilogue: I hope Rebecca is doing well. But this post & all the comments make me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post FTX. Wounds might have faded, but they haven’t healed 😞
the choice is like “should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?”
I’m not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
(This seems almost definitionally true: if something was clear evidence of fraud then it would just be described as “clear evidence of fraud”; describing something as a “rumor” seems to almost definitionally imply a substantial probability that the rumor is false or at least unclear or hard to update on.[1])
E.g. If I imagine a bank whose primary fraud detection mechanism was “hope the executives hear rumors of malfeasance,” I would not feel very satisfied with their risk management. If fraud did occur, I wouldn’t expect their primary process improvement to be “see if the executives could have updated from rumors better.” I am therefore somewhat confused by how much interest there seems to be in investigating how well the rumor mill worked for FTX.[2]
To be clear: I assume that the rumor mill could function more efficiently, and that there’s probably someone who heard “SBF is often overconfident” or whatever and could have updated from that information more accurately than they did. (If you’re interested in my experience, you can read my comments here.) I’m just very skeptical that a new and improved rumor mill is substantial protection against fraud, and don’t understand what an investigation could show me that would change my mind.[3] Moreover, even if I somehow became convinced that rumors could have been effective in the specific case of FTX, I will still likely be skeptical of their efficacy in the future.
Relatedly, I’ve heard people suggest that 80k shouldn’t have put SBF on their website given some rumors that were floating around. My take is that the base rate of criminality among large donors is high, having a rumor mill does not do very much to lower that rate, and so I expect to believe that the risk will be relatively high for high net worth people 80k puts on the front page in the future, and I don’t need an investigation to tell me that.
To make some positive suggestions about things I could imagine learning from/finding useful:
I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.[4]
At least, it seems like this should be our first port of call. Maybe we can’t actually implement industry best practices around risk management, but it feels like we should at least try before giving up and doing the rumor mill thing.
My understanding is that a bunch of work has gone into making regulations so that publicly traded companies are less likely to commit fraud, and these regulations are somewhat effective, but they are so onerous that many companies are willing to stay private and forgo billions of dollars in investment just to not have to deal with them. I suspect that EA might find itself in a similarly unfortunate situation where reducing risks from “prominent individuals” requires the individuals in question to do something so onerous that no one is willing to become “prominent.” I would be excited about research into a) whether this is in fact the case, and b) what to do about it, if so.
Some people probably disagree with my claim that rumor mills are ineffective. If so, research into this would be useful. E.g. it’s been on my backlog for a while to write up a summary of Why They Do It, or a fraud management textbook.
Why They Do It is perhaps particularly useful, given that one of its key claims is that, unlike with blue-collar crime, character traits don’t correlate well with propensity to commit white-collar crimes, and I think this may be a crux between me and people who disagree with me.
All that being said, I think I’m weakly in favor of someone more famous than me[5] doing some sort of write up about what rumors they heard, largely because I don’t expect the above to convince many people, and I think such a write up will mostly result in people realizing that the rumors were not very motivating.
One possible reason for this is that people are aiming for goals other than detecting fraud, e.g. they are hoping that rumors could also be used to identify other types of misconduct. I have opinions about this, but this comment is already too long so I’m not going to address it here.
e.g. I appreciate Nate writing this, but if in the future I learned that a certain person has spoken to Nate, I’m not going to update my beliefs about the likelihood of them committing financial misconduct very much (and I believe that Nate would agree with this assessment)
Part of why I haven’t prioritized this is that there aren’t a lot of earning to give companies anymore, but I think it’s still potentially worth someone spending time on this
I feel like “people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed” is being classed as “rumour” here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word “rumour” conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if anyone in leadership argued “oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside”. That’s a signal someone is a bad leader in my view, which is useful knowledge going forward. (I’m not saying it is instant proof they should never hold leadership positions ever again: I think quite a lot of people might have said something like that in similar circumstances. But it is a bad sign.)
I feel like “people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed” is being classed as “rumour” here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word “rumour” conjures.
I agree with this.
[...] I feel like we still want to know if anyone in leadership argued “oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside”. That’s a signal someone is a bad leader in my view, which is useful knowledge going forward.
I don’t really agree with this. Everyone has some probability of turning out to be dodgy; it matters exactly how strong the available evidence was. “This EA leader writes people off immediately when they have even a tiny probability of being untrustworthy” would be a negative update about the person’s decision-making too!
I meant something in between “is” and “has a non-zero chance of being”, like assigning significant probability to it (obviously I didn’t have an exact number in mind), and not just for base rate reasons about believing all rich people to be dodgy.
I’m not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
Huh, the same reason you cite for why you are not interested in doing an investigation is one of the key reasons why I want an investigation.
It seems to me that current EA leadership is basically planning to continue a “our primary defense against bad actors is the rumor mill” strategy. Having an analysis of how that strategy did not work, and in some sense can’t work for things like this seems like it’s one of the things that would have the most potential to give rise to something better here.
Do you think “[doing an investigation is] one of the things that would have the most potential to give rise to something better here” because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives to not be adopted, even if found?
My current sense is that there is no motivation to find an alternative because people mistakenly think it works fine enough and so there is no need to try to find something better (and also in the absence of an investigation and clear arguments about why the rumor thing doesn’t work, people probably think they can’t really be blamed if the strategy fails again)
Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).
Is that correct? If so, I’m interested in understanding why – usually if you want to do a thing, the best approach is to just do that thing.
It seems to me that a case study of how exactly FTX occurred, and where things failed, would be among one of the best things to use to figure out what thing to do instead.
Currently the majority of people who have an interest in this are blocked by not really knowing what worked and didn’t work in the FTX case, and so probably will have trouble arguing compellingly for any alternative, and also lack some of the most crucial data. My guess is you might have the relevant information from informal conversations, but most don’t.
I do think also just directly looking for an alternative seems good. I am not saying that doing an FTX investigation is literally the very best thing to do in the world, it just seems better than what I see EA leadership spending their time on instead. If you had the choice between “figure out a mechanism detecting and propagating information about future adversarial behavior” and “do an FTX investigation”, I would feel pretty great about both, and honestly don’t really know which one I would prefer. As far as I can tell neither of these things is seeing much effort invested into it.
Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by “not really knowing what worked and didn’t work in the FTX case” – even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldn’t rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]
I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, and getting up to speed with it is time-consuming, and also ~all of the practices are based under assumptions like “the risk manager has some amount of formal authority” which aren’t true in EA.
(And to be clear: I think these are very big blockers! They just aren’t resolved by doing an investigation.)
Or maybe more specifically: would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud but were just implemented poorly last time, I would love to hear that!
Again, under the assumption that your goal is fraud detection.
It seems like a goal of ~”fraud detection” not further specified may be near the nadir of utility for an investigation.
If you go significantly narrower, then how EA managed (or didn’t manage) SBF fraud seems rather important to figuring out how to deal with the risk of similar fraudulent schemes in the future.[1]
If you go significantly broader (cf. Oli’s reference to “detecting and propagating information about future adversarial behavior”), the blockers you identify seem significantly less relevant, which may increase the expected value of an investigation.
My tentative guess is that it would be best to analyze potential courses of action in terms of their effects on the “EA immune system” at multiple points of specificity, not just close relations of a specific known pathogen (e.g., SBF-like schemes), a class of pathogens (e.g., “fraud”), or pathogens writ large (e.g., “future adversarial behavior”).
Given past EA involvement with crypto, and the base rate of not-too-subtle fraud in crypto, the risk of similar fraudulent schemes seems more than theoretical to me.
I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.
I think that would be worth exploring. I suspect you are correct that full Sarbanes-Oxley treatment would be onerous.
On the other hand, I don’t see how a reasonably competent forensic accountant or auditor could have spent more than a few days at FTX (or at Madoff) without having a stroke. Seeing the commingled bank accounts would have set alarm bells ringing in my head, at least. (One of the core rules of legal ethics is that you do not commingle your money with that of your clients, because experience teaches that all sorts of horrible things can and often do happen.)
I certainly don’t mean to imply that fraud against sophisticated investors and lenders is okay, but there is something particularly bad about straight-up conversion of client funds like at FTX/Madoff. At least where hedge funds and big banks are concerned, they have the tools and access to protect themselves if they so wish. Moreover, the link between the fraud and the receipt of funds is particularly strong in those cases—Enron was awash in fraud, but it wouldn’t be fair to say that a charity that received a grant from Enron at certain points in time was approximately and unknowingly in possession of stolen funds.
Thankfully, procedures meant to ferret out sophisticated Enron-style fraud shouldn’t be necessary to rule out most straight-up conversion schemes. Because of the risk that someone will rat the fraudsters out, my understanding is that the conspiracy usually is kept pretty small in these sorts of frauds. That imposes a real limit on how well the scheme will withstand even moderate levels of probing with auditor-level access.
If you want a reference class of similar frauds, here is the prosecution’s list of cases (after the Booker decision in 2005) with losses > $100MM and fraud type of Ponzi scheme, misappropriation, or embezzlement:
For example, one might be really skeptical if auditing red flags associated with prior frauds are present. Madoff famously had his audits done by a two-person firm that reported not conducting audits. FTX was better, but apparently still used “questionable” third-tier firms that “do audit a few public companies but none of the size or complexity of FTX.” Neither “the Armanino nor the Prager Metis audit reports for 2021 provides an opinion on the FTX US or FTX Trading internal controls over accounting and financial reporting”—and the audit reports tell the reader as much (same source). The article, written by an accounting lecturer at Wharton, goes on to describe other weirdness in the audit reports. Of course, that’s not foolproof—Enron had one of the then-Big Five accounting firms, for instance.
Catching all fraud is not realistic . . . for anyone, much less a charitable social movement. But some basic checks, to make fairly sure the major or whole basis for the company’s or the individual’s wealth is not a fraudulent house of cards, seem potentially attainable at a reasonable burden level.
I guess the question I have is, if the fraud wasn’t noticed by SBF’s investors, who had much better access to information and incentives to find fraud, why would anyone expect the recipients of his charitable donations to notice it? If it was a failure of the EA movement not to know that FTX was fraudulent, isn’t it many times more of a failure that the fraud was unnoticed by the major sophisticated investment firms that were large FTX shareholders?
I think investing in FTX was genuinely a good idea, if you were a profit maximizer, even if you strongly suspected the fraud. As Jason says, as an investor, losing money due to fraud isn’t any worse than losing money because a company fails to otherwise be profitable, so even assigning 20%-30% probability to fraud for a high-risk investment like FTX where you are expecting >2x returns in a short number of years will not make a huge difference to your bottomline.
In many ways you should expect being the kind of person who is willing to commit fraud to be positively associated with returns, because doing illegal and fraudulent things means that the people who run the organization take on massive risk where you are not exposed to the downside, but you are exposed to the upside. It’s not worth it to literally invest in fraud, but it is worth it to invest in the kind of company where the CEO is willing to go to prison, since you don’t really have any risk of going to prison, but you get the upside of the legal risk they take on (think of Uber blatantly violating laws until they established a new market, which probably exposed leadership to substantial legal risk, but investors just got to reap the profits).
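The arithmetic behind the “20%-30% probability of fraud doesn’t change much” claim can be sketched quickly (the numbers below are purely illustrative assumptions, not figures from the source):

```python
# Expected-value sketch: even a sizable probability of total-loss fraud
# leaves a high-variance investment attractive if the legitimate upside
# is large enough. All inputs are hypothetical.

def expected_multiple(p_fraud: float, multiple_if_legit: float,
                      recovery_if_fraud: float = 0.0) -> float:
    """Expected return multiple on invested capital."""
    return (1 - p_fraud) * multiple_if_legit + p_fraud * recovery_if_fraud

# An investor expecting ~2.5x returns who assigns 25% probability to fraud:
print(expected_multiple(0.25, 2.5))  # 1.875 -- still well above break-even
# The same bet with no suspected fraud risk:
print(expected_multiple(0.0, 2.5))   # 2.5
```

On these assumed numbers, suspecting fraud shaves the expected multiple from 2.5x to roughly 1.9x, which is still a strong expected return for a short holding period; that is the sense in which the fraud probability “will not make a huge difference to your bottomline.”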
I wasn’t suggesting we should expect this fraud to have been found in this case with the access that was available to EA sources. (Perhaps the FTXFF folks might have caught the scent if they were forensic accountants—but they weren’t. And I’m not at all confident on that in any event.) I’m suggesting that, in response to this scandal, EA organizations could insist on certain third-party assurances in the future before taking significant amounts of money from certain sources.
Why the big money was willing to fork over nine figures each to FTX without those assurances is unclear to me. But one observation: as far as a hedge fund or lender is concerned, a loss due to fraud is no worse than a loss due to the invested-in firm being outcompeted, making bad business decisions, experiencing a general crypto collapse, getting shut down for regulatory issues, or any number of scenarios that were probably more likely ex ante than a massive conversion scheme. In fact, such a scheme might even be less bad to the extent that the firm thought it might get more money back in a fraud loss than from some ordinarily-business failure modes. Given my understanding that these deals often move very quickly, and the presence of higher-probability failure modes, it is understandable that investors and lenders wouldn’t have prioritized fraud detection.
In contrast, charitable grantees are much more focused in their concern about fraud; taking money from a solvent, non-fraudulent business that later collapses doesn’t raise remotely the same ethical, legal, operational, and reputational concerns. Their potential exposure in that failure mode is likely several times larger than that of the investors/lenders after all non-financial exposures are considered. They are also not on a tight time schedule.
legal counsels were generally strongly advising people against talking about FTX stuff in general
Will MacAskill waited until April to speak fully and openly, on the extra-cautious advice of legal counsel. If that period has ended to the point that Will could speak to the matter of the FTX collapse, and the before and after, had he ever wanted to, then surely almost everyone else could do the same now. The barrier or objection of not talking on the strong advice of legal counsel seems like it would be null for most people at this point.
Edit: in the 2 hours since I first made this comment, I’ve read most of the comments with arguments both for and against why someone should begin pursuing at least some parts of what could constitute an overall investigation as has been suggested. Finding the arguments for doing so far better than the arguments against, I have now decided to personally begin pursuing the below project. Anyone interested in helping or supporting me in that vein, please reply to this comment, or contact me privately. Any number of messages I receive along the lines of “I think this is a bad idea, I disagree with what you intend to do, I think this will be net negative, please don’t do this”, etc., absent other arguments, are very unlikely to deter me. On the contrary, if anything, such substanceless objections may motivate me to pursue this end with more vigour.
I’m not extremely confident I could complete an investigation of the whole of the EA community’s role in this regard at the highest level all by myself, though I am now offering to investigate or research parts of this myself. Here’s some of what I could bring to the table.
I’d be willing to do some relatively thorough investigation from a starting point of being relatively high-context. For those who wouldn’t think I’d be someone who knows a lot of context here, this short form post I made a while ago could serve as proof of concept I have more context than you might expect. I could offer more information, or answer more questions others have, in an attempt to genuinely demonstrate how much context I have.
I have far fewer time constraints than perhaps most individuals in the EA community who might be willing or able to contribute to some aspect of such an investigation. Already on my own time, I occasionally investigate issues in and around EA by myself. I intend to do so more in the future. I’d be willing to research more specific issues on my own time if others were to provide some direction. Some of what I might pursue further may be related to FTX anyway, without urging from others.
I’d be willing to volunteer a significant amount of time doing so, as I’m not currently working full-time and may not be working full-time in the foreseeable future. If the endeavour required a certain amount of work or progress achieved within a certain time frame, I may need to be hired in some capacity to complete some of the research or investigating. I’d be willing to accept such an opportunity as well.
Having virtually no conflicts of interest, there’s almost nothing anyone powerful in or around EA could hold over me to attempt to stop me from investigating.
I’m champing at the bit to make this happen probably about as much as anyone.
I would personally find the contents of any aspect of such an investigation to be extremely interesting and motivating.
I wouldn’t fear any retaliation whatsoever. Some attempts or threats to retaliate against me could indeed be advantageous for me, as I am confident they would fail to achieve their desired goals, and would thus serve as evidence to others that any further such attempts would be futile wastes of effort.
I am personally in semi-regular contact or have decent rapport with some whistleblowers or individuals who retain private information about events related to the whole saga of FTX dating back to 2018. They, or their other peers who’ve also exited the EA community in the last several years, may not be willing to talk freely with most individuals in EA who might participate in such an investigation. I am very confident at least some of them would be willing to talk to me.
I’m probably less nervous personally about speaking up or out about anything EA-related, i.e., more willing to be radically transparent and honest, than most people who have continuously participated in the EA community for over a decade. I suspect that includes even you and Oliver Habryka, who have already been noted in other comments here as among the least nervous in that cohort. Notably, that cohort may at this point number no more than a few hundred people.
The goal of any such investigation I’d be most motivated to accomplish would be producing common-knowledge documents that help as large a subset of the EA community as possible, if not the whole community, learn what happened and what could be done differently in the future. I’d be much more willing to share such a document widely than most other people who might be willing or able to produce one.
The person I spoke to didn’t want to put words in the mouth of EA leaders, and their information is mostly from ~1 year ago and might be out of date. But to the extent some people aren’t currently champing at the bit to make this happen, their impression (with respect to the EA leaders they have interacted with relatively extensively) is that this has little to do with a desire to protect the reputation of EA or of individual EAs.
Rather, their impression is that for a lot of top EA leaders, this whole thing is a lot less interesting, because those EAs think they know what happened (and that it’s not that interesting). So the choice is like “should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?” And maybe they are underrating how interesting others would find it, but that made the whole idea not so important-seeming (at least in the early days after FTX’s collapse, relative to all the other urgent and confusing things swirling around in the wake of the collapse) from their perspective.
I vouch for this person as generally honest and well-intentioned. I update from the above that community leaders are probably less resistant to doing some kind of fact-finding inquiry than I thought. I’m hoping that this take is correct, since it suggests to me that it might not be too hard to get an SBF postmortem to happen now that the trial and the EV legal investigation are both over (and now that we’re all talking about the subject in the first place).
If the take above isn’t correct, then hopefully my sharing it will cause others to chime in with further objections, and I can zigzag my way to understanding what actually happened!
I shared the above summary with Oliver Habryka, and he said:
I’ll also share Ozzie Gooen’s Twitter take from a few days ago:
And, some corrections to my earlier posts about this:
I said that “there was a narrow investigation into legal risk to Effective Ventures last year”, which I think may have overstated the narrowness of the investigation a bit. My understanding is that the investigation’s main goal was to reduce EV’s legal exposure, but to that end the investigation covered a somewhat wider range of topics (possibly including things like COI policies), including things that might touch on broader EA mistakes and possible improvements. But it’s hard to be sure about any of this because details of the investigation’s scope and outcomes weren’t shared, and it doesn’t sound like they will be.
I said that Julia Wise had “been calling for the existence of such an investigation”; Julia clarifies on social media, “I would say I listed it as a possible project rather than calling for it exactly.”
Specifically, Julia Wise, Ozzie Gooen, and Sam Donald co-wrote a November 2023 blog post that listed “comprehensive investigation into FTX<>EA connections / problems” as one of four “projects and programs we’d like to see”, saying “these projects are promising, but they’re sizable or ongoing projects that we don’t have the capacity to carry out”. They also included this idea in a list of Further Possible Projects on EA Reform.
(I’m going to wrap up a few disparate threads together here, and this will probably be my last comment on this post, ~modulo a reply for clarification’s sake. Happy to discuss further with you Rob, or anyone, via DMs/Forum Dialogue/whatever)
(to Rob & Oli—there is a lot of inferential distance between us and that’s ok, the world is wide enough to handle that! I don’t mean to come off as rude/hostile and apologies if I did get the tone wrong)
Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet… I can’t help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don’t like the EA Forum? Because they’re scared of retaliation? It feels like this would be useful and important information for the community to know.
I’m also not sure what to make of Habryka’s response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but his approach does seem to me to come off as a “social experiment in radical honesty and perfect transparency”, which is a vibe I often get from the Lightcone-adjacent world. And like, with all due respect, I’m not really interested in that whole scene. I’m more interested in questions like:
1. Were any senior EAs directly involved in the criminal actions at FTX/Alameda?
2. What warnings were given about SBF to senior EAs before the FTX blowup, particularly around the 2018 Alameda blowup, as recounted here?
a. If these warnings were ignored, what prevented people from deducing that SBF was a bad actor?[1]
b. Critically, if these warnings were accepted as true, who decided to keep this a secret, to suppress it from the community at large, and not act on it?
3. Why did SBF end up with such a dangerous set of beliefs about the world? (I think they’re best described as ‘risky beneficentrism’ - see my comment here and Ryan’s original post here)
4. Why have the results of these investigations, or some legally-cleared version, not been shared with the community at large?
5. Do senior EAs have any plan to respond to the hit to EA morale as a result of FTX and its aftermath, along with the intensely negative social reaction to EA, apart from ‘quietly hope it goes away’?
Writing it down, 2.b. strikes me as what I mean by ‘naive consequentialism’ if it happened. People had information that SBF was a bad character who had done harm, but calculated (or assumed) that he’d do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn’t really seem pertinent to me here, as interesting as the philosophical discussion can be.
tl;dr—I think there’s a difference between a discussion of what norms EA ‘should’ have, or senior EAs should act by, especially in the post-FTX and influencing-AI-policy world, and the ‘minimal viable information-sharing’ that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming that, and I applaud you/Oli for pushing for it, but sometimes I wish you would both also be less vague. Some of us don’t have the EA history and context that you both do!
epilogue: I hope Rebecca is doing well. But this post & all the comments make me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post FTX. Wounds might have faded, but they haven’t healed 😞
Not that people should have guessed the scale of his wrongdoing ex-ante, but was there enough to start to downplay and disassociate?
I’m not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
(This seems almost definitionally true: if something was clear evidence of fraud then it would just be described as “clear evidence of fraud”; describing something as a “rumor” seems to almost definitionally imply a substantial probability that the rumor is false or at least unclear or hard to update on.[1])
E.g. If I imagine a bank whose primary fraud detection mechanism was “hope the executives hear rumors of malfeasance,” I would not feel very satisfied with their risk management. If fraud did occur, I wouldn’t expect their primary process improvement to be “see if the executives could have updated from rumors better.” I am therefore somewhat confused by how much interest there seems to be in investigating how well the rumor mill worked for FTX.[2]
To be clear: I assume that the rumor mill could function more efficiently, and that there’s probably someone who heard “SBF is often overconfident” or whatever and could have updated from that information more accurately than they did. (If you’re interested in my experience, you can read my comments here.) I’m just very skeptical that a new and improved rumor mill is substantial protection against fraud, and don’t understand what an investigation could show me that would change my mind.[3] Moreover, even if I somehow became convinced that rumors could have been effective in the specific case of FTX, I will still likely be skeptical of their efficacy in the future.
Relatedly, I’ve heard people suggest that 80k shouldn’t have put SBF on their website given some rumors that were floating around. My take is that the base rate of criminality among large donors is high, having a rumor mill does not do very much to lower that rate, and so I expect to believe that the risk will be relatively high for high net worth people 80k puts on the front page in the future, and I don’t need an investigation to tell me that.
To make some positive suggestions about things I could imagine learning from/finding useful:
I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.[4]
At least, it seems like this should be our first port of call. Maybe we can’t actually implement industry best practices around risk management, but it feels like we should at least try before giving up and doing the rumor mill thing.
My understanding is that a bunch of work has gone into making regulations so that publicly traded companies are less likely to commit fraud, and these regulations are somewhat effective, but they are so onerous that many companies are willing to stay private and forgo billions of dollars in investment just to not have to deal with them. I suspect that EA might find itself in a similarly unfortunate situation where reducing risks from “prominent individuals” requires the individuals in question to do something so onerous that no one is willing to become “prominent.” I would be excited about research into a) whether this is in fact the case, and b) what to do about it, if so.
Some people probably disagree with my claim that rumor mills are ineffective. If so, research into this would be useful. E.g. it’s been on my backlog for a while to write up a summary of Why They Do It, or a fraud management textbook.
Why They Do It is perhaps particularly useful, given that one of its key claims is that, unlike with blue-collar crime, character traits don’t correlate well with propensity to commit white-collar crimes, and I think this may be a crux between me and people who disagree with me.
All that being said, I think I’m weakly in favor of someone more famous than me[5] doing some sort of write up about what rumors they heard, largely because I don’t expect the above to convince many people, and I think such a write up will mostly result in people realizing that the rumors were not very motivating.
Thanks to Chana Messinger for this point
One possible reason for this is that people are aiming for goals other than detecting fraud, e.g. they are hoping that rumors could also be used to identify other types of misconduct. I have opinions about this, but this comment is already too long so I’m not going to address it here.
e.g. I appreciate Nate writing this, but if in the future I learned that a certain person has spoken to Nate, I’m not going to update my beliefs about the likelihood of them committing financial misconduct very much (and I believe that Nate would agree with this assessment)
Part of why I haven’t prioritized this is that there aren’t a lot of earning to give companies anymore, but I think it’s still potentially worth someone spending time on this
I have done my own version of this, but my sense is that people (very reasonably) would prefer to hear from someone like Will
I feel like “people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed” is being classed as “rumour” here, which, whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word “rumour” conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if anyone in leadership argued “oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside”. That’s a signal someone is a bad leader in my view, which is useful knowledge going forward. (I’m not saying it is instant proof they should never hold leadership positions ever again: I think quite a lot of people might have said something like that in similar circumstances. But it is a bad sign.)
I agree with this.
I don’t really agree with this. Everyone has some probability of turning out to be dodgy; it matters exactly how strong the available evidence was. “This EA leader writes people off immediately when they have even a tiny probability of being untrustworthy” would be a negative update about the person’s decision-making too!
I took that second quote to mean ‘even if Sam is dodgy it’s still good to publicly back him’
I meant something in between “is” and “has a non-zero chance of being”, like assigning significant probability to it (obviously I didn’t have an exact number in mind), and not just for base rate reasons about believing all rich people to be dodgy.
Huh, the same reason you cite for why you are not interested in doing an investigation is one of the key reasons why I want an investigation.
It seems to me that current EA leadership is basically planning to continue a “our primary defense against bad actors is the rumor mill” strategy. Having an analysis of how that strategy did not work, and in some sense can’t work for things like this seems like it’s one of the things that would have the most potential to give rise to something better here.
Interesting! I’m glad I wrote this then.
Do you think “[doing an investigation is] one of the things that would have the most potential to give rise to something better here” because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives to not be adopted, even if found?
My current sense is that there is no motivation to find an alternative because people mistakenly think it works fine enough and so there is no need to try to find something better (and also in the absence of an investigation and clear arguments about why the rumor thing doesn’t work, people probably think they can’t really be blamed if the strategy fails again)
Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).
Is that correct? If so, I’m interested in understanding why – usually if you want to do a thing, the best approach is to just do that thing.
It seems to me that a case study of how exactly FTX occurred, and where things failed, would be among one of the best things to use to figure out what thing to do instead.
Currently the majority of people who have an interest in this are blocked by not really knowing what worked and didn’t work in the FTX case, and so probably will have trouble arguing compellingly for any alternative, and also lack some of the most crucial data. My guess is you might have the relevant information from informal conversations, but most don’t.
I do think also just directly looking for an alternative seems good. I am not saying that doing an FTX investigation is literally the very best thing to do in the world, it just seems better than what I see EA leadership spending their time on instead. If you had the choice between “figure out a mechanism detecting and propagating information about future adversarial behavior” and “do an FTX investigation”, I would feel pretty great about both, and honestly don’t really know which one I would prefer. As far as I can tell neither of these things is seeing much effort invested into it.
Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by “not really knowing what worked and didn’t work in the FTX case” – even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldn’t rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]
I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, and getting up to speed with it is time-consuming, and also ~all of the practices are based under assumptions like “the risk manager has some amount of formal authority” which aren’t true in EA.
(And to be clear: I think these are very big blockers! They just aren’t resolved by doing an investigation.)
Or maybe more specifically: would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud but were just implemented poorly last time, I would love to hear that!
Again, under the assumption that your goal is fraud detection. Investigations may be more or less useful for other goals.
It seems like a goal of ~”fraud detection” not further specified may be near the nadir of utility for an investigation.
If you go significantly narrower, then how EA managed (or didn’t manage) SBF fraud seems rather important to figuring out how to deal with the risk of similar fraudulent schemes in the future.[1]
If you go significantly broader (cf. Oli’s reference to “detecting and propagating information about future adversarial behavior”), the blockers you identify seem significantly less relevant, which may increase the expected value of an investigation.
My tentative guess is that it would be best to analyze potential courses of action in terms of their effects on the “EA immune system” at multiple points of specificity, not just close relations of a specific known pathogen (e.g., SBF-like schemes), a class of pathogens (e.g., “fraud”), or pathogens writ large (e.g., “future adversarial behavior”).
Given past EA involvement with crypto, and the base rate of not-too-subtle fraud in crypto, the risk of similar fraudulent schemes seems more than theoretical to me.
I think that would be worth exploring. I suspect you are correct that full Sarbanes-Oxley treatment would be onerous.
On the other hand, I don’t see how a reasonably competent forensic accountant or auditor could have spent more than a few days at FTX (or at Madoff) without having a stroke. Seeing the commingled bank accounts would have set alarm bells ringing in my head, at least. (One of the core rules of legal ethics is that you do not commingle your money with that of your clients, because experience teaches that all sorts of horrible things can and often do happen.)
I certainly don’t mean to imply that fraud against sophisticated investors and lenders is okay, but there is something particularly bad about straight-up conversion of client funds like at FTX/Madoff. At least where hedge funds and big banks are concerned, they have the tools and access to protect themselves if they so wish. Moreover, the link between the fraud and the receipt of funds is particularly strong in those cases—Enron was awash in fraud, but it wouldn’t be fair to say that a charity that received a grant from Enron at certain points in time was approximately and unknowingly in possession of stolen funds.
Thankfully, procedures meant to ferret out sophisticated Enron-style fraud shouldn’t be necessary to rule out most straight-up conversion schemes. Because of the risk that someone will rat the fraudsters out, my understanding is that the conspiracy usually is kept pretty small in these sorts of frauds. That imposes a real limit on how well the scheme will withstand even moderate levels of probing with auditor-level access.
If you want a reference class of similar frauds, here is the prosecution’s list of cases (after the Booker decision in 2005) with losses > $100MM and fraud type of Ponzi scheme, misappropriation, or embezzlement:
For example, one might be really skeptical if auditing red flags associated with prior frauds are present. Madoff famously had his audits done by a two-person firm that reported not conducting audits. FTX was better, but apparently still used “questionable” third-tier firms that “do audit a few public companies but none of the size or complexity of FTX.” Neither “the Armanino nor the Prager Metis audit reports for 2021 provides an opinion on the FTX US or FTX Trading internal controls over accounting and financial reporting”—and the audit reports tell the reader as much (same source). The article, written by an accounting lecturer at Wharton, goes on to describe other weirdness in the audit reports. Of course, that’s not foolproof—Enron had one of the then-Big Five accounting firms, for instance.
Catching all fraud is not realistic . . . for anyone, much less a charitable social movement. But some basic checks, to make fairly sure the major or whole basis for the company or the individual’s wealth is not a fraudulent house of cards, seem potentially attainable at a reasonable burden level.
I guess the question I have is, if the fraud wasn’t noticed by SBF’s investors, who had much better access to information and incentives to find fraud, why would anyone expect the recipients of his charitable donations to notice it? If it was a failure of the EA movement not to know that FTX was fraudulent, isn’t it many times more of a failure that the fraud was unnoticed by the major sophisticated investment firms that were large FTX shareholders?
I think investing in FTX was genuinely a good idea, if you were a profit maximizer, even if you strongly suspected the fraud. As Jason says, as an investor, losing money due to fraud isn’t any worse than losing money because a company fails to otherwise be profitable, so even assigning 20%-30% probability to fraud for a high-risk investment like FTX, where you are expecting >2x returns in a short number of years, will not make a huge difference to your bottom line.
In many ways you should expect being the kind of person who is willing to commit fraud to be positively associated with returns, because doing illegal and fraudulent things means that the people who run the organization take on massive risk where you are not exposed to the downside, but you are exposed to the upside. It’s not worth it to literally invest in fraud, but it is worth it to invest in the kind of company where the CEO is willing to go to prison, since you don’t really have any risk of going to prison, but you get the upside of the legal risk they take on (think of Uber blatantly violating laws until they established a new market, which probably exposed leadership to substantial legal risk, but investors just got to reap the profits).
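The expected-value point above can be made concrete with a toy calculation. All numbers here are illustrative assumptions (not actual FTX figures): a venture-style bet with a 40% chance of a 5x return, and a 25% chance the firm is a fraud that wipes out the stake.

```python
# Toy sketch of the investor's expected-value argument.
# All parameters are hypothetical, chosen only to illustrate the shape of the math.
def expected_multiple(p_success, success_multiple, p_fraud, fraud_recovery=0.0):
    """Expected return multiple when fraud can wipe out the stake.

    Fraud is modeled as an independent layer: with probability p_fraud the
    investor recovers only fraud_recovery times their stake, regardless of the
    business outcome; otherwise the ordinary success/failure lottery applies
    (failure pays 0).
    """
    honest_ev = p_success * success_multiple
    return p_fraud * fraud_recovery + (1 - p_fraud) * honest_ev

no_fraud = expected_multiple(0.4, 5.0, p_fraud=0.0)    # 2.0x expected
with_fraud = expected_multiple(0.4, 5.0, p_fraud=0.25)  # 1.5x expected
print(no_fraud, with_fraud)
```

Even a 25% fraud probability only drags the expected multiple from 2.0x to 1.5x, still well above break-even, which is one way to see why fraud risk alone wouldn’t necessarily deter a profit-maximizing investor in a high-variance bet.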
I wasn’t suggesting we should expect this fraud to have been found in this case with the access that was available to EA sources. (Perhaps the FTXFF folks might have caught the scent if they were forensic accountants—but they weren’t. And I’m not at all confident on that in any event.) I’m suggesting that, in response to this scandal, EA organizations could insist on certain third-party assurances in the future before taking significant amounts of money from certain sources.
Why the big money was willing to fork over nine figures each to FTX without those assurances is unclear to me. But one observation: as far as a hedge fund or lender is concerned, a loss due to fraud is no worse than a loss due to the invested-in firm being outcompeted, making bad business decisions, experiencing a general crypto collapse, getting shut down for regulatory issues, or any number of scenarios that were probably more likely ex ante than a massive conversion scheme. In fact, such a scheme might even be less bad to the extent that the firm thought it might get more money back in a fraud loss than from some ordinarily-business failure modes. Given my understanding that these deals often move very quickly, and the presence of higher-probability failure modes, it is understandable that investors and lenders wouldn’t have prioritized fraud detection.
In contrast, charitable grantees are much more focused in their concern about fraud; taking money from a solvent, non-fraudulent business that later collapses doesn’t raise remotely the same ethical, legal, operational, and reputational concerns. Their potential exposure in that failure mode is likely several times larger than that of the investors/lenders, after all non-financial exposures are considered. They are also not on a tight time schedule.
Re your footnote 4, CE/AIM are starting an earning-to-give incubation program, so that is likely to change pretty soon
Oh good point! That does seem to increase the urgency of this. I’d be interested to hear if CE/AIM had any thoughts on the subject.
Will MacAskill waited until April to speak fully and openly, on the extra-cautious advice of legal counsel. If that period has ended, such that Will could speak to the matter of the FTX collapse, and the before and after, whenever he wanted to, surely almost everyone else could do the same now. The barrier or objection of not talking on the strong advice of legal counsel seems like it’d be null for most people at this point.
Edit: in the 2 hours since I first made this comment, I’ve read most of the comments with arguments both for and against why someone should begin pursuing at least some parts of what could constitute an overall investigation as has been suggested. Finding the arguments for doing so far better than the arguments against, I have now decided to personally begin pursuing the below project. Anyone interested in helping or supporting me in that vein, please reply to this comment, or contact me privately. Any number of messages I receive along the lines of “I think this is a bad idea, I disagree with what you intend to do, I think this will be net negative, please don’t do this”, etc., absent other arguments, are very unlikely to deter me. On the contrary, if anything, such substanceless objections may motivate me to pursue this end with more vigour.
I’m not extremely confident I could complete an investigation of the whole of the EA community’s role in this regard at the highest level all by myself, though I am now offering to investigate or research parts of this myself. Here’s some of what I could bring to the table.
I’d be willing to do some relatively thorough investigation from a starting point of being relatively high-context. For those who wouldn’t think I’d be someone who knows a lot of context here, this short form post I made a while ago could serve as proof of concept I have more context than you might expect. I could offer more information, or answer more questions others have, in an attempt to genuinely demonstrate how much context I have.
I have very little time constraints compared to perhaps most individuals in the EA community who might be willing or able to contribute to some aspect of such an investigation. Already on my own time, I occasionally investigate issues in and around EA by myself. I intend to do so more in the future. I’d be willing to research more specific issues on my own time if others were to provide some direction. Some of what I might pursue further may be related to FTX anyway without urging from others.
I’d be willing to volunteer a significant amount of time, as I’m not currently working full-time and may not be for the foreseeable future. If the endeavour required a certain amount of work or progress within a set time frame, I might need to be hired in some capacity to complete some of the research or investigation. I’d be willing to accept such an opportunity as well.
Having virtually no conflicts of interest, there’s almost nothing anyone powerful in or around EA could hold over me to stop me from investigating.
I’m champing at the bit to make this happen probably about as much as anyone.
I would personally find the contents of any aspect of such an investigation to be extremely interesting and motivating.
I wouldn’t fear any retaliation whatsoever. Some attempts or threats to retaliate against me could indeed be advantageous for me: I am confident they would fail to achieve their desired goals, and would thus serve as evidence to others that any further such attempts would be futile wastes of effort.
I am personally in semi-regular contact, or have decent rapport, with some whistleblowers and individuals who retain private information about events in the whole FTX saga dating back to 2018. They, and other peers of theirs who have also exited the EA community in the last several years, may not be willing to talk freely with most individuals in EA who might participate in such an investigation. I am very confident at least some of them would be willing to talk to me.
I’m probably less nervous, i.e., more willing to be radically transparent and honest, about speaking up or out about anything EA-related than most people who have continuously participated in the EA community for over a decade. I suspect that includes even you and Oliver Habryka, who have already been noted in other comments here as among the least nervous in that cohort. Notably, that cohort may at this point number no more than a few hundred people.
The goal of such an investigation I’d be most motivated to accomplish is producing common-knowledge documents that help as large a subset of the EA community as possible, if not the whole community, learn what happened and what could be done differently in the future. I’d also be much more willing to share such a document widely than most other people who might be willing or able to produce one.