This seems like an incredibly obvious first step from my perspective, not something I’d have expected a community like EA to be dragging its heels on years after the fact.
We’re happy to sink hundreds of hours into fun “criticism of EA” contests, but when the biggest disaster in EA’s history manifests, we aren’t willing to pay even one investigator to review what happened so we can get the facts straight, begin to rebuild trust, and see if there’s anything we should change in response? I feel like I’m in crazytown; what the heck is going on?
Update Apr. 4: I’ve now spoken with another EA who was involved in EA’s response to the FTX implosion. To summarize what they said to me:
They thought that the lack of an investigation was primarily due to general time constraints and various exogenous logistical difficulties. At the time, they thought that setting up a team who could overcome the various difficulties would be extremely hard for mundane reasons such as:
thorough, even-handed investigations into sensitive topics are very hard to do (especially if you start out low-context);
this is especially true when they are vaguely scoped and potentially involve a large number of people across a number of different organizations;
“professional investigators” (like law firms) aren’t very well-suited to do the kind of investigation that would actually be helpful;
legal counsels were generally strongly advising people against talking about FTX stuff in general;
various old confidentiality agreements would make it difficult to discuss what happened in some relevant instances (e.g. meetings that had Chatham House Rules);
it would be hard to guarantee confidentiality in the investigation when info might be subpoenaed or something like that;
and a general plethora of individually-surmountable but collectively-highly-challenging obstacles.
They flagged that at the time, most people involved were already in an exceptionally busy and difficult time, and so had less bandwidth for additional projects than usual.
A caveat here is that the EV board did block some people from speaking publicly during the initial investigation into EV’s legal situation. That investigation ended back in the summer of 2023.
Julia Wise and Ozzie Gooen wrote on the EA Forum that this is a potentially useful project for someone to take on. As far as this person knew, no one in EA leadership did (or would) try to stop such a project, and their impression was that Julia and Ozzie did indeed try to investigate what reforms should happen, though the person I spoke to didn’t follow that situation closely.
The person I spoke to didn’t want to put words in the mouth of EA leaders, and their information is mostly from ~1 year ago and might be out of date. But to the extent some people aren’t currently champing at the bit to make this happen, their impression (with respect to the EA leaders they have interacted with relatively extensively) is that this has little to do with a desire to protect the reputation of EA or of individual EAs.
Rather, their impression is that for a lot of top EA leaders, this whole thing is a lot less interesting, because those EAs think they know what happened (and that it’s not that interesting). So the choice is like “should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?” And maybe they are underrating how interesting others would find it, but that made the whole idea not so important-seeming (at least in the early days after FTX’s collapse, relative to all the other urgent and confusing things swirling around in the wake of the collapse) from their perspective.
I vouch for this person as generally honest and well-intentioned. I update from the above that community leaders are probably less resistant to doing some kind of fact-finding inquiry than I thought. I’m hoping that this take is correct, since it suggests to me that it might not be too hard to get an SBF postmortem to happen now that the trial and the EV legal investigation are both over (and now that we’re all talking about the subject in the first place).
If the take above isn’t correct, then hopefully my sharing it will cause others to chime in with further objections, and I can zigzag my way to understanding what actually happened!
I shared the above summary with Oliver Habryka, and he said:
Hmm, I definitely don’t buy the “this has little to do with EA leadership desire to protect their reputation”. A lot of the reason for the high standards is for PR reasons.
I think people are like “Oh man, doing a good job here seems really hard, since doing it badly seems like it would be really costly reputation-wise. But if someone did want to put in the heroic effort to do a good enough job to not have many downsides, then yeah, I would be in favor of that. But that seems so hard to do that I don’t really expect it to happen.”
Like, the primary thing that seems to me to be the mistake is the standard to which any such investigation is being held before people consider it net-positive.
I’ll also share Ozzie Gooen’s Twitter take from a few days ago:
My quick guess (likely to be wrong!)
There are really only a few people “on top” of EA.
This would have essentially been “these top people investigating each other, and documenting that for others in the community.”
These people don’t care that much about being trusted in the community. They fund the community and have power;
the EA community really doesn’t have much power over the funders/leaders.
These people generally feel like they understand the problem well enough.
And, some corrections to my earlier posts about this:
I said that “there was a narrow investigation into legal risk to Effective Ventures last year”, which I think may have overstated the narrowness of the investigation a bit. My understanding is that the investigation’s main goal was to reduce EV’s legal exposure, but to that end the investigation covered a somewhat wider range of topics (possibly including things like COI policies), including things that might touch on broader EA mistakes and possible improvements. But it’s hard to be sure about any of this because details of the investigation’s scope and outcomes weren’t shared, and it doesn’t sound like they will be.
I said that Julia Wise had “been calling for the existence of such an investigation”; Julia clarifies on social media, “I would say I listed it as a possible project rather than calling for it exactly.”
Specifically, Julia Wise, Ozzie Gooen, and Sam Donald co-wrote a November 2023 blog post that listed “comprehensive investigation into FTX<>EA connections / problems” as one of four “projects and programs we’d like to see”, saying “these projects are promising, but they’re sizable or ongoing projects that we don’t have the capacity to carry out”. They also included this idea in a list of Further Possible Projects on EA Reform.
(I’m going to wrap a few disparate threads together here, and this will probably be my last comment on this post, ~modulo a reply for clarification’s sake. Happy to discuss further with you Rob, or anyone, via DMs/Forum Dialogue/whatever.)
(to Rob & Oli—there is a lot of inferential distance between us and that’s ok, the world is wide enough to handle that! I don’t mean to come off as rude/hostile and apologies if I did get the tone wrong)
Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet… I can’t help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don’t like the EA Forum? Because they’re scared of retaliation? It feels like this would be useful and important information for the community to know.
I’m also not sure what to make of Habryka’s response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but his response does come off to me as a “social experiment in radical honesty and perfect transparency”, which is a vibe I often get from the Lightcone-adjacent world. And like, with all due respect, I’m not really interested in that whole scene. I’m more interested in questions like:
Were any senior EAs directly involved in the criminal actions at FTX/Alameda?
What warnings were given about SBF to senior EAs before the FTX blowup, particularly around the 2018 Alameda blowup, as recounted here?
If these warnings were ignored, what prevented people from deducing that SBF was a bad actor?[1]
Critically, if these warnings were accepted as true, who decided to keep this a secret and to suppress it from the community at large, and not act on it?
Why did SBF end up with such a dangerous set of beliefs about the world? (I think they’re best described as ‘risky beneficentrism’ - see my comment here and Ryan’s original post here)
Why have the results of these investigations, or some legally-cleared version, not been shared with the community at large?
Do senior EAs have any plan to respond to the hit to EA-morale as a result of FTX and the aftermath, along with the intensely negative social reaction to EA, apart from ‘quietly hope it goes away’?
Writing it down, 2.b. strikes me as what I mean by ‘naive consequentialism’ if it happened. People had information that SBF was a bad character who had done harm, but calculated (or assumed) that he’d do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn’t really seem pertinent to me here, as interesting as the philosophical discussion can be.
tl;dr—I think there’s a difference between a discussion about what norms EA ‘should’ have, or what norms senior EAs should act by, especially in the post-FTX and influencing-AI-policy world, and the ‘minimal viable information-sharing’ that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming that, and I applaud you/Oli for pushing for it, but sometimes I wish you would both also be less vague. Some of us don’t have the EA history and context that you both do!
epilogue: I hope Rebecca is doing well. But this post & all the comments make me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post-FTX. Wounds might have faded, but they haven’t healed 😞
the choice is like “should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?”
I’m not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
(This seems almost definitionally true: if something was clear evidence of fraud then it would just be described as “clear evidence of fraud”; describing something as a “rumor” seems to almost definitionally imply a substantial probability that the rumor is false or at least unclear or hard to update on.[1])
E.g. if I imagine a bank whose primary fraud detection mechanism was “hope the executives hear rumors of malfeasance,” I would not feel very satisfied with their risk management. If fraud did occur, I wouldn’t expect their primary process improvement to be “see if the executives could have updated from rumors better.” I am therefore somewhat confused by how much interest there seems to be in investigating how well the rumor mill worked for FTX.[2]
To be clear: I assume that the rumor mill could function more efficiently, and that there’s probably someone who heard “SBF is often overconfident” or whatever and could have updated from that information more accurately than they did. (If you’re interested in my experience, you can read my comments here.) I’m just very skeptical that a new and improved rumor mill is substantial protection against fraud, and don’t understand what an investigation could show me that would change my mind.[3] Moreover, even if I somehow became convinced that rumors could have been effective in the specific case of FTX, I will still likely be skeptical of their efficacy in the future.
Relatedly, I’ve heard people suggest that 80k shouldn’t have put SBF on their website given some rumors that were floating around. My take is that the base rate of criminality among large donors is high, having a rumor mill does not do very much to lower that rate, and so I expect to believe that the risk will be relatively high for high net worth people 80k puts on the front page in the future, and I don’t need an investigation to tell me that.
To make some positive suggestions about things I could imagine learning from/finding useful:
I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.[4]
At least, it seems like this should be our first port of call. Maybe we can’t actually implement industry best practices around risk management, but it feels like we should at least try before giving up and doing the rumor mill thing.
My understanding is that a bunch of work has gone into making regulations so that publicly traded companies are less likely to commit fraud, and these regulations are somewhat effective, but they are so onerous that many companies are willing to stay private and forgo billions of dollars in investment just to not have to deal with them. I suspect that EA might find itself in a similarly unfortunate situation where reducing risks from “prominent individuals” requires the individuals in question to do something so onerous that no one is willing to become “prominent.” I would be excited about research into a) whether this is in fact the case, and b) what to do about it, if so.
Some people probably disagree with my claim that rumor mills are ineffective. If so, research into this would be useful. E.g. it’s been on my backlog for a while to write up a summary of Why They Do It, or a fraud management textbook.
Why They Do It is perhaps particularly useful, given that one of its key claims is that, unlike with blue-collar crime, character traits don’t correlate well with propensity to commit white-collar crimes, and I think this may be a crux between me and people who disagree with me.
All that being said, I think I’m weakly in favor of someone more famous than me[5] doing some sort of write up about what rumors they heard, largely because I don’t expect the above to convince many people, and I think such a write up will mostly result in people realizing that the rumors were not very motivating.
One possible reason for this is that people are aiming for goals other than detecting fraud, e.g. they are hoping that rumors could also be used to identify other types of misconduct. I have opinions about this, but this comment is already too long so I’m not going to address it here.
e.g. I appreciate Nate writing this, but if in the future I learned that a certain person has spoken to Nate, I’m not going to update my beliefs about the likelihood of them committing financial misconduct very much (and I believe that Nate would agree with this assessment)
Part of why I haven’t prioritized this is that there aren’t a lot of earning to give companies anymore, but I think it’s still potentially worth someone spending time on this
I feel like “people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed” is being classed as “rumour” here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word “rumour” conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if any one in leadership argued “oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside”. That’s a signal someone is a bad leader in my view, which is useful knowledge going forward. (I’m not saying it is instant proof they should never hold leadership positions ever again: I think quite a lot of people might have said something like that in similar circumstances. But it is a bad sign.)
I feel like “people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed” is being classed as “rumour” here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word “rumour” conjures.
I agree with this.
[...] I feel like we still want to know if any one in leadership argued “oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside”. That’s a signal someone is a bad leader in my view, which is useful knowledge going forward.
I don’t really agree with this. Everyone has some probability of turning out to be dodgy; it matters exactly how strong the available evidence was. “This EA leader writes people off immediately when they have even a tiny probability of being untrustworthy” would be a negative update about the person’s decision-making too!
I meant something in between “is” and “has a non-zero chance of being”, like assigning significant probability to it (obviously I didn’t have an exact number in mind), and not just for base rate reasons about believing all rich people to be dodgy.
I’m not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
Huh, the same reason you cite for why you are not interested in doing an investigation is one of the key reasons why I want an investigation.
It seems to me that current EA leadership is basically planning to continue an “our primary defense against bad actors is the rumor mill” strategy. Having an analysis of how that strategy did not work, and in some sense can’t work for things like this, seems like one of the things that would have the most potential to give rise to something better here.
Do you think “[doing an investigation is] one of the things that would have the most potential to give rise to something better here” because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives to not be adopted, even if found?
My current sense is that there is no motivation to find an alternative because people mistakenly think it works fine enough and so there is no need to try to find something better (and also in the absence of an investigation and clear arguments about why the rumor thing doesn’t work, people probably think they can’t really be blamed if the strategy fails again)
Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).
Is that correct? If so, I’m interested in understanding why – usually if you want to do a thing, the best approach is to just do that thing.
It seems to me that a case study of how exactly FTX occurred, and where things failed, would be among one of the best things to use to figure out what thing to do instead.
Currently the majority of people who have an interest in this are blocked by not really knowing what worked and didn’t work in the FTX case, and so probably will have trouble arguing compellingly for any alternative, and also lack some of the most crucial data. My guess is you might have the relevant information from informal conversations, but most don’t.
I do think also just directly looking for an alternative seems good. I am not saying that doing an FTX investigation is literally the very best thing to do in the world, it just seems better than what I see EA leadership spending their time on instead. If you had the choice between “figure out a mechanism detecting and propagating information about future adversarial behavior” and “do an FTX investigation”, I would feel pretty great about both, and honestly don’t really know which one I would prefer. As far as I can tell neither of these things is seeing much effort invested into it.
Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by “not really knowing what worked and didn’t work in the FTX case” – even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldn’t rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]
I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, and getting up to speed with it is time-consuming, and also ~all of the practices are based under assumptions like “the risk manager has some amount of formal authority” which aren’t true in EA.
(And to be clear: I think these are very big blockers! They just aren’t resolved by doing an investigation.)
Or maybe more specifically: I would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud but were just implemented poorly last time, I would love to hear that!
Again, under the assumption that your goal is fraud detection.
It seems like a goal of ~”fraud detection” not further specified may be near the nadir of utility for an investigation.
If you go significantly narrower, then how EA managed (or didn’t manage) SBF fraud seems rather important to figuring out how to deal with the risk of similar fraudulent schemes in the future.[1]
If you go significantly broader (cf. Oli’s reference to “detecting and propagating information about future adversarial behavior”), the blockers you identify seem significantly less relevant, which may increase the expected value of an investigation.
My tentative guess is that it would be best to analyze potential courses of action in terms of their effects on the “EA immune system” at multiple points of specificity, not just close relations of a specific known pathogen (e.g., SBF-like schemes), a class of pathogens (e.g., “fraud”), or pathogens writ large (e.g., “future adversarial behavior”).
Given past EA involvement with crypto, and the base rate of not-too-subtle fraud in crypto, the risk of similar fraudulent schemes seems more than theoretical to me.
I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.
I think that would be worth exploring. I suspect you are correct that full Sarbanes-Oxley treatment would be onerous.
On the other hand, I don’t see how a reasonably competent forensic accountant or auditor could have spent more than a few days at FTX (or at Madoff) without having a stroke. Seeing the commingled bank accounts would have sent alarm bells racing through my head, at least. (One of the core rules of legal ethics is that you do not commingle your money with that of your clients because experience teaches all sorts of horrible things can and often do happen.)
I certainly don’t mean to imply that fraud against sophisticated investors and lenders is okay, but there is something particularly bad about straight-up conversion of client funds like at FTX/Madoff. At least where hedge funds and big banks are concerned, they have the tools and access to protect themselves if they so wish. Moreover, the link between the fraud and the receipt of funds is particularly strong in those cases—Enron was awash in fraud, but it wouldn’t be fair to say that a charity that received a grant from Enron at certain points in time was approximately and unknowingly in possession of stolen funds.
Thankfully, procedures meant to ferret out sophisticated Enron-style fraud shouldn’t be necessary to rule out most straight-up conversion schemes. Because of the risk that someone will rat the fraudsters out, my understanding is that the conspiracy usually is kept pretty small in these sorts of frauds. That imposes a real limit on how well the scheme will withstand even moderate levels of probing with auditor-level access.
If you want a reference class of similar frauds, here is the prosecution’s list of cases (after the Booker decision in 2005) with losses > $100MM and fraud type of Ponzi scheme, misappropriation, or embezzlement:
For example, one might be really skeptical if auditing red flags associated with prior frauds are present. Madoff famously had his audits done by a two-person firm that reported not conducting audits. FTX was better, but apparently still used “questionable” third-tier firms that “do audit a few public companies but none of the size or complexity of FTX.” Neither “the Armanino nor the Prager Metis audit reports for 2021 provides an opinion on the FTX US or FTX Trading internal controls over accounting and financial reporting”—and the audit reports tell the reader as much (same source). The article, written by an accounting lecturer at Wharton, goes on to describe other weirdness in the audit reports. Of course, that’s not foolproof—Enron had one of the then-Big Five accounting firms, for instance.
Catching all fraud is not realistic . . . for anyone, much less a charitable social movement. But some basic checks to make fairly sure that the major or whole basis for the company’s or the individual’s wealth is not a fraudulent house of cards seem potentially attainable at a reasonable burden level.
I guess the question I have is, if the fraud wasn’t noticed by SBF’s investors, who had much better access to information and incentives to find fraud, why would anyone expect the recipients of his charitable donations to notice it? If it was a failure of the EA movement not to know that FTX was fraudulent, isn’t it many times more of a failure that the fraud was unnoticed by the major sophisticated investment firms that were large FTX shareholders?
I think investing in FTX was genuinely a good idea, if you were a profit maximizer, even if you strongly suspected the fraud. As Jason says, as an investor, losing money due to fraud isn’t any worse than losing money because a company fails to otherwise be profitable, so even assigning 20%-30% probability to fraud for a high-risk investment like FTX, where you are expecting >2x returns in a short number of years, will not make a huge difference to your bottom line.
In many ways you should expect being the kind of person who is willing to commit fraud to be positively associated with returns, because doing illegal and fraudulent things means that the people who run the organization take on massive risk where you are not exposed to the downside, but you are exposed to the upside. It’s not worth it to literally invest in fraud, but it is worth it to invest in the kind of company where the CEO is willing to go to prison, since you don’t really have any risk of going to prison yourself, but you get the upside of the legal risk they take on (think of Uber blatantly violating laws until it established a new market, which probably exposed leadership to substantial legal risk while investors just got to reap the profits).
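To make the expected-value arithmetic here concrete, here’s a minimal toy sketch in Python. The numbers are my own illustrative assumptions, not anything stated in the thread; the key assumption is the one above, that a dollar an investor loses to fraud is no worse than a dollar lost to ordinary business failure, so suspecting fraud only lowers expected value to the extent that it displaces the success scenario rather than the other failure scenarios.

```python
# Toy expected-value sketch with made-up numbers (illustrative only).
# Assumption: for the investor, a total loss to fraud is financially
# equivalent to a total loss from ordinary business failure.

def expected_multiple(p_fraud: float, p_other_failure: float,
                      success_multiple: float) -> float:
    """Expected return multiple on $1, treating fraud and ordinary
    failure as equivalent total losses (both contribute 0x)."""
    p_success = 1.0 - p_fraud - p_other_failure
    return p_success * success_multiple

# Baseline: no fraud suspected, 50% chance of ordinary failure, 3x on success.
print(expected_multiple(p_fraud=0.0, p_other_failure=0.5, success_multiple=3.0))   # 1.5

# Suspecting 25% fraud, where fraud mostly displaces other failure modes
# (a firm run this way was likely to blow up one way or another):
print(expected_multiple(p_fraud=0.25, p_other_failure=0.3, success_multiple=3.0))  # 1.35
```

Under these assumptions, a 25% fraud probability only shaves the expected multiple from 1.5x to 1.35x; the suspicion would matter far more if that probability mass came entirely out of the success scenario.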
I wasn’t suggesting we should expect this fraud to have been found in this case with the access that was available to EA sources. (Perhaps the FTXFF folks might have caught the scent if they were forensic accountants—but they weren’t. And I’m not at all confident on that in any event.) I’m suggesting that, in response to this scandal, EA organizations could insist on certain third-party assurances in the future before taking significant amounts of money from certain sources.
Why the big money was willing to fork over nine figures each to FTX without those assurances is unclear to me. But one observation: as far as a hedge fund or lender is concerned, a loss due to fraud is no worse than a loss due to the invested-in firm being outcompeted, making bad business decisions, experiencing a general crypto collapse, getting shut down for regulatory issues, or any number of scenarios that were probably more likely ex ante than a massive conversion scheme. In fact, such a scheme might even be less bad, to the extent that the firm thought it might get more money back in a fraud loss than from some ordinary-business failure modes. Given my understanding that these deals often move very quickly, and the presence of higher-probability failure modes, it is understandable that investors and lenders wouldn’t have prioritized fraud detection.
In contrast, charitable grantees are much more focused in their concern about fraud; taking money from a solvent, non-fraudulent business that later collapses doesn’t raise remotely the same ethical, legal, operational, and reputational concerns. Their potential exposure in that failure mode is likely several times larger than that of the investors/lenders, after all non-financial exposures are considered. They are also not on a tight time schedule.
legal counsels were generally strongly advising people against talking about FTX stuff in general
Will MacAskill waited until April to speak fully and openly, on the extra-cautious advice of legal counsel. If that period has ended to the point that Will could speak to the matter of the FTX collapse, and the before and after, as he had long wanted to, then surely almost everyone else could do the same now. The barrier or objection of not talking on the strong advice of legal counsel seems like it’d be null for most people at this point.
Edit: in the 2 hours since I first made this comment, I’ve read most of the comments, with arguments both for and against someone beginning to pursue at least some parts of what could constitute an overall investigation, as has been suggested. Finding the arguments for doing so far better than the arguments against, I have now decided to personally begin pursuing the project below. Anyone interested in helping or supporting me in that vein, please reply to this comment, or contact me privately. Any number of messages I receive along the lines of “I think this is a bad idea, I disagree with what you intend to do, I think this will be net negative, please don’t do this”, etc., absent other arguments, are very unlikely to deter me. On the contrary, if anything, such substanceless objections may motivate me to pursue this end with more vigour.
I’m not extremely confident I could complete an investigation of the whole of the EA community’s role in this regard at the highest level all by myself, though I am now offering to investigate or research parts of this myself. Here’s some of what I could bring to the table.
I’d be willing to do some relatively thorough investigation from a starting point of being relatively high-context. For those who wouldn’t think I’d be someone who knows a lot of context here, this shortform post I made a while ago could serve as proof of concept that I have more context than you might expect. I could offer more information, or answer more questions others have, in an attempt to genuinely demonstrate how much context I have.
I have very few time constraints compared to perhaps most individuals in the EA community who might be willing or able to contribute to some aspect of such an investigation. Already, on my own time, I occasionally investigate issues in and around EA by myself. I intend to do so more in the future. I’d be willing to research more specific issues on my own time if others were to provide some direction. Some of what I might pursue further may be related to FTX anyway, without urging from others.
I’d be willing to volunteer a significant amount of time doing so, as I’m not currently working full-time and may not be working full-time in the foreseeable future. If the endeavour required a certain amount of work or progress achieved within a certain time frame, I may need to be hired in some capacity to complete some of the research or investigating. I’d be willing to accept such an opportunity as well.
Having virtually no conflicts of interest, there’s almost nothing anyone powerful in or around EA could hold over me to attempt to stop me from trying to investigate.
I’m champing at the bit to make this happen probably about as much as anyone.
I would personally find the contents of any aspect of such an investigation to be extremely interesting and motivating.
I wouldn’t fear any retaliation whatsoever. Some attempts or threats to retaliate against me could indeed be advantageous for me, as I am confident they would fail to achieve their desired goals, and thus serve as evidence to others that any further such attempts would be futile wastes of effort.
I am personally in semi-regular contact or have decent rapport with some whistleblowers or individuals who retain private information about events related to the whole saga of FTX dating back to 2018. They, or their other peers who’ve also exited the EA community in the last several years, may not be willing to talk freely with most individuals in EA who might participate in such an investigation. I am very confident at least some of them would be willing to talk to me.
I’m probably less nervous about speaking up or out about anything EA-related (i.e., more willing to be radically transparent and honest) than most people who have continuously participated in the EA community for over a decade. I suspect that includes even you and Oliver Habryka, who have already been noted in other comments here as among the least nervous in that cohort. Notably, that cohort may at this point comprise no more than a few hundred people.
Producing common-knowledge documents that help as large a subset of the EA community as possible, if not the whole community, learn what happened and what could be done differently in the future would be the goal of any such investigation that I’d be most motivated to accomplish. I’d be much more willing to share such a document widely than most other people who might be willing or able to produce one.
I haven’t heard any arguments against doing an investigation yet, and I could imagine folks might be nervous about speaking up here. So I’ll try to break the ice by writing an imaginary dialogue between myself and someone who disagrees with me.
Obviously this argument may not be compelling compared to what an actual proponent would say, and I’d guess I’m missing at least one key consideration here, so treat this as a mere conversation-starter.
Hypothetical EA: Why isn’t EV’s 2023 investigation enough? You want us to investigate; well, we investigated.
Me: That investigation was only looking into legal risk to EV. Everything I’ve read (and everything I’ve heard privately) suggests that it wasn’t at all trying to answer the question of whether the EA community made any moral or prudential errors in how we handled SBF over the years. Nor was it trying to produce common-knowledge documents (either private or public) to help any subset of EA understand what happened. Nor was it trying to come up with any proposal for what we should do differently (if anything) in the future.
I take it as fairly obvious that those are all useful activities to carry out after a crisis, especially when there was sharp disagreement, within EA leadership, long before the FTX implosion, about how we should handle SBF.
Hypothetical EA: Look, I know there’s been no capital-I “Investigation”, but plenty of established EAs have poked around at dinner parties and learned a lot of the messy complicated details of what happened. My own informal poking around has convinced me that no EAs outside FTX leadership did anything super evil or Machiavellian. The worst you can say is that they muddled along and had miscommunications and brain farts like any big disorganized group of humans, and were a bit naively over-trusting.
Me: Maybe! But scattered dinner conversation with random friends and colleagues, with minimal following up or cross-checking of facts, isn’t the best medium for getting an unbiased picture of what happened. People skew the truth, withhold info, pass the blame ball around. And you like your friends, so you’re eager to latch on to whatever story shows they did an OK job.
Perhaps your story is true, but we shouldn’t be scared of checking, applying the same level of rigor we readily apply to everything else we’re doing.
The utility of this doesn’t require that any EAs be Evil. A postmortem is plenty useful in a world where we were “too trusting” or were otherwise subject to biases in how we thought, or how we shared information and made group decisions — so we can learn from our mistakes and do better next time.
And if we’ve historically been “too trusting”, it seems doubly foolish to err on the side of trusting every individual, institution, and process involved in the EA-SBF debacle, and write them a preemptive waiver for all the errors we’re studiously avoiding checking whether they’ve made.
Hypothetical EA: Look, there’s just no reason to use SBF in particular for your social experiment in radical honesty and perfect transparency. It was to some extent a matter of luck that SBF succeeded as well as he did, and that he therefore had an opportunity to cause so much harm. If there were systemic biases in EA that caused us to err here, then those same biases should show up in tons of other cases too.
The only reason to single out the SBF case in particular and give it 1000x more attention than everything else is that it’s the most newsworthy EA error.
But the main effect of this is to inflate and distort minor missteps random EA decision-makers made, bolstered by the public’s hindsight bias and cancel culture and by journalists’ axe-grinding, so that the smallest misjudgments an EA makes look like horrific unforgivable sins.
SBF is no more useful for learning about EA’s causal dynamics than any other case (and in fact SBF is an unusually bad place to try to learn generalizable lessons, because the sky-high stakes will cause people to withhold key evidence and/or bend the truth toward social desirability); it’s only useful as a bludgeon, if you came into all this already sure that EA is deeply corrupt (or that particular individuals or orgs are), and you want to summon a mob to punish those people and drive them from the community.
(Or, alternatively, if you’re sad about EA’s bad reputation and you want to find scapegoats: find the specific Bad EAs and drive them out, to prove to the world that you’re a Good EA and that EA-writ-large is now pure.)
Me: I find that argument somewhat compelling, but I still think an investigation would make sense.
First, extreme cases can often illustrate important causal dynamics that are harder to see in normal cases. E.g., if EA has a problem like “we fudge the truth too much”, this might be hard to detect in low-stakes cases where people have less incentive to lie. People’s behavior when push comes to shove is important, given the huge impact EA is trying to have on the world; and SBF is one huge instance where push came to shove and our character was really tested.
And, yes, some people may withhold information more because of the high stakes. But others will be much more willing to spend time on this question because they recognize it as important. If nothing else, SBF is a Schelling point for us all to direct our eyes at the same thing simultaneously, and see if we can converge on some new truths about the world.
Second, and moving away from abstractions to talk about the specifics of this case: My understanding is that a bunch of EAs tried to warn the community that SBF was extremely shady, and a bunch of other EAs apparently didn’t believe the warnings, or didn’t want those warnings widely shared even though they believed them.
“SBF is extremely shady” isn’t knowledge that FTX was committing financial fraud, and shouting “SBF is extremely shady” from the hills wouldn’t necessarily have prevented the fraud from happening. But there’s some probability it might have been the tipping point at various important junctures, as potential employees and funders and customers weighed their options. And even if it wouldn’t have helped at all in this case, it’s good to share that kind of information in case it helps the next time around.
I think it would be directly useful to know what happened to those warnings about SBF, so we can do better next time. And I think it would also help restore a lot of trust in EA (and a lot of internal ability for EAs to coordinate with each other) if people knew what happened — if we knew which thought leaders or orgs did better or worse, how processes failed, how people plan to do better next time.
I recognize that this will be harder in some ways with journalists and twitter users breathing down your necks. And I recognize that some people may suffer unfair scrutiny and criticism because they were in the wrong place at the wrong time. To some extent I just think we need to eat that cost; when you’re playing chess with the world and making massively impactful decisions, that comes with some extra responsibility to take a rare bit of unfair flack for the sake of being able to fact-find and orient at all about what happened. Hopefully the fact that some time has passed, and that we’re looking at a wide variety of people and orgs rather than a specific singled-out individual, will mitigate this problem.
If FTX were a total bolt out of the blue, that would be one thing. But apparently there were rather a lot of EAs who thought SBF was untrustworthy and evil, and had lots of evidence on hand to cite, at the exact same time 80K and Will and others were using their megaphones to broadcast that SBF is an awesome EA hero. I don’t know that 80K or Will in particular are the ones who fucked up here, but it seems like somebody fucked up in order for this perception gap to exist and go undiscussed.
I understand people having disagreements about someone’s character. Hindsight bias is a thing, and I’m sure people had reasons at the time to be skeptical of some of the bad rumors about SBF. But I tend to think those disagreements should be things that are argued about rather than kept secret. Especially if the secret conversations empirically have not resulted in the best outcomes.
Hypothetical EA: I dunno, this whole “we need a public airing out of our micro-sins in order to restore trust” thing sounds an awful lot like the exact “you’re looking for scapegoats” thing I was warning about.
You’re fixated on this idea that EAs did something Wrong and need to be chastised and corrected, like we’re perpetrators alongside SBF. On the contrary, I claim that the non-FTX EAs who interacted the most with Sam should mostly be thought of as additional victims of Sam: people who were manipulated and mistreated, who often saw their livelihoods threatened as a result and their life’s work badly damaged or destroyed.
The policies you’re calling for amount to singling out and re-victimizing many of Sam’s primary victims, in the name of pleasant-sounding abstractions like Accountability — abstractions that have little actual consequentialist value in this case, just a veneer of “that sounds nice on paper”.
Me: It’s unfortunately hard for me to assess the consequentialist value in this case, because no investigation has taken place. I’ve gestured at some questions I have above, but I’m missing most of the pieces about what actually happened, and some of the unknown unknowns here might turn out to swamp the importance of what I know about. It’s not clear to me that you know much more than me, either. Rather than pitting your speculation against mine, I’d rather do some actual inquiry.
Hypothetical EA: I think we already know enough, including from the legal investigation into Sam Bankman-Fried and who was involved in his conspiracy, to make a good guess that re-victimizing random EAs is not a useful way for this movement to spend its time and energy. The world has many huge problems that need fixing, and it’s not as though EA’s critics are going to suddenly conclude that EAs are Good After All if we spill all of our dirty laundry. What will actually happen is that they’ll cherry-pick and distort the worst-sounding tidbits, while ignoring all the parts you hoped would be “trust-restoring”.
Me: Some EA critics will do that, sure. But there are plenty of people, both within EA and outside of it, who legitimately just want to know what happened, and will be very reassured to have a clearer picture of the basic sequence of events, which orgs did a better or worse job, which processes failed or succeeded. They’ll also be reassured to know that we know what happened, vs. blinding ourselves to the facts and to any lessons they might contain.
Or maybe they’ll be horrified because the details are actually awful (ethically, not legally). Part of being honest is taking on the risk that this could happen too. That’s just not avoidable. If we’re not the sort of community that would share bad stuff if it were true, then people are forced to be that much more worried that we’re in fact hiding a bunch of bad stuff.
Hypothetical EA: I just don’t think there’s that much crucial information EA leaders are missing, from their informal poking around. You can doubt that, but I don’t think a formal investigation would help much, since people who don’t want to speak now will (if anything) probably be even more tight-lipped in the face of what looks like a witch-hunt.
You say that EAs have a responsibility to jump through a bunch of transparency hoops. But whether or not you agree with my “EAs are victims” frame: EAs don’t owe the community their lives. If you’re someone who made personal sacrifices to try to make the world a better place, that doesn’t somehow come with a gotcha clause where you now have incurred a huge additional responsibility that we’d never impose on ordinary private citizens, to dump your personal life into the public Internet.
Me: I don’t necessarily disagree with that, as stated. But I think particular EAs are signing up for some extra responsibility, e.g., when they become EA leaders and ask for a lot of trust on the part of their community.
I wouldn’t necessarily describe that responsibility as “huge”, because I don’t actually think a basic investigation into the SBF thing is that unusual or onerous.
I don’t see myself as proposing anything all that radical here. I’m even open to the idea that we might want to redact some names and events in the public recounting of what happened, to protect the innocent. I don’t see anything weird about that; what strikes me as puzzling is the complete absence of any basic fact-finding effort (beyond the narrow-scope EV legal inquiry).
And what strikes me as doubly puzzling is that there hasn’t even been a public statement that CEA and others are not planning to look into this at all, nor has there been any public argument for that policy — whence this dialogue. It’s as though EAs are just hoping we’ll quietly forget about this pretty major omission, so they don’t have to say anything potentially controversial. That I don’t really respect; if you think this investigation is a bad idea, do the EA thing and make your case!
Hypothetical EA: Well, hopefully my arguments have given you some clues about the (non-nefarious) reasons why EAs might want to quietly let this thing die, rather than giving a big public argument for letting it die. In addition to the obvious fact that folks are just very busy, and more time spent on this means less time spent on a hundred other things.
Me: And hopefully my arguments have helped remind some folks that things are sometimes worth doing even when they’re hard.
All the arguments in the world don’t erase the fact that at the end of the day, we have a choice between taking risks for the sake of righting our wrongs and helping people understand what happened, versus hiding from the light of day and quietly hoping that no one calls us out for retreating from our idealistic-sounding principles.
We have a choice between following the path of least resistance into ever-murkier, ever-more-confusing, ever-less-trusting waters; or taking a bold stand and doing whatever we can to give EAs and non-EAs alike real insight into what happened, and a real capacity to adjust course if and only if some course-changing is warranted.
There are certainly times when the boring, practical, un-virtuous-sounding option really is the right option. I don’t think this is one of those times; I think we need to be better than that this one time, or we risk losing by a thousand cuts some extremely precious things that used to be central to what made EA EA.
… And if you disagree with me about all that, well, tell me why I’m wrong.
I think I agree with Hypothetical EA that we basically know the broad picture.
Probably nobody was actually complicit or knew there was fraud; and
Various people made bad judgement calls and/or didn’t listen to useful rumours about Sam
I guess I’m just… satisfied with that? You say:
But there are plenty of people, both within EA and outside of it, who legitimately just want to know what happened, and will be very reassured to have a clearer picture of the basic sequence of events, which orgs did a better or worse job, which processes failed or succeeded.
…why? None of this seems that important to me? Most of it seems like a matter for the person/org in question to reflect/improve on. Why is it important for “plenty of people” to learn this stuff, given we already know the broad picture above?
I would sum up my personal position as:
We got taken for a ride, so we should take the general lesson to be more cautious of charismatic people with low scruples, especially bearing large sums of money.
If you or your org were specifically taken for a ride you should reflect on why that happened to you and why you didn’t listen to the people who did spot what was going on.
EA is compelling insofar as it is about genuinely making the world a better place, ie we care about the actual consequences. Just because there are probably no specific people/processes to blame, doesn’t mean we should be satisfied with how things are.
There is now decent evidence that EA might cause considerable harm in the world, so we should be strongly motivated to figure out how to change that. Maybe EA’s failures are just the cost of ambition and agency, and come along with the good it does, but I think that’s both untrue and worryingly defeatist.
I care about the end result of all of this, and the fact that we’re okay with some serious Ls happening (and not being willing to fix the root cause of those errors) is concerning.
Maybe we should—after this question of investigation or not has been discussed in more detail—organize a community-wide vote on whether there should be an investigation or not?
Knowing what people think is useful, especially if it’s a non-anonymous poll aimed at sparking conversations, questions, etc. (One thing that might help here is to include a field for people to leave a brief explanation of their vote, if the polling software allows for it.)
Anonymous polls are a bit trickier, since random people on the Internet can easily brigade such a poll. And I wouldn’t want to assume that something’s a good idea just because most EAs agree with it; I’d rather focus on the arguments for and against.
“Just focus on the arguments” isn’t a decision-making algorithm, but I think informal processes like “just talk about it and individually do what makes sense” perform better than rigid algorithms in cases like this.
If we want something more formal, I tend to prefer approaches like “delegate the question to someone trustworthy who can spend a bunch of time carefully weighing the arguments” or “subsidize a prediction market to resolve the question” over “just run an opinion poll and do whatever the majority of people-who-see-the-poll vote for, without checking how informed or wise the respondents are”.
The question of a community-wide vote, on any level, about whether there should be such an investigation may at this point be moot. I have personally offered to begin conducting significant parts of such an investigation myself. Since I made that initial comment, I’ve read several more comments providing arguments against the need for, or desirability of, such an investigation. Having found them unconvincing, I now intend to privately contact at least several individuals—both in and around the EA movement, and some outside of it or who no longer participate in the EA community—to pursue that end. Something like a community-wide vote, or some proxy like even dozens of effective altruists trying to talk me out of it, would be unlikely to convince me not to do so.
I disagree, and in this case I don’t think the forum team should have a say in the matter. Each user has their own interpretation of the upvote/downvote button and that’s ok. Personally I don’t use it as “I disagree” but rather as “this comment shouldn’t have been written”, but there’s certainly a correlation. For instance, I both disagree-voted and downvoted your comment (since I dislike the attempt to police this).
Update Apr. 15: I talked to a CEA employee and got some more context on why CEA hasn’t done an SBF investigation and postmortem. In addition to the ‘this might be really difficult and it might not be very useful’ concern, they mentioned that the Charity Commission investigation into EV UK is still ongoing a year and a half later. (Google suggests that statutory inquiries by the Charity Commission take an average of 1.2 years to complete, so the super long wait here is sadly normal.)
Although the Commission has said “there is no indication of wrongdoing by the trustees at this time”, and the risk of anything crazy happening is lower now than it was a year and a half ago, I gather that it’s still at least possible that the Commission could take some drastic action like “we think EV did bad stuff, so we’re going to take over the legal entity that includes the UK components of CEA, 80K, GWWC, GovAI, etc.”, which may make it harder for CEA to usefully hold the steering wheel on an SBF investigation at this stage.
Example scenario: CEA tries to write up some lessons learned from the SBF thing, with an EA audience in mind; EAs tend to have unusually high standards, and a CEA staffer writes a comment that assumes this context, without running the comment by lawyers because it seemed innocent enough; because of those high standards, the Charity Commission misreads the CEA employee as implying a way worse thing happened than is actually the case.
This particular scenario may not be a big risk, but the sum of the risk of all possible scenarios like that (including scenarios that might not currently be on their radar) seems non-negligible to the CEA person I spoke to, even though they don’t think there’s any info out there that should rationally cause the Charity Commission to do anything wild here. And trying to do serious public reflection or soul-searching while carefully nitpicking every sentence for possible ways the Charity Commission could misinterpret it doesn’t seem like an optimal set-up for deep, authentic, and productive soul-searching.
The CEA employee said that they thought this is one reason (but not the only reason) EV is unlikely to run a postmortem of this kind.
My initial thoughts on all this: This is very useful info! I had no idea the Charity Commission investigation was still ongoing, and if there are significant worries about that, that does indeed help make CEA and EV’s actions over the last year feel a lot less weird-and-mysterious to me.
I’m not sure I agree with CEA or EV’s choices here, but I no longer feel like there’s a mystery to be explained here; this seems like a place where reasonable people can easily disagree about what the right strategy is. I don’t expect the Charity Commission to in fact take over those organizations, since as far as I know there’s no reason to do that, but I can see how this would make it harder for CEA to do a soul-searching postmortem.
I do suspect that EV and/or CEA may be underestimating the costs of silence here. I could imagine a frog-boiling problem arising here, where it made sense to delay a postmortem for a few months based on a relatively small risk of disaster (and a hope that the Charity Commission investigation in this case might turn out to be brief), but it may not make sense to continue to delay in this situation for years on end. Both options are risky; I suspect the risks of inaction and silence may be getting systematically under-weighted here. (But it’s hard to be confident when I don’t know the specifics of how these decisions are being made.)
I ran the above by Oliver Habryka, who said:
“I talked to a CEA employee and got some more context on why CEA hasn’t done an SBF investigation and postmortem.”
Seems like it wouldn’t be too hard for them to just advocate for someone else doing it?
Or to just have whoever is leading the investigation leave the organization.
In general it seems to me that an investigation is probably best done in a relatively independent vehicle anyways, for many reasons.
“My thoughts on all this: This is very useful info! I had no idea the Charity Commission investigation was still ongoing, and that does indeed help make CEA and EV’s actions over the last year feel a lot less weird-and-mysterious to me.”
Agree that this is an important component (and a major component for my models).
I have some information suggesting that maybe Oliver and/or the CEA employee’s account is wrong, or missing part of the story? But I’m confused about the details, so I’ll look into things more and post an update here if I learn more.
The pendency of the CC statutory inquiry would explain hesitancy on the part of EVF UK or its projects to conduct or cooperate with an “EA” inquiry. A third-party inquiry is unlikely to be protected by any sort of privilege, and the CC may have means to require or persuade EVF UK to turn over anything it produced in connection with a third-party “EA” inquiry. However, it doesn’t seem that this should be an impediment to proceeding with other parts of an “EA inquiry,” especially to the extent this would be done outside the UK.
However, in the abstract—if any charity’s rationale for not being at least moderately open and transparent with relevant constituencies and the public is “we are afraid the CC will shut us down,” that is a charity most people would run away from fast, and for good reason. If the choice is between having a less-than “soul-searching postmortem” or none at all, I’ll take the former. Also, I strongly suspect everything EVF has said about the whole FTX situation has been vetted by lawyers, so the idea that someone is going to write an “official” postmortem without legal vetting is doubtful. Finally, I worry the can is going to continue being kicked down the road until EVF is far into the process of being dismantled, at which time the rationale may evolve into “we’re disbanding anyway, what’s the point?”
if any charity’s rationale for not being at least moderately open and transparent with relevant constituencies and the public is “we are afraid the CC will shut us down,” that is a charity most people would run away from fast, and for good reason
I do think a subtext of the reported discussion above is that the CC is not considered to be a necessarily trustworthy or fair arbiter here. “If we do this investigation then the CC may see things and take them the wrong way” means you don’t trust the CC to take them the right way. Now, I have no idea whether that is justified in this case, but it’s pretty consistent with my impression of government bureaucracies in general.
So it perhaps comes down to whether you previously considered the charity or the CC more trustworthy. In this case I think I trust EVF more.
I trust EV more than the charity commission about many things, but whether EV behaved badly over SBF is definitely not one of them. One judgment here is incredibly liable to distortion through self-interest and ego preservation, and it’s not the charity commission’s. (That’s not a prediction that the charity commission will in fact harshly criticize EV. I wouldn’t be surprised either way on that.)
When I looked at past CC actions, I didn’t get the impression that they were in the habit of blowing things out of proportion. But of course I didn’t have the full facts of each investigation.
One reason I don’t put much stock in the possibility that the CC is not a “necessarily trustworthy or fair arbiter” is that it has to act with reasoning transparency because it is accountable to a public process. Its substantive actions (as opposed to issuing warnings) are reviewable in the UK courts, in proceedings where the charity—a party with the right knowledge and incentives—can call them out on dubious findings. The CC may not fear litigation in the same sense that a private entity might, but an agency’s budget/resources don’t generally go up because it is sued, and agencies tend not to seek to create extra work for themselves for the thrill of it.
Moreover, the rationale of non-disclosure due to CC concerns operates at the margin. “There are particular things we shouldn’t disclose in public because the CC might badly misinterpret those statements” is one thing. “There is nothing else useful we can disclose, because every further detail poses an unacceptable risk of the CC badly misinterpreting it” is another.
I have already personally decided to begin pursuing, myself, inquiries and research that would constitute at least some aspects of the sort of investigation in question. Much of what I generally have in mind, and in particular what I’d be most capable of doing myself, would be unrelated to EVF UK. If it’d make things easier, I’m amenable to avoiding probing in ways that intersect with EVF UK until the CC inquiry has ended (this probably wouldn’t include EVF USA). Two reasons I will be expediting this project: EVF is in the process of disbanding, which would complicate any part of such an investigation, and another major EA organization is likely in the process of launching an earning-to-give incubator/training organization.
Not to state the obvious but the ‘criticism of EA’ posts didn’t pose a real risk to the power structure. It is uhhhhh quite common for ‘criticism’ to be a lot more encouraged/tolerated when it isn’t threatening.
I mostly agree with this, and upvoted strongly, but I don’t think the scare quotes around “criticism” are warranted. Improving ideas and projects through constructive criticism is not the same thing as speaking truth to power, but it is still good and useful; it’s just a different good and useful thing.
I’m against doing further investigation. I expressed why I think we have already spent too much time on this here.
I also think your comments are falling into the trap of referring to “EA” like it was an entity. Who specifically should do an investigation, and who specifically should they be investigating? (This less monolithic view of EA is also part of why I don’t feel as bothered by the whole thing: so maybe some people in “senior” positions made some bad judgement calls about Sam. They should maybe feel bad. I’m not sure we should feel much collective guilt about that.)
While recognizing the benefits of the anti-”EA should” taboo, I also think it has some substantial downsides and needs to be invoked after consideration of the specific circumstances at hand.
One downside is that the taboo can impose significant additional burdens on a would-be poster, discouraging them from posting in the first place. If it takes significant time investment to write “X should be done,” it is far from certain others will agree, and then additional significant time to figure out/write “and it should be done by Y,” then the taboo would require someone who wants to write the former to invest in writing the latter before knowing if the former will get any traction. Being okay with the would-be poster deferring certain subquestions (like “who”) means that effort can be saved if there’s not enough traction on the basic merits.
Another downside is that a would-be poster may have expertise, knowledge, or resources relevant to part of a complex question. If we taboo efforts by those who can only answer some of the issues effectively, we will lose the benefit of their insight.
I also think your comments are falling into the trap of referring to “EA” like it was an entity. Who specifically should do an investigation,
I don’t think that is an appropriate burden to place on someone writing a post or comment calling for an investigation. I think it would block anyone without a fair deal of “insider-ish” knowledge from ever making the case for an investigation:
This isn’t a do-ocracy project. Doing it properly is not going to be cheap (e.g., hiring an investigative firm), and so ability to get funded for this is a prerequisite. Expecting a Forum commenter to know who could plausibly get funding is a bit much. To the extent that that is a reasonable expectation, we would also expect the reader to know that—so it is a minor defect. To the extent that who could get funded is a null set, then bemoaning a perceived lack of willingness to invest in a perceived important issue in ecosystem health is a valid post.
Even apart from this, whoever was running the investigation would need to secure the cooperation of organizations and individuals one way or another. That cooperation could flow through the investigation sponsor’s own standing in the community (e.g., that ~everyone trusted them to give them a fair shake), and/or through funders/other powers putting their heft behind the investigation (e.g., that documented refusal to cooperate would likely have material adverse consequences).
and who specifically should they be investigating?
Many good investigations do not have a specific list of people/entities who are the target of investigatory concern at the outset. They have a list of questions, and a good sense of the starting points for inquiry (and figuring out where other useful information lies). If I were trying to gain a better understanding of EA-aligned people/orgs’ interactions with SBF, I think some of the starting points are obvious.
Moreover, a higher level of specificity strikes me as potentially infohazardous for the Forum. Whatever might be said of the costs and benefits of circulating ~rumors on a publicly-accessible Forum to guard the community against future misconduct and non-malicious problematic conduct, the cost/benefit assessment feels more doubtful when the focus is more on certain forms of past problematic conduct. Even if Rob had solid hunches as to whose actions should be probed more significantly, it’s not clear that it would be net-positive for him to name names here. Given that, I am very hesitant to endorse any norm that puts a thumb on the scale by creating an expectation that a poster will publicly release information whose public disclosure may well have a net negative impact.
Thanks, I think this is all right. I think I didn’t write what I meant. I want more specificity, but I do agree with you that it’s wrong to expect full specificity (and that’s what I sounded like I was asking for).
What I want is something more like “CEA should investigate the staff of EVF for whether they knew about X and Y”, not “Alice should investigate Bob and Carol for whether they knew about X and Y”.
I do think that specificity raises questions, and that this can be a good thing. I agree that it’s not reasonable for someone to work out e.g. exactly where the funding comes from, but I do think it’s reasonable for them to think in enough detail about what they are proposing to realise that a) it will need funding, b) possibly quite a lot of funding, c) this trades off against other uses of the money, so d) what does that mean for whether this is a good idea. Whereas if “EA” is going to do it, then we don’t need to worry about any of those things. I’m sure someone can just do it, right?
I agree that it’s not reasonable for someone to work out e.g. exactly where the funding comes from, but I do think it’s reasonable for them to think in enough detail about what they are proposing to realise that a) it will need funding, b) possibly quite a lot of funding, c) this trades off against other uses of the money, so d) what does that mean for whether this is a good idea. Whereas if “EA” is going to do it, then we don’t need to worry about any of those things. I’m sure someone can just do it, right?
I am at least one such someone: I not only can do it, but have already decided that I will at least begin doing it. To that end, for myself or perhaps even others, there are already some individuals I have in mind to begin contacting who may be willing to provide at least a modicum of funding, or who would know others who might be willing to do so. In fact, I have already begun that process.
There wouldn’t be a tradeoff with other uses of at least some of that money: I’m confident at least some of those individuals would not otherwise donate or use that money to support, e.g., some organization affiliated with, or charity largely supported by, the EA community. (That’s because some of the individual funders in question are not effective altruists.) While I agree it may not be a good idea for EA as a whole to go about this in some quasi-official way, I’ve concluded there aren’t any particularly strong arguments made yet against the sort of “someone” you had in mind doing so.
While recognizing the benefits of the anti-”EA should” taboo, I also think it has some substantial downsides and needs to be invoked after consideration of the specific circumstances at hand.
One downside is that the taboo can impose significant additional burdens on a would-be poster, discouraging them from posting in the first place. If it takes significant time investment to write “X should be done,” it is far from certain others will agree, and then additional significant time to figure out/write “and it should be done by Y,” then the taboo would require someone who wants to write the former to invest in writing the latter before knowing if the former will get any traction. Being okay with the would-be poster deferring certain subquestions (like “who”) means that effort can be saved if there’s not enough traction on the basic merits.
As I’ve already mentioned in other comments, I have myself already decided to begin pursuing a greater degree of inquiry, with haste. I’ve publicly given notice that pushback offered solely to reinforce or enforce such a taboo is likely only to motivate me to do so with more gusto.
knowledge, or resources relevant to part of a complex question
I have some knowledge and access to resources that would be relevant to solving at least a minor but still significant part of that complex question. I refer to the details in question in my comment that I linked to above.
This isn’t a do-ocracy project. Doing it properly is not going to be cheap (e.g., hiring an investigative firm), and so ability to get funded for this is a prerequisite. Expecting a Forum commenter to know who could plausibly get funding is a bit much. To the extent that that is a reasonable expectation, we would also expect the reader to know that—so it is a minor defect. To the extent that who could get funded is a null set, then bemoaning a perceived lack of willingness to invest in a perceived important issue in ecosystem health is a valid post.
To the extent I can begin laying the groundwork for a more thorough investigation to follow, one beyond the capacity of myself and prospective collaborators, such an investigation will now at least start snowballing as a do-ocracy project. I know multiple people who could plausibly begin funding this, who in turn may know several other people who’d be willing to do so. Some of the funders in question may be willing to fund me specifically, or a team I could (co-)lead, to begin doing the investigation in at least a semi-formal manner.
That would be some quieter critics in the background of EA, or others who are no longer effective altruists but have long wanted an investigation like the one that has now begun to proceed. Why they might trust me in particular is due to my reputation in the EA community, built over years, as one effective altruist who is more irreverent towards the pecking orders or hierarchies, both formal and informal, of any organized network or section of the EA movement. At any rate, at least to some extent, a lack of willingness from within EA to fund the first steps of an inquiry is no longer a relevant concern. I don’t recall if we’ve interacted much before, though as you may soon learn, I am someone in the orbit of effective altruism who sometimes has an uncanny knack for meeting unusual or unreasonable expectations.
Many good investigations do not have a specific list of people/entities who are the target of investigatory concern at the outset. They have a list of questions, and a good sense of the starting points for inquiry (and figuring out where other useful information lies).
Having begun thinking several months ago about what I could contribute to such a nascent investigation, I already have in mind a list of several people, as well as some questions, starting points for inquiry, and an approach for how to further identify potentially useful information. I intend to begin drafting a document to organize the process I have in mind, and I may be willing to privately share it in confidence with some individuals. You would be included, if you’re interested.
Overall I feel relatively supportive of more investigation and (especially) postmortem work. I also don’t fully understand why more wasn’t shared from the EV investigation[1].
However, I think it’s all a bit more fraught and less obvious than you imply. The main reasons are:
Professional external investigations are expensive
Especially if they’re meaningfully fact-finding and not just interviewing a few people, I think this could easily run into hundreds of thousands of dollars
Who is to pay for this? If a charity is doing it, I think it’s important that their donors are on board with that use of funds
I kind of think someone should fundraise for this specifically; I’m genuinely unsure about donor appetite to support it
I’m somewhat worried about the “re-victimizing” effect you allude to of just sharing everything transparently
Worry that it would cause in-my-view-unjust headaches for people is perhaps the main inhibitory force on my just publicly sharing the pieces of what I know (there’s also sometimes feeling like something isn’t mine to share)
If there were an investigation which was going to make all its factual findings public, I’d expect this to be an inhibitory force on people choosing to share information with them
The possible mistakes we’re talking about are all nuanced
It’s going to be a judgement call what was or wasn’t a mistake
(This is compatible with mistakes being large)
So if we’re hoping for an investigation which doesn’t make all its factual findings public, then we’re trusting in the judgement of the investigators to make a fair assessment
This makes me not want independent lawyers (who may most naturally be drawn to assess things from a perspective of “was this reasonably minimizing of legal exposure”)
But then who?
If this was just a question about conduct at one org, the natural answer might be “some sensible but uninvolved EA”, but if the whole of EA might somehow be called into question, what’s even appropriate?
At the end of this I would be most interested in multiple people who seemed very sensible giving their own post-mortems. I think that this would ideally include a mix of folks in EA and outsiders. I think some fact-finding should inform these people’s takes, without all of the facts themselves necessarily being made public (in order to facilitate the facts actually being shared, as well as to mitigate possible re-victimizing). I’m not certain how much it’s good for this to be via some centralized fact-finding exercise which is then privately shared, vs giving them the opportunity to interview people directly (as you get some more granular data that way). Perhaps ideally a mix. (But that’s making it more time-expensive as an exercise.)
I think there are people close enough to what happened that they can meaningfully give post-mortems without a fact-finding investigation. And I am interested in their views and supportive of them sharing those. But they’re also the people whose judgement is most likely to be distorted by being close to things. So even among EAs I’d prefer to have very sensible people who were further from what happened.
(That’s all where-I-stand-right-now. I can certainly imagine being moved on this.)
I guess that there would have been downsides for EV in doing so, but think these might well have been outweighed by the benefits to the community. However, I want to stress that I think the boards are sensible people making sometimes-difficult trade-offs; I don’t know for sure what I’d have thought with full context; I have some deference to them.
We’re happy to sink hundreds of hours into fun “criticism of EA” contests, but when the biggest disaster in EA’s history manifests, we aren’t willing to pay even one investigator to review what happened so we can get the facts straight, begin to rebuild trust, and see if there’s anything we should change in response?
I disagree with this framing.
Something that I believe I got wrong pre-FTX was base rates/priors: I had assumed that if a company was making billions of dollars, had received investment from top-tier firms, complied with a bunch of regulations, etc. then the chance of serious misconduct was fairly low.
It’s hard to measure this, but at least anecdotally some other people (including in “EA leadership” positions) tell me that they were updated by this work and think that they similarly had incorrect priors.
I think what you are calling an “investigation” is fine/good, but it is not the only way to “get the facts straight” or “see if there’s anything we should change in response”.
Fair! I definitely don’t want to imply that there’s been zero reflection or inquiry in the wake of FTX. I just think “what actually happened within EA networks, and could we have done better with different processes or norms?” is a really large and central piece of the puzzle.
I’d highlight that I found taking quite a structured approach helpful: breaking things down chronologically, and trying to answer specific questions like what’s the mechanism, how much did this contribute, and what’s a concrete recommendation?
“I’ll suggest a framework for how that broader review might be conducted: for each topic the review could:
Establish the details of EA involvement,
Indicate a mechanism for how this could have indirectly contributed to the eventual financial crime,
Provide some assessment of to what extent that mechanism may have indirectly contributed, and
Provide a concrete recommendation for what the EA community could do differently to prevent any recurrence.
I’ll also provide some preliminary thoughts below, as an indication of what could be done in the full review. One way to approach a review is chronological, covering eight touchstone or ‘gate’ moments:
Bankman-Fried starts earning to give
Alameda founding
FTX founding
Early political donations through Bankman-Fried’s family
FTX Foundation and Future Fund founding
Bankman-Fried becomes a public face of EA
Whistleblowing
This may be too exhaustive or “self-flagellating” for some, but I think it can identify areas to improve and fix. As will become clear, I think that step 5, the founding of the FTX Foundation and Future Fund, is where the biggest questions are raised and where I make the most recommendations.”
To be fair, this could trigger lawsuits. I hope someone is reflecting on FTX, but I wouldn’t expect anyone to be keen on discussing their own involvement with FTX publicly and in great detail.
I think that’s right, although I would distinguish between corporate and personal exposure here to some extent:
I’m most hesitant to criticize people for not personally taking actions that could increase their personal legal exposure.
I’m most willing to criticize people and organizations for not taking actions that could increase organizational legal exposure. Non-profit organizations are supposed to exist in the public interest, while individuals do not carry any above-average obligations in that way. Organizations are not moral persons whose welfare is important to me. Moreover, organizations are better able to manage risk than individuals. For purposes of the norm that s/he who benefits from an action should also generally expect to bear the attendant costs, I am more willing to ascribe the benefits of action to an organization than to an individual doing their job.[1]
Organizational decisions to remain silent to avoid risk to individuals pose thornier questions for me. I’d have to think more about that intuition after my lunch break, but some of it relates to reasonable expectations of privacy. For example, disclosure of the contents of an organizational e-mail account (where the employee had notice that it belonged to the employer without a reasonable expectation of privacy) strikes me as less problematic than asking people to divulge their personal records, information about off-work activities, and the like.
Personal liability regimes are often pernicious to people doing their jobs in a socially desirable and optimal way. The reason is that the benefit of doing the job properly / taking risks is socialized, while the costs / risks are privatized. Thus, the actor fearful of personal liability will undervalue the social benefits of proper performance / risk acceptance.
Who would be able to sue? Would it really be possible for FTX customers/investors to sue someone for not making public “I heard Sam lies a lot, once misplaced money at Alameda early on and didn’t seem too concerned, and reneged on a verbal agreement to share ownership”? Just because someone worked at the Future Fund? Or even someone who worked at EV?
I’d note that Nick Beckstead was in active litigation with the Alameda bankruptcy estate until that was dismissed last month (Docket No. 93). I think it would be very reasonable for anyone who worked at FTXFF to be concerned about their personal legal exposure here. (I am not opining as to whether exposure exists, only that I would find it extremely hard to fault anyone who worked at FTXFF for believing that they were at risk. After all, Nick already got sued!)
It’s harder to assess exposure for other groups of people. To your question, there may be a difference between mere silence in the face of knowledge/suspicion and somewhat supportive statements/actions in the face of the same knowledge. As a reference point, there was that suit against Tom Brady et al. (haven’t seen a recent status update). Obviously, the promotional activity is more explicit there than anything I expect an EA-associated person did. However, the theory against Brady et al. may rely more on generic failure to investigate, while one could perhaps dig for a stronger case against certain EA-related persons on actual knowledge of suspicious facts. I can only encourage people with concerns to consult their own personal legal counsel.
But at the general community level, I would be hesitant to fault various other individuals for being concerned about potential personal legal exposure. Remember, the pain of legal involvement isn’t limited to actual liability. Merely getting sued is itself painful; discovery is even more painful. Public statements could give someone motivation to try and/or ammo to get past a motion to dismiss for failure to state a viable claim.
As an aside, this isn’t really action-relevant, but insofar as being involved with the legal system is a massive punishment even when the legal system itself is very likely going to eventually conclude you’ve done nothing legally wrong, that seems bad? Here it also seems to be having a knock-on effect of making it harder to find out what actually happened, rather than being painful but producing useful information.
The suit against Brady also sounds like a complete waste of society’s time and money to me.
the legal system itself is very likely going to eventually come to the conclusion you’ve done nothing legally wrong,
The legal system doesn’t know ex ante whether you’ve done anything wrong, though. It’s really hard to set up a system that balances out all the different ways a legal system can be imbalanced. If you don’t give plaintiffs enough leeway to discover evidence for their claims, then tortfeasors will be insufficiently deterred from committing torts. If you go too far (the current U.S. system), you incentivize lawfare, harassment, and legalized extortion of some defendants. Imposing litigation costs / attorney fees on the losers often harms the little guy due to lower ability to shoulder risk & the marginal utility of money. Having parties bear their own costs / fees (generally, the U.S. system) encourages tactics that run up the bill for the other guy. And defendants are more vulnerable to that than plaintiffs as a general rule.
Here it also seems to be having a knock-on effect of making it harder to find out what actually happened, rather than being painful but producing useful information.
Maybe. Maybe people would talk but for litigation exposure. Or maybe people are using litigation exposure as a convenient excuse to cover the fact that they don’t want to (and wouldn’t) talk anyway. I will generally take individuals at face value given the difficulty of discerning between the two, though.
Would it be possible to set up a fund that pays people for the damages they incurred from a lawsuit in which they are ultimately found innocent? That way the EA community could make it less risky for those who haven’t spoken up, and also signal how valuable their information is to them.
Yes, although it is likely cheaper (in expected costs) and otherwise superior to make an ~unconditional offer to cover at least the legal fees for would-be speakers. The reason is that an externally legible, credible guarantee of legal-expense coverage ordinarily acts as a strong deterrent to bringing a weak lawsuit in the first place. As implied by my prior comment, one of the main tools in the plaintiff’s arsenal is to bully a defendant in a weak case to settle by threatening them with liability for massive legal bills. If you take that tactic away by making the defendant ~insensitive to the size of their legal bills, you should stop a lot of suits from ever being brought in the first place. Rather, one would expect would-be plaintiffs to sue only if the expected value of their suit (e.g., the odds of winning and collecting on a judgment multiplied by judgment size) exceeds the expected costs of litigating to trial (or to a point at which the defendant decides to settle without factoring in legal bills). If you think the odds of plaintiff success at trial are low and/or that the would-be individual defendant doesn’t have a ton of assets to collect from, then the most likely number of lawsuits is zero.[1]
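To make that expected-value comparison concrete, here is a minimal back-of-envelope sketch. All the numbers (win probability, judgment size, settlement odds, litigation costs) are hypothetical and chosen purely for illustration; this is a toy model of the deterrence logic described above, not legal analysis.

```python
# Toy model of a would-be plaintiff's decision to file a weak suit.
# Every figure below is hypothetical, for illustration only.

def expected_recovery(p_settle: float, settlement: float,
                      p_win_and_collect: float, judgment: float) -> float:
    """Plaintiff's expected recovery: a nuisance settlement if the
    defendant folds to avoid legal bills, otherwise trial value."""
    return p_settle * settlement + (1 - p_settle) * p_win_and_collect * judgment

PLAINTIFF_COST = 60_000   # plaintiff's own expected cost of litigating
P_WIN = 0.05              # a weak case on the merits
JUDGMENT = 1_000_000      # best-case collectible judgment

# Without fee coverage: the defendant fears six-figure legal bills,
# so a nuisance settlement is plausible and the weak suit is worth filing.
ev_no_coverage = expected_recovery(0.6, 100_000, P_WIN, JUDGMENT)    # 80,000
print(ev_no_coverage > PLAINTIFF_COST)    # True -> suit gets filed

# With a credible guarantee covering the defendant's legal fees: the
# defendant is ~insensitive to legal bills and won't settle a weak case,
# leaving the plaintiff with trial value alone.
ev_with_coverage = expected_recovery(0.0, 0, P_WIN, JUDGMENT)        # 50,000
print(ev_with_coverage > PLAINTIFF_COST)  # False -> suit never brought
```

Under these made-up numbers, the same weak case flips from worth filing to not worth filing once the fee guarantee removes the settlement-pressure channel; that is the deterrence effect described above.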
That does tip the balance of abstract fairness toward defendants and away from plaintiffs. But that can be appropriate in some cases. As noted in an earlier comment of mine, personal-liability regimes underproduce public goods because the public goods are enjoyed by the public while the risk is borne by the individual. Litigation immunities (especially “qualified immunity” in the US) can be a controversial topic, but they reflect that kind of rationale. In some cases, society would rather limit or foreclose someone’s ability to collect damages for torts they suffered than squelch the willingness to provide public goods.
One might not want to extend this offer to those for whom you have a higher degree of suspicion that they did something they really should be sued for, or to those who you think face a high probability of being sued even without speaking up.
This is why you wouldn’t want to bind yourself to indemnify defendants who lost for their judgments. Doing so would create a much larger target on their backs, as the upside from litigation would no longer be limited to what the plaintiff could collect from the defendant. In the worst-case scenario in which a defendant loses unjustly, there are ways for third parties to protect the defendant without further enriching the plaintiff (e.g., making gifts after bankruptcy discharge, well-designed trusts).
How big is the legal risk for a high profile EA person who, say:
knew SBF was an asshole, incautious, and lived in a luxury villa, but had no knowledge of any specific fraud
publicly promoted him as a moral and frugal person
?
Is this automatically tort-worthy, but hard to prove? Laughed out of court no matter what? Does speaking about it publicly extend the court case, so it’s more expensive even if the promoter will ultimately win?
If I am betting $5 of play money on Manifold (meaning off-the-cuff gut check with no research) I would generally bet low as long as the person did not ~specifically promote FTX. If there was specific promotion of FTX, you could see claims like these which would be beyond my willingness to speculate $5 of play money at this time.
Here are some off-the-cuff questions I might want to ask (again, no research) if I were thinking about a specific case:
1. Could anyone potentially show that they actually and reasonably relied on the statements that were made to transact business with FTX?
2. How relevant were the statements to a reasonable person who might be considering transacting business with FTX? For example, one might think “Joe told me SBF was frugal despite knowing that was a quarter-truth at best, I wouldn’t have opened an FTX account had he not said that, and it was reasonable for me to rely on SBF’s frugality to decide whether to open an account” sounds like a stretch. On the other hand, reliance on “Jane had very good reason to believe SBF had done shady and illegal stuff, yet forcefully presented him as a trustworthy paragon of moral virtue on her podcast” starts feeling a little more realistic.
3. How much of the speaker’s content (not just the allegedly false/misleading statements about SBF) was about FTX? If it talked a lot about the advantages of doing business with FTX, etc., then the nexus between the speech and reliance seems stronger. If the context is SBF as a role model for EA EtGers, that would seem a real stretch.
4. Was there a direct or indirect financial benefit to the speaker or a related entity? If SBF gave the speaker (or more likely, their organization) tons of money, this starts looking more like a ~paid endorsement. And we are generally more willing to put duties on ~paid endorsers than on (say) your and my comments on this Forum.
Also questions 3 and 4 get into potential causes of action for assisting with the sales of unregistered securities (cf. page 36 here). It’s unclear to me how an EA leader speaking out would increase their exposure to such a lawsuit.
There’s also the more realist answer to your question, which goes like this: the greater your income and assets, the greater your risk. My parents (on Social Security which can’t be garnished, only significant asset is the marital home which is difficult for creditors to access) probably wouldn’t need to worry. Unless you’re doing it for ideological reasons, why sue if you can’t collect more than what litigation costs?
(understanding you are a guy betting $5 on Manifold)
re: #3. Does this get blurred if the company made an explicit marketing push about what a great guy their CEO was? I imagine that still wouldn’t affect statements on him as a role model[1], but might matter if they said many positive statements about him on a platform aimed at the general public.
Not a crypto-focused platform (e.g., Joe’s Crypto Podcast?) No particular reason to know or believe that the company (had / was going to) use something Person said as part of their marketing campaign? If negative to both, it doesn’t affect my $5 Manifold bet.
I’m a pretty big fan of Nate’s public write-up on his relationship to Sam and FTX. Though, sure, this is going to be scarier for people who were way more involved and who did stuff that twitter mobs can more easily get mad about.
This is part of why the main thing I’m asking for is a professional investigation, not a tell-all blog post by every person involved in this mess (though the latter are great too). An investigation can discover useful facts and share them privately, and its public write-up can accurately convey the broad strokes of what happened, and a large number of the details, while taking basic steps to protect the innocent.
Here’s a post with me asking the question flat out: Why hasn’t EA done an SBF investigation and postmortem?
I vouch for the person quoted in my Apr. 4 update as generally honest and well-intentioned. I update from their account that community leaders are probably less resistant to doing some kind of fact-finding inquiry than I thought. I’m hoping that this take is correct, since it suggests to me that it might not be too hard to get an SBF postmortem to happen now that the trial and the EV legal investigation are both over (and now that we’re all talking about the subject in the first place).
If the take above isn’t correct, then hopefully my sharing it will cause others to chime in with further objections, and I can zigzag my way to understanding what actually happened!
I shared that summary with Oliver Habryka, and he said:
I’ll also share Ozzie Gooen’s Twitter take from a few days ago:
And, some corrections to my earlier posts about this:
I said that “there was a narrow investigation into legal risk to Effective Ventures last year”, which I think may have overstated the narrowness of the investigation a bit. My understanding is that the investigation’s main goal was to reduce EV’s legal exposure, but to that end the investigation covered a somewhat wider range of topics (possibly including things like COI policies), including things that might touch on broader EA mistakes and possible improvements. But it’s hard to be sure about any of this because details of the investigation’s scope and outcomes weren’t shared, and it doesn’t sound like they will be.
I said that Julia Wise had “been calling for the existence of such an investigation”; Julia clarifies on social media, “I would say I listed it as a possible project rather than calling for it exactly.”
Specifically, Julia Wise, Ozzie Gooen, and Sam Donald co-wrote a November 2023 blog post that listed “comprehensive investigation into FTX<>EA connections / problems” as one of four “projects and programs we’d like to see”, saying “these projects are promising, but they’re sizable or ongoing projects that we don’t have the capacity to carry out”. They also included this idea in a list of Further Possible Projects on EA Reform.
(I’m going to wrap up a few disparate threads here; this will probably be my last comment on this post, ~modulo a reply for clarification’s sake. Happy to discuss further with you, Rob, or anyone via DMs/Forum Dialogue/whatever.)
(to Rob & Oli—there is a lot of inferential distance between us and that’s ok, the world is wide enough to handle that! I don’t mean to come off as rude/hostile and apologies if I did get the tone wrong)
Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet… I can’t help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don’t like the EA Forum? Because they’re scared of retaliation? It feels like this would be useful and important information for the community to know.
I’m also not sure what to make of Habryka’s response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but it does seem to me to come off as a “social experiment in radical honesty and perfect transparency”, which is a vibe I often get from the Lightcone-adjacent world. And like, with all due respect, I’m not really interested in that whole scene. I’m more interested in questions like:
1. Were any senior EAs directly involved in the criminal actions at FTX/Alameda?
2. What warnings were given about SBF to senior EAs before the FTX blowup, particularly around the 2018 Alameda blowup, as recounted here?
a. If these warnings were ignored, what prevented people from deducing that SBF was a bad actor?[1]
b. Critically, if these warnings were accepted as true, who decided to keep this a secret and to suppress it from the community at large, and not act on it?
3. Why did SBF end up with such a dangerous set of beliefs about the world? (I think they’re best described as ‘risky beneficentrism’ - see my comment here and Ryan’s original post here)
4. Why have the results of these investigations, or some legally-cleared version, not been shared with the community at large?
5. Do senior EAs have any plan to respond to the hit to EA morale as a result of FTX and the aftermath, along with the intensely negative social reaction to EA, apart from ‘quietly hope it goes away’?
Writing it down, 2.b. strikes me as what I mean by ‘naive consequentialism’ if it happened. People had information that SBF was a bad character who had done harm, but calculated (or assumed) that he’d do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn’t really seem pertinent to me here, as interesting as the philosophical discussion can be.
tl;dr: I think a discussion about what norms EA ‘should’ have, or that senior EAs should act by, especially in the post-FTX and influencing-AI-policy world, is different from the ‘minimal viable information-sharing’ that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming that, and I applaud you/Oli pushing for it, but sometimes I wish you would both also be less vague. Some of us don’t have the EA history and context that you both do!
epilogue: I hope Rebecca is doing well. But this post & all the comments make me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post-FTX. Wounds might have faded, but they haven’t healed 😞
Not that people should have guessed the scale of his wrongdoing ex-ante, but was there enough to start to downplay and disassociate?
I’m not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
(This seems almost definitionally true: if something was clear evidence of fraud then it would just be described as “clear evidence of fraud”; describing something as a “rumor” seems to almost definitionally imply a substantial probability that the rumor is false or at least unclear or hard to update on.[1])
E.g. If I imagine a bank whose primary fraud detection mechanism was “hope the executives hear rumors of malfeasance,” I would not feel very satisfied with their risk management. If fraud did occur, I wouldn’t expect their primary process improvement to be “see if the executives could have updated from rumors better.” I am therefore somewhat confused by how much interest there seems to be in investigating how well the rumor mill worked for FTX.[2]
To be clear: I assume that the rumor mill could function more efficiently, and that there’s probably someone who heard “SBF is often overconfident” or whatever and could have updated from that information more accurately than they did. (If you’re interested in my experience, you can read my comments here.) I’m just very skeptical that a new and improved rumor mill is substantial protection against fraud, and don’t understand what an investigation could show me that would change my mind.[3] Moreover, even if I somehow became convinced that rumors could have been effective in the specific case of FTX, I will still likely be skeptical of their efficacy in the future.
Relatedly, I’ve heard people suggest that 80k shouldn’t have put SBF on their website given some rumors that were floating around. My take is that the base rate of criminality among large donors is high, having a rumor mill does not do very much to lower that rate, and so I expect to believe that the risk will be relatively high for high net worth people 80k puts on the front page in the future, and I don’t need an investigation to tell me that.
To make some positive suggestions about things I could imagine learning from/finding useful:
I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.[4]
At least, it seems like this should be our first port of call. Maybe we can’t actually implement industry best practices around risk management, but it feels like we should at least try before giving up and doing the rumor mill thing.
My understanding is that a bunch of work has gone into making regulations so that publicly traded companies are less likely to commit fraud, and these regulations are somewhat effective, but they are so onerous that many companies are willing to stay private and forgo billions of dollars in investment just to not have to deal with them. I suspect that EA might find itself in a similarly unfortunate situation where reducing risks from “prominent individuals” requires the individuals in question to do something so onerous that no one is willing to become “prominent.” I would be excited about research into a) whether this is in fact the case, and b) what to do about it, if so.
Some people probably disagree with my claim that rumor mills are ineffective. If so, research into this would be useful. E.g. it’s been on my backlog for a while to write up a summary of Why They Do It, or a fraud management textbook.
Why They Do It is perhaps particularly useful, given that one of its key claims is that, unlike with blue-collar crime, character traits don’t correlate well with propensity to commit white-collar crimes, and I think this may be a crux between me and people who disagree with me.
All that being said, I think I’m weakly in favor of someone more famous than me[5] doing some sort of write up about what rumors they heard, largely because I don’t expect the above to convince many people, and I think such a write up will mostly result in people realizing that the rumors were not very motivating.
Thanks to Chana Messinger for this point
One possible reason for this is that people are aiming for goals other than detecting fraud, e.g. they are hoping that rumors could also be used to identify other types of misconduct. I have opinions about this, but this comment is already too long so I’m not going to address it here.
e.g. I appreciate Nate writing this, but if in the future I learned that a certain person has spoken to Nate, I’m not going to update my beliefs about the likelihood of them committing financial misconduct very much (and I believe that Nate would agree with this assessment)
Part of why I haven’t prioritized this is that there aren’t a lot of earning to give companies anymore, but I think it’s still potentially worth someone spending time on this
I have done my own version of this, but my sense is that people (very reasonably) would prefer to hear from someone like Will
I feel like “people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed” is being classed as “rumour” here, which, whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word “rumour” conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if anyone in leadership argued “oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside”. That’s a signal someone is a bad leader in my view, which is useful knowledge going forward. (I’m not saying it is instant proof they should never hold leadership positions ever again: I think quite a lot of people might have said something like that in similar circumstances. But it is a bad sign.)
I agree with this.
I don’t really agree with this. Everyone has some probability of turning out to be dodgy; it matters exactly how strong the available evidence was. “This EA leader writes people off immediately when they have even a tiny probability of being untrustworthy” would be a negative update about the person’s decision-making too!
I took that second quote to mean ‘even if Sam is dodgy it’s still good to publicly back him’
I meant something in between “is” and “has a non-zero chance of being”, like assigning significant probability to it (obviously I didn’t have an exact number in mind), and not just for base rate reasons about believing all rich people to be dodgy.
Huh, the same reason you cite for why you are not interested in doing an investigation is one of the key reasons why I want an investigation.
It seems to me that current EA leadership is basically planning to continue an “our primary defense against bad actors is the rumor mill” strategy. Having an analysis of how that strategy did not work, and in some sense can’t work for things like this, seems like it’s one of the things that would have the most potential to give rise to something better here.
Interesting! I’m glad I wrote this then.
Do you think “[doing an investigation is] one of the things that would have the most potential to give rise to something better here” because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives to not be adopted, even if found?
My current sense is that there is no motivation to find an alternative because people mistakenly think it works fine enough and so there is no need to try to find something better (and also in the absence of an investigation and clear arguments about why the rumor thing doesn’t work, people probably think they can’t really be blamed if the strategy fails again)
Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).
Is that correct? If so, I’m interested in understanding why – usually if you want to do a thing, the best approach is to just do that thing.
It seems to me that a case study of how exactly FTX occurred, and where things failed, would be among one of the best things to use to figure out what thing to do instead.
Currently the majority of people who have an interest in this are blocked by not really knowing what worked and didn’t work in the FTX case, and so probably will have trouble arguing compellingly for any alternative, and also lack some of the most crucial data. My guess is you might have the relevant information from informal conversations, but most don’t.
I do think also just directly looking for an alternative seems good. I am not saying that doing an FTX investigation is literally the very best thing to do in the world, it just seems better than what I see EA leadership spending their time on instead. If you had the choice between “figure out a mechanism detecting and propagating information about future adversarial behavior” and “do an FTX investigation”, I would feel pretty great about both, and honestly don’t really know which one I would prefer. As far as I can tell neither of these things is seeing much effort invested into it.
Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by “not really knowing what worked and didn’t work in the FTX case” – even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldn’t rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]
I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, and getting up to speed with it is time-consuming, and also ~all of the practices are based under assumptions like “the risk manager has some amount of formal authority” which aren’t true in EA.
(And to be clear: I think these are very big blockers! They just aren’t resolved by doing an investigation.)
Or maybe more specifically: I would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud but were just implemented poorly last time, I would love to hear that!
Again, under the assumption that your goal is fraud detection. Investigations may be more or less useful for other goals.
It seems like a goal of ~“fraud detection”, not further specified, may be near the nadir of utility for an investigation.
If you go significantly narrower, then how EA managed (or didn’t manage) SBF fraud seems rather important to figuring out how to deal with the risk of similar fraudulent schemes in the future.[1]
If you go significantly broader (cf. Oli’s reference to “detecting and propagating information about future adversarial behavior”), the blockers you identify seem significantly less relevant, which may increase the expected value of an investigation.
My tentative guess is that it would be best to analyze potential courses of action in terms of their effects on the “EA immune system” at multiple points of specificity, not just close relations of a specific known pathogen (e.g., SBF-like schemes), a class of pathogens (e.g., “fraud”), or pathogens writ large (e.g., “future adversarial behavior”).
Given past EA involvement with crypto, and the base rate of not-too-subtle fraud in crypto, the risk of similar fraudulent schemes seems more than theoretical to me.
I think that would be worth exploring. I suspect you are correct that full Sarbanes-Oxley treatment would be onerous.
On the other hand, I don’t see how a reasonably competent forensic accountant or auditor could have spent more than a few days at FTX (or at Madoff) without having a stroke. Seeing the commingled bank accounts would have set alarm bells ringing in my head, at least. (One of the core rules of legal ethics is that you do not commingle your money with that of your clients, because experience teaches that all sorts of horrible things can and often do happen.)
I certainly don’t mean to imply that fraud against sophisticated investors and lenders is okay, but there is something particularly bad about straight-up conversion of client funds like at FTX/Madoff. At least where hedge funds and big banks are concerned, they have the tools and access to protect themselves if they so wish. Moreover, the link between the fraud and the receipt of funds is particularly strong in those cases—Enron was awash in fraud, but it wouldn’t be fair to say that a charity that received a grant from Enron at certain points in time was approximately and unknowingly in possession of stolen funds.
Thankfully, procedures meant to ferret out sophisticated Enron-style fraud shouldn’t be necessary to rule out most straight-up conversion schemes. Because of the risk that someone will rat the fraudsters out, my understanding is that the conspiracy usually is kept pretty small in these sorts of frauds. That imposes a real limit on how well the scheme will withstand even moderate levels of probing with auditor-level access.
If you want a reference class of similar frauds, here is the prosecution’s list of cases (after the Booker decision in 2005) with losses > $100MM and fraud type of Ponzi scheme, misappropriation, or embezzlement:
For example, one might be really skeptical if auditing red flags associated with prior frauds are present. Madoff famously had his audits done by a two-person firm that itself reported it did not conduct audits. FTX was better, but apparently still used “questionable” third-tier firms that “do audit a few public companies but none of the size or complexity of FTX.” Neither “the Armanino nor the Prager Metis audit reports for 2021 provides an opinion on the FTX US or FTX Trading internal controls over accounting and financial reporting”—and the audit reports tell the reader as much (same source). The article, written by an accounting lecturer at Wharton, goes on to describe other weirdness in the audit reports. Of course, that’s not foolproof—Enron had one of the then-Big Five accounting firms, for instance.
Catching all fraud is not realistic … for anyone, much less a charitable social movement. But some basic checks, to make fairly sure that the major or whole basis for a company’s or an individual’s wealth is not a fraudulent house of cards, seem potentially attainable at a reasonable burden level.
I guess the question I have is, if the fraud wasn’t noticed by SBF’s investors, who had much better access to information and incentives to find fraud, why would anyone expect the recipients of his charitable donations to notice it? If it was a failure of the EA movement not to know that FTX was fraudulent, isn’t it many times more of a failure that the fraud was unnoticed by the major sophisticated investment firms that were large FTX shareholders?
I think investing in FTX was genuinely a good idea if you were a profit maximizer, even if you strongly suspected the fraud. As Jason says, as an investor, losing money due to fraud isn’t any worse than losing money because a company fails to otherwise be profitable, so even assigning 20–30% probability to fraud for a high-risk investment like FTX, where you are expecting >2x returns within a few years, will not make a huge difference to your bottom line.
In many ways you should expect that being the kind of person who is willing to commit fraud is positively associated with returns, because doing illegal and fraudulent things means that the people who run the organization take on massive personal risk whose downside you don’t share, while you are still exposed to the upside. It’s not worth it to literally invest in fraud, but it is worth it to invest in the kind of company where the CEO is willing to go to prison, since you don’t really have any risk of going to prison yourself, but you get the upside of the legal risk they take on (think of Uber blatantly violating laws until it established a new market, which probably exposed leadership to substantial legal risk, while investors just got to reap the profits).
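To make the arithmetic behind this concrete, here is a minimal sketch in Python with purely hypothetical numbers (the success probability and payoff multiple are illustrative assumptions, not actual FTX figures). The point it illustrates: treating fraud as a total loss discounts the expected return multiple roughly linearly, so a 20–30% fraud probability trims a high-upside bet without obviously flipping its sign.

```python
# Hypothetical expected-value calculation for a high-risk, high-upside
# investment, treating fraud as a total loss. All numbers are made up
# for illustration; none come from actual FTX terms.

P_SUCCESS_IF_LEGIT = 0.5   # assumed chance a legitimate bet of this kind pays off
MULTIPLE_IF_SUCCESS = 2.5  # assumed payoff multiple if it succeeds

def expected_multiple(p_fraud: float) -> float:
    """Expected return multiple per dollar invested."""
    p_legit = 1.0 - p_fraud
    # Ordinary failure and fraud both return 0 in this simple model.
    return p_legit * P_SUCCESS_IF_LEGIT * MULTIPLE_IF_SUCCESS

for p_fraud in (0.0, 0.2, 0.3):
    print(f"P(fraud) = {p_fraud:.0%} -> expected multiple = {expected_multiple(p_fraud):.2f}x")

# Prints 1.25x, 1.00x, and 0.88x respectively: fraud probability
# discounts the expected multiple linearly rather than vetoing the bet.
```

For an investor, in other words, fraud risk is just one more haircut on expected returns; for a charity whose legitimacy depends on the source of its funds, the same 20–30% probability carries a very different kind of cost.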
I wasn’t suggesting we should expect this fraud to have been found in this case with the access that was available to EA sources. (Perhaps the FTXFF folks might have caught the scent if they were forensic accountants—but they weren’t. And I’m not at all confident on that in any event.) I’m suggesting that, in response to this scandal, EA organizations could insist on certain third-party assurances in the future before taking significant amounts of money from certain sources.
Why the big money was willing to fork over nine figures each to FTX without those assurances is unclear to me. But one observation: as far as a hedge fund or lender is concerned, a loss due to fraud is no worse than a loss due to the invested-in firm being outcompeted, making bad business decisions, experiencing a general crypto collapse, getting shut down for regulatory issues, or any number of scenarios that were probably more likely ex ante than a massive conversion scheme. In fact, such a scheme might even be less bad, to the extent that the firm thought it might recover more money from a fraud loss than from some ordinary business-failure modes. Given my understanding that these deals often move very quickly, and the presence of higher-probability failure modes, it is understandable that investors and lenders wouldn’t have prioritized fraud detection.
In contrast, charitable grantees are much more focused in their concern about fraud; taking money from a solvent, non-fraudulent business that later collapses doesn’t raise remotely the same ethical, legal, operational, and reputational concerns. Their potential exposure in that failure mode is likely several times larger than that of the investors/lenders, after all non-financial exposures are considered. They are also not on a tight time schedule.
Re your footnote 4: CE/AIM are starting an earning-to-give incubation program, so that is likely to change pretty soon.
Oh good point! That does seem to increase the urgency of this. I’d be interested to hear if CE/AIM had any thoughts on the subject.
Will MacAskill waited until April to speak fully and openly, on the extra-cautious advice of legal counsel. If that period has ended to the point that Will could speak to the matter of the FTX collapse, and to what came before and after, whenever he wanted to, surely almost everyone else could do the same now. The barrier or objection of not talking on the strong advice of legal counsel seems like it’d be null for most people at this point.
Edit: In the two hours since I first made this comment, I’ve read most of the comments arguing both for and against someone beginning to pursue at least some parts of what could constitute the overall investigation that has been suggested. Finding the arguments for doing so far better than the arguments against, I have now decided to personally begin pursuing the project below. Anyone interested in helping or supporting me in that vein, please reply to this comment or contact me privately. Any number of messages along the lines of “I think this is a bad idea, I disagree with what you intend to do, I think this will be net negative, please don’t do this”, etc., absent other arguments, are very unlikely to deter me. On the contrary, if anything, such substanceless objections may motivate me to pursue this end with more vigour.
I’m not extremely confident I could complete an investigation of the whole of the EA community’s role in this regard at the highest level all by myself, though I am now offering to investigate or research parts of this myself. Here’s some of what I could bring to the table.
I’d be willing to do some relatively thorough investigation from a starting point of being relatively high-context. For those who wouldn’t think I’m someone who knows a lot of context here, a shortform post I made a while ago could serve as proof of concept that I have more context than you might expect. I could offer more information, or answer more questions others have, in an attempt to genuinely demonstrate how much context I have.
I have far fewer time constraints than perhaps most individuals in the EA community who might be willing or able to contribute to some aspect of such an investigation. Already, on my own time, I occasionally investigate issues in and around EA by myself, and I intend to do so more in the future. I’d be willing to research more specific issues on my own time if others were to provide some direction. Some of what I might pursue further may be related to FTX anyway, without urging from others.
I’d be willing to volunteer a significant amount of time doing so, as I’m not currently working full-time and may not be working full-time in the foreseeable future. If the endeavour required a certain amount of work or progress achieved within a certain time frame, I may need to be hired in some capacity to complete some of the research or investigating. I’d be willing to accept such an opportunity as well.
Having virtually no conflicts of interest, there’s almost nothing anyone powerful in or around EA could hold over me to attempt to stop me from trying to investigate.
I’m champing at the bit to make this happen probably about as much as anyone.
I would personally find the contents of any aspect of such an investigation to be extremely interesting and motivating.
I wouldn’t fear any retaliation whatsoever. Some attempts or threats to retaliate against me could indeed be advantageous for me, as I am confident they would fail to achieve their desired goals, and would thus serve as evidence to others that any further such attempts would be futile wastes of effort.
I am personally in semi-regular contact or have decent rapport with some whistleblowers or individuals who retain private information about events related to the whole saga of FTX dating back to 2018. They, or their other peers who’ve also exited the EA community in the last several years, may not be willing to talk freely with most individuals in EA who might participate in such an investigation. I am very confident at least some of them would be willing to talk to me.
I’m probably less nervous about speaking up or out about anything EA-related, i.e., more willing to be radically transparent and honest, than most people who have continuously participated in the EA community for over a decade. I suspect that includes even you and Oliver Habryka, who have already been noted in other comments here as among the least nervous in that cohort. Notably, that cohort may at this point be a set of no more than a few hundred people.
Producing common-knowledge documents to help as large a subset of the EA community as possible, if not the whole community, learn what happened and what could be done differently in the future is the goal of any such investigation that I’d be most motivated to accomplish. I’d be much more willing to share such a document widely than most other people who might be willing or able to produce one.
I haven’t heard any arguments against doing an investigation yet, and I could imagine folks might be nervous about speaking up here. So I’ll try to break the ice by writing an imaginary dialogue between myself and someone who disagrees with me.
Obviously this argument may not be compelling compared to what an actual proponent would say, and I’d guess I’m missing at least one key consideration here, so treat this as a mere conversation-starter.
Hypothetical EA: Why isn’t EV’s 2023 investigation enough? You want us to investigate; well, we investigated.
Me: That investigation was only investigating legal risk to EV. Everything I’ve read (and everything I’ve heard privately) suggests that it wasn’t at all trying to answer the question of whether the EA community made any moral or prudential errors in how we handled SBF over the years. Nor was it trying to produce common-knowledge documents (either private or public) to help any subset of EA understand what happened. Nor was it trying to come up with any proposal for what we should do differently (if anything) in the future.
I take it as fairly obvious that those are all useful activities to carry out after a crisis, especially when there was sharp disagreement, within EA leadership, long before the FTX implosion, about how we should handle SBF.
Hypothetical EA: Look, I know there’s been no capital-I “Investigation”, but plenty of established EAs have poked around at dinner parties and learned a lot of the messy complicated details of what happened. My own informal poking around has convinced me that no EAs outside FTX leadership did anything super evil or Machiavellian. The worst you can say is that they muddled along and had miscommunications and brain farts like any big disorganized group of humans, and were a bit naively over-trusting.
Me: Maybe! But scattered dinner conversation with random friends and colleagues, with minimal following up or cross-checking of facts, isn’t the best medium for getting an unbiased picture of what happened. People skew the truth, withhold info, pass the blame ball around. And you like your friends, so you’re eager to latch on to whatever story shows they did an OK job.
Perhaps your story is true, but we shouldn’t be scared of checking, applying the same level of rigor we readily apply to everything else we’re doing.
The utility of this doesn’t require that any EAs be Evil. A postmortem is plenty useful in a world where we were “too trusting” or were otherwise subject to biases in how we thought, or how we shared information and made group decisions — so we can learn from our mistakes and do better next time.
And if we’ve historically been “too trusting”, it seems doubly foolish to err on the side of trusting every individual, institution, and process involved in the EA-SBF debacle, and write them a preemptive waiver for all the errors we’re studiously avoiding checking whether they’ve made.
Hypothetical EA: Look, there’s just no reason to use SBF in particular for your social experiment in radical honesty and perfect transparency. It was to some extent a matter of luck that SBF succeeded as well as he did, and that he therefore had an opportunity to cause so much harm. If there were systemic biases in EA that caused us to err here, then those same biases should show up in tons of other cases too.
The only reason to single out the SBF case in particular and give it 1000x more attention than everything else is that it’s the most newsworthy EA error.
But the main effect of this is to inflate and distort minor missteps random EA decision-makers made, bolstered by the public’s hindsight bias and cancel culture and by journalists’ axe-grinding, so that the smallest misjudgments an EA makes look like horrific unforgivable sins.
SBF is no more useful for learning about EA’s causal dynamics than any other case (and in fact SBF is an unusually bad place to try to learn generalizable lessons, because the sky-high stakes will cause people to withhold key evidence and/or bend the truth toward social desirability); it’s only useful as a bludgeon, if you came into all this already sure that EA is deeply corrupt (or that particular individuals or orgs are), and you want to summon a mob to punish those people and drive them from the community.
(Or, alternatively, if you’re sad about EA’s bad reputation and you want to find scapegoats: find the specific Bad EAs and drive them out, to prove to the world that you’re a Good EA and that EA-writ-large is now pure.)
Me: I find that argument somewhat compelling, but I still think an investigation would make sense.
First, extreme cases can often illustrate important causal dynamics that are harder to see in normal cases. E.g., if EA has a problem like “we fudge the truth too much”, this might be hard to detect in low-stakes cases where people have less incentive to lie. People’s behavior when push comes to shove is important, given the huge impact EA is trying to have on the world; and SBF is one huge instance where push came to shove and our character was really tested.
And, yes, some people may withhold information more because of the high stakes. But others will be much more willing to spend time on this question because they recognize it as important. If nothing else, SBF is a Schelling point for us all to direct our eyes at the same thing simultaneously, and see if we can converge on some new truths about the world.
Second, and moving away from abstractions to talk about the specifics of this case: My understanding is that a bunch of EAs tried to warn the community that SBF was extremely shady, and a bunch of other EAs apparently didn’t believe the warnings, or didn’t want those warnings widely shared even though they believed them.
“SBF is extremely shady” isn’t knowledge that FTX was committing financial fraud, and shouting “SBF is extremely shady” from the hills wouldn’t necessarily have prevented the fraud from happening. But there’s some probability it might have been the tipping point at various important junctures, as potential employees and funders and customers weighed their options. And even if it wouldn’t have helped at all in this case, it’s good to share that kind of information in case it helps the next time around.
I think it would be directly useful to know what happened to those warnings about SBF, so we can do better next time. And I think it would also help restore a lot of trust in EA (and a lot of internal ability for EAs to coordinate with each other) if people knew what happened — if we knew which thought leaders or orgs did better or worse, how processes failed, how people plan to do better next time.
I recognize that this will be harder in some ways with journalists and twitter users breathing down your necks. And I recognize that some people may suffer unfair scrutiny and criticism because they were in the wrong place at the wrong time. To some extent I just think we need to eat that cost; when you’re playing chess with the world and making massively impactful decisions, that comes with some extra responsibility to take a rare bit of unfair flack for the sake of being able to fact-find and orient at all about what happened. Hopefully the fact that some time has passed, and that we’re looking at a wide variety of people and orgs rather than a specific singled-out individual, will mitigate this problem.
If FTX were a total bolt out of the blue, that would be one thing. But apparently there were rather a lot of EAs who thought SBF was untrustworthy and evil, and had lots of evidence on hand to cite, at the exact same time 80K and Will and others were using their megaphones to broadcast that SBF is an awesome EA hero. I don’t know that 80K or Will in particular are the ones who fucked up here, but it seems like somebody fucked up in order for this perception gap to exist and go undiscussed.
I understand people having disagreements about someone’s character. Hindsight bias is a thing, and I’m sure people had reasons at the time to be skeptical of some of the bad rumors about SBF. But I tend to think those disagreements should be things that are argued about rather than kept secret. Especially if the secret conversations empirically have not resulted in the best outcomes.
Hypothetical EA: I dunno, this whole “we need a public airing out of our micro-sins in order to restore trust” thing sounds an awful lot like the exact “you’re looking for scapegoats” thing I was warning about.
You’re fixated on this idea that EAs did something Wrong and need to be chastised and corrected, like we’re perpetrators alongside SBF. On the contrary, I claim that the non-FTX EAs who interacted the most with Sam should mostly be thought of as additional victims of Sam: people who were manipulated and mistreated, who often saw their livelihoods threatened as a result and their life’s work badly damaged or destroyed.
The policies you’re calling for amount to singling out and re-victimizing many of Sam’s primary victims, in the name of pleasant-sounding abstractions like Accountability — abstractions that have little actual consequentialist value in this case, just a veneer of “that sounds nice on paper”.
Me: It’s unfortunately hard for me to assess the consequentialist value in this case, because no investigation has taken place. I’ve gestured at some questions I have above, but I’m missing most of the pieces about what actually happened, and some of the unknown unknowns here might turn out to swamp the importance of what I know about. It’s not clear to me that you know much more than me, either. Rather than pitting your speculation against mine, I’d rather do some actual inquiry.
Hypothetical EA: I think we already know enough, including from the legal investigation into Sam Bankman-Fried and who was involved in his conspiracy, to make a good guess that re-victimizing random EAs is not a useful way for this movement to spend its time and energy. The world has many huge problems that need fixing, and it’s not as though EA’s critics are going to suddenly conclude that EAs are Good After All if we spill all of our dirty laundry. What will actually happen is that they’ll cherry-pick and distort the worst-sounding tidbits, while ignoring all the parts you hoped would be “trust-restoring”.
Me: Some EA critics will do that, sure. But there are plenty of people, both within EA and outside of it, who legitimately just want to know what happened, and will be very reassured to have a clearer picture of the basic sequence of events, which orgs did a better or worse job, which processes failed or succeeded. They’ll also be reassured to know that we know what happened, vs. blinding ourselves to the facts and to any lessons they might contain.
Or maybe they’ll be horrified because the details are actually awful (ethically, not legally). Part of being honest is taking on the risk that this could happen too. That’s just not avoidable. If we’re not the sort of community that would share bad stuff if it were true, then people are forced to be that much more worried that we’re in fact hiding a bunch of bad stuff.
Hypothetical EA: I just don’t think there’s that much crucial information EA leaders are missing, from their informal poking around. You can doubt that, but I don’t think a formal investigation would help much, since people who don’t want to speak now will (if anything) probably be even more tight-lipped in the face of what looks like a witch-hunt.
You say that EAs have a responsibility to jump through a bunch of transparency hoops. But whether or not you agree with my “EAs are victims” frame: EAs don’t owe the community their lives. If you’re someone who made personal sacrifices to try to make the world a better place, that doesn’t somehow come with a gotcha clause where you now have incurred a huge additional responsibility that we’d never impose on ordinary private citizens, to dump your personal life into the public Internet.
Me: I don’t necessarily disagree with that, as stated. But I think particular EAs are signing up for some extra responsibility, e.g., when they become EA leaders and ask for a lot of trust on the part of their community.
I wouldn’t necessarily describe that responsibility as “huge”, because I don’t actually think a basic investigation into the SBF thing is that unusual or onerous.
I don’t see myself as proposing anything all that radical here. I’m even open to the idea that we might want to redact some names and events in the public recounting of what happened, to protect the innocent. I don’t see anything weird about that; what strikes me as puzzling is the complete absence of any basic fact-finding effort (beyond the narrow-scope EV legal inquiry).
And what strikes me as doubly puzzling is that there hasn’t even been a public write-up saying that CEA and others are not planning to look into this at all, nor any public argument for that policy — whence this dialogue. It’s as though EAs are just hoping we’ll quietly forget about this pretty major omission, so they don’t have to say anything potentially controversial. That I don’t really respect; if you think this investigation is a bad idea, do the EA thing and make your case!
Hypothetical EA: Well, hopefully my arguments have given you some clues about the (non-nefarious) reasons why EAs might want to quietly let this thing die, rather than giving a big public argument for letting it die. In addition to the obvious fact that folks are just very busy, and more time spent on this means less time spent on a hundred other things.
Me: And hopefully my arguments have helped remind some folks that things are sometimes worth doing even when they’re hard.
All the arguments in the world don’t erase the fact that at the end of the day, we have a choice between taking risks for the sake of righting our wrongs and helping people understand what happened, versus hiding from the light of day and quietly hoping that no one calls us out for retreating from our idealistic-sounding principles.
We have a choice between following the path of least resistance into ever-murkier, ever-more-confusing, ever-less-trusting waters; or taking a bold stand and doing whatever we can to give EAs and non-EAs alike real insight into what happened, and a real capacity to adjust course if and only if some course-changing is warranted.
There are certainly times when the boring, practical, un-virtuous-sounding option really is the right option. I don’t think this is one of those times; I think we need to be better than that this one time, or we risk losing by a thousand cuts some extremely precious things that used to be central to what made EA EA.
… And if you disagree with me about all that, well, tell me why I’m wrong.
I think I agree with Hypothetical EA that we basically know the broad picture.
Probably nobody was actually complicit or knew there was fraud; and
Various people made bad judgement calls and/or didn’t listen to useful rumours about Sam
I guess I’m just… satisfied with that? You say:
… why? None of this seems that important to me? Most of it seems like a matter for the person/org in question to reflect on and improve. Why is it important for “plenty of people” to learn this stuff, given we already know the broad picture above?
I would sum up my personal position as:
We got taken for a ride, so we should take the general lesson to be more cautious of charismatic people with low scruples, especially bearing large sums of money.
If you or your org were specifically taken for a ride you should reflect on why that happened to you and why you didn’t listen to the people who did spot what was going on.
EA is compelling insofar as it is about genuinely making the world a better place, ie we care about the actual consequences. Just because there are probably no specific people/processes to blame, doesn’t mean we should be satisfied with how things are.
There is now decent evidence that EA might cause considerable harm in the world, so we should be strongly motivated to figure out how to change that. Maybe EA’s failures are just the cost of ambition and agency, and come along with the good it does, but I think that’s both untrue and worryingly defeatist.
I care about the end result of all of this, and the fact that we’re okay with some serious Ls happening (and not being willing to fix the root cause of those errors) is concerning.
Random idea:
Maybe, after this question of an investigation has been discussed in more detail, we should organize a community-wide vote on whether there should be one?
It’s easy to vote for something you don’t have to pay for. If we do anything like this, an additional fundraiser to pay for it might be appropriate.
Knowing what people think is useful, especially if it’s a non-anonymous poll aimed at sparking conversations, questions, etc. (One thing that might help here is to include a field for people to leave a brief explanation of their vote, if the polling software allows for it.)
Anonymous polls are a bit trickier, since random people on the Internet can easily brigade such a poll. And I wouldn’t want to assume that something’s a good idea just because most EAs agree with it; I’d rather focus on the arguments for and against.
“Just focus on the arguments” isn’t a decision-making algorithm, but I think informal processes like “just talk about it and individually do what makes sense” perform better than rigid algorithms in cases like this.
If we want something more formal, I tend to prefer approaches like “delegate the question to someone trustworthy who can spend a bunch of time carefully weighing the arguments” or “subsidize a prediction market to resolve the question” over “just run an opinion poll and do whatever the majority of people-who-see-the-poll vote for, without checking how informed or wise the respondents are”.
The question of a community-wide vote, on any level, about whether there should be such an investigation might at this point be moot. I have personally offered to begin conducting significant parts of such an investigation myself. Since I made that initial comment, I’ve read several more comments providing arguments against the need for, or desirability of, such an investigation. Having found them unconvincing, I now intend to privately contact several individuals—both in and around the EA movement, and some who no longer participate in the EA community—to pursue that end. Something like a community-wide vote, or some proxy like even dozens of effective altruists trying to talk me out of it, would be unlikely to convince me not to proceed.
People, the downvote button is not a disagree button. That’s not really what it should be used for.
Thanks
Maybe quite a few people don’t like random ideas being shared on the Forum?
I disagree, and in this case I don’t think the forum team should have a say in the matter. Each user has their own interpretation of the upvote/downvote button and that’s ok. Personally I don’t use it as “I disagree” but rather as “this comment shouldn’t have been written”, but there’s certainly a correlation. For instance, I both disagree-voted and downvoted your comment (since I dislike the attempt to police this).
Update Apr. 15: I talked to a CEA employee and got some more context on why CEA hasn’t done an SBF investigation and postmortem. In addition to the ‘this might be really difficult and it might not be very useful’ concern, they mentioned that the Charity Commission investigation into EV UK is still ongoing a year and a half later. (Google suggests that statutory inquiries by the Charity Commission take an average of 1.2 years to complete, so the super long wait here is sadly normal.)
Although the Commission has said “there is no indication of wrongdoing by the trustees at this time”, and the risk of anything crazy happening is lower now than it was a year and a half ago, I gather that it’s still at least possible that the Commission could take some drastic action like “we think EV did bad stuff, so we’re going to take over the legal entity that includes the UK components of CEA, 80K, GWWC, GovAI, etc.”, which may make it harder for CEA to usefully hold the steering wheel on an SBF investigation at this stage.
Example scenario: CEA tries to write up some lessons learned from the SBF thing, with an EA audience in mind; EAs tend to have unusually high standards, and a CEA staffer writes a comment that assumes this context, without running the comment by lawyers because it seemed innocent enough; because of those high standards, the Charity Commission misreads the CEA employee as implying a way worse thing happened than is actually the case.
This particular scenario may not be a big risk, but the sum of the risk across all possible scenarios like that (including scenarios that might not currently be on their radar) seems non-negligible to the CEA person I spoke to, even though they don’t think there’s any info out there that should rationally cause the Charity Commission to do anything wild here. And trying to do serious public reflection or soul-searching while also carefully nitpicking every sentence for possible ways the Charity Commission could misinterpret it doesn’t seem like an optimal set-up for deep, authentic, and productive soul-searching.
The CEA employee said that they thought this is one reason (but not the only reason) EV is unlikely to run a postmortem of this kind.
My initial thoughts on all this: This is very useful info! I had no idea the Charity Commission investigation was still ongoing, and if there are significant worries about that, that does indeed help make CEA and EV’s actions over the last year feel a lot less weird-and-mysterious to me.
I’m not sure I agree with CEA or EV’s choices here, but I no longer feel like there’s a mystery to be explained here; this seems like a place where reasonable people can easily disagree about what the right strategy is. I don’t expect the Charity Commission to in fact take over those organizations, since as far as I know there’s no reason to do that, but I can see how this would make it harder for CEA to do a soul-searching postmortem.
I do suspect that EV and/or CEA may be underestimating the costs of silence here. I could imagine a frog-boiling problem arising here, where it made sense to delay a postmortem for a few months based on a relatively small risk of disaster (and a hope that the Charity Commission investigation in this case might turn out to be brief), but it may not make sense to continue to delay in this situation for years on end. Both options are risky; I suspect the risks of inaction and silence may be getting systematically under-weighted here. (But it’s hard to be confident when I don’t know the specifics of how these decisions are being made.)
I ran the above by Oliver Habryka, who said:
I have some information suggesting that maybe Oliver’s and/or the CEA employee’s account is wrong, or missing part of the story? But I’m confused about the details, so I’ll look into things more and post an update here if I learn more.
The pendency of the CC statutory inquiry would explain hesitancy on the part of EVF UK or its projects to conduct or cooperate with an “EA” inquiry. A third-party inquiry is unlikely to be protected by any sort of privilege, and the CC may have means to require or persuade EVF UK to turn over anything it produced in connection with a third-party “EA” inquiry. However, it doesn’t seem that this should be an impediment to proceeding with other parts of an “EA inquiry,” especially to the extent this would be done outside the UK.
However, in the abstract—if any charity’s rationale for not being at least moderately open and transparent with relevant constituencies and the public is “we are afraid the CC will shut us down,” that is a charity most people would run away from fast, and for good reason. If the choice is between having a less-than “soul-searching postmortem” or none at all, I’ll take the former. Also, I strongly suspect everything EVF has said about the whole FTX situation has been vetted by lawyers, so the idea that someone is going to write an “official” postmortem without legal vetting is doubtful. Finally, I worry the can is going to continue being kicked down the road until EVF is far into the process of being dismantled, at which time the rationale may evolve into “we’re disbanding anyway, what’s the point?”
I do think a subtext of the reported discussion above is that the CC is not considered to be a necessarily trustworthy or fair arbiter here. “If we do this investigation then the CC may see things and take them the wrong way” means you don’t trust the CC to take them the right way. Now, I have no idea whether that is justified in this case, but it’s pretty consistent with my impression of government bureaucracies in general.
So it perhaps comes down to whether you previously considered the charity or the CC more trustworthy. In this case I think I trust EVF more.
I trust EV more than the charity commission about many things, but whether EV behaved badly over SBF is definitely not one of them. One judgment here is incredibly liable to distortion through self-interest and ego preservation, and it’s not the charity commission’s. (That’s not a prediction that the charity commission will in fact harshly criticize EV. I wouldn’t be surprised either way on that.)
When I looked at past CC actions, I didn’t get the impression that they were in the habit of blowing things out of proportion. But of course I didn’t have the full facts of each investigation.
One reason I don’t put much stock in the possibility that the CC may not be a “necessarily trustworthy or fair arbiter” is that it has to act with reasoning transparency, because it is accountable to a public process. Its actions with substance (as opposed to issuing warnings) are reviewable in the UK courts, in proceedings where the charity—a party with the right knowledge and incentives—can call it out on dubious findings. The CC may not fear litigation in the same sense that a private entity might, but an agency’s budget/resources don’t generally go up because it is sued, and agencies tend not to create extra work for themselves for the thrill of it.
Moreover, the rationale of non-disclosure due to CC concerns operates at the margin. “There are particular things we shouldn’t disclose in public because the CC might badly misinterpret those statements” is one thing. “There is nothing else useful we can disclose, because all such statements pose an unacceptable risk of the CC badly misinterpreting any further detail” is another.
I have already personally decided to begin pursuing inquiries and research that would constitute at least some aspects of the sort of investigation in question. Much of what I generally have in mind, and in particular what I’d be most capable of doing myself, would be unrelated to EVF UK. If it’d make things easier, I’m amenable to avoiding probing in ways that intersect with EVF UK until the CC inquiry has ended. (This probably wouldn’t include EVF USA.) That EVF is in the process of disbanding, which would complicate any part of such an investigation, and the fact that another major EA organization is likely launching an earning-to-give incubator/training organization, are two reasons I will be expediting this project.
Not to state the obvious, but the ‘criticism of EA’ posts didn’t pose a real risk to the power structure. It is uhhhhh quite common for ‘criticism’ to be a lot more encouraged/tolerated when it isn’t threatening.
I mostly agree with this, and upvoted strongly, but I don’t think the scare quotes around “criticism” is warranted. Improving ideas and projects through constructive criticism is not the same thing as speaking truth to power, but it is still good and useful, it’s just a different good and useful thing.
I’m against doing further investigation. I expressed why I think we have already spent too much time on this here.
I also think your comments are falling into the trap of referring to “EA” as if it were an entity. Who specifically should do an investigation, and who specifically should they be investigating? (This less monolithic view of EA is also part of why I don’t feel as bothered by the whole thing: so maybe some people in “senior” positions made some bad judgement calls about Sam. They should maybe feel bad. I’m not sure we should feel much collective guilt about that.)
While recognizing the benefits of the anti-“EA should” taboo, I also think it has some substantial downsides, and that it should be invoked only after consideration of the specific circumstances at hand.
One downside is that the taboo can impose significant additional burdens on a would-be poster, discouraging them from posting in the first place. If it takes significant time investment to write “X should be done,” it is far from certain others will agree, and then additional significant time to figure out/write “and it should be done by Y,” then the taboo would require someone who wants to write the former to invest in writing the latter before knowing if the former will get any traction. Being okay with the would-be poster deferring certain subquestions (like “who”) means that effort can be saved if there’s not enough traction on the basic merits.
Another downside is that a would-be poster may have expertise, knowledge, or resources relevant to part of a complex question. If we taboo efforts by those who can only answer some of the issues effectively, we will lose the benefit of their insight.
I don’t think that is an appropriate burden to place on someone writing a post or comment calling for an investigation. I think it would block anyone without a fair deal of “insider-ish” knowledge from ever making the case for an investigation:
This isn’t a do-ocracy project. Doing it properly is not going to be cheap (e.g., hiring an investigative firm), and so ability to get funded for this is a prerequisite. Expecting a Forum commenter to know who could plausibly get funding is a bit much. To the extent that that is a reasonable expectation, we would also expect the reader to know that—so it is a minor defect. To the extent that who could get funded is a null set, then bemoaning a perceived lack of willingness to invest in a perceived important issue in ecosystem health is a valid post.
Even apart from this, whoever was running the investigation would need to secure the cooperation of organizations and individuals one way or another. That could flow through the investigation sponsor’s own standing in the community (e.g., ~everyone trusting them to give them a fair shake), and/or through funders or other powers putting their heft behind the investigation (e.g., documented refusal to cooperate having likely material adverse consequences).
Many good investigations do not have a specific list of people/entities who are the target of investigatory concern at the outset. They have a list of questions, and a good sense of the starting points for inquiry (and figuring out where other useful information lies). If I were trying to gain a better understanding of EA-aligned people/orgs’ interactions with SBF, I think some of the starting points are obvious.
Moreover, a higher level of specificity strikes me as potentially infohazardous for the Forum. Whatever might be said of the costs and benefits of circulating ~rumors on a publicly accessible Forum to guard the community against future misconduct and non-malicious problematic conduct, the cost/benefit assessment feels more doubtful when the focus is on certain forms of past problematic conduct. Even if Rob had solid hunches as to whose actions should be probed more closely, it’s not clear that it would be net positive for him to name names here. Given that, I am very hesitant to endorse any norm that puts a thumb on the scale by creating an expectation that a poster will publicly release information whose public disclosure may well have a net negative impact.
Thanks, I think this is all right. I think I didn’t write what I meant. I want more specificity, but I do agree with you that it’s wrong to expect full specificity (and that’s what I sounded like I was asking for).
What I want is something more like “CEA should investigate the staff of EVF for whether they knew about X and Y”, not “Alice should investigate Bob and Carol for whether they knew about X and Y”.
I do think that specificity raises questions, and that this can be a good thing. I agree that it’s not reasonable for someone to work out e.g. exactly where the funding comes from, but I do think it’s reasonable for them to think in enough detail about what they are proposing to realise that a) it will need funding, b) possibly quite a lot of funding, c) this trades off against other uses of the money, so d) what does that mean for whether this is a good idea. Whereas if “EA” is going to do it, then we don’t need to worry about any of those things. I’m sure someone can just do it, right?
I am at least one someone who not only can, but has already decided to, at least begin doing it. To that end, for myself or perhaps even others, there are already some individuals I have in mind to begin contacting who may be willing to provide at least a modicum of funding, or who would know others who might be willing to do so. In fact, I have already begun that process.
There wouldn’t be a tradeoff with other uses of at least some of that money, given that I’m confident at least some of those individuals would not otherwise donate that money to, e.g., an organization affiliated with, or a charity largely supported by, the EA community. (That is because some of the individual funders in question are not effective altruists.) While I agree it may not be a good idea for EA as a whole to go about this in some quasi-official way, I’ve concluded there aren’t any particularly strong arguments made yet against the sort of “someone” you had in mind doing so.
As I’ve already mentioned in other comments, I have myself already decided to begin pursuing a greater degree of inquiry, with haste. I’ve publicly noted that pushback from others offered solely to reinforce or enforce such a taboo is likely only to motivate me to do so with more gusto.
I have some knowledge and access to resources that would be relevant to solving at least a minor but still significant part of that complex question. I refer to the details in question in my comment that I linked to above.
To the extent I can begin laying the groundwork for a more thorough investigation to follow, covering what is beyond the capacity of myself and prospective collaborators, such an investigation will now at least start snowballing as a do-ocracy project. I know multiple people who could plausibly begin funding this, and who in turn may know several other people who’d be willing to do so. Some of the funders in question may be willing to fund me specifically, or a team I could (co-)lead, to begin doing the investigation in at least a semi-formal manner.
That would be some of the quieter critics in the background of EA, or others who are no longer effective altruists but have long wanted an investigation like the one that has now begun to proceed. Why they might trust me in particular is due to my reputation in the EA community, built over years, as one effective altruist who is more irreverent towards the pecking orders or hierarchies, both formal and informal, of any organized network or section of the EA movement. At any rate, at least to some extent, a lack of willingness from within EA to fund the first steps of an inquiry is no longer a relevant concern. I don’t recall if we’ve interacted much before, though as you may soon learn, I am someone in the orbit of effective altruism who sometimes has an uncanny knack for meeting unusual or unreasonable expectations.
Having begun thinking several months ago about what I could contribute to such a nascent investigation, I already have a list of several people in mind, as well as some questions, starting points for inquiry, and an approach for identifying further potentially useful information. I intend to begin drafting a document to organize the process I have in mind, and I may be willing to privately share it in confidence with some individuals. You would be included, if you were interested.
I get a ‘comment not found’ response to your link.
Overall I feel relatively supportive of more investigation and (especially) postmortem work. I also don’t fully understand why more wasn’t shared from the EV investigation[1].
However, I think it’s all a bit more fraught and less obvious than you imply. The main reasons are:
Professional external investigations are expensive
Especially if they’re meaningfully fact-finding and not just interviewing a few people, I think this could easily run into hundreds of thousands of dollars
Who is to pay for this? If a charity is doing it, I think it’s important that their donors are on board with that use of funds
I kind of think someone should fundraise for this specifically; I’m genuinely unsure about donor appetite to support it
I’m somewhat worried about the “re-victimizing” effect you allude to of just sharing everything transparently
Worry that it would cause in-my-view-unjust headaches for people is perhaps the main inhibitory force on my just publicly sharing the pieces of what I know (there’s also sometimes feeling like something isn’t mine to share)
If there were an investigation which was going to make all its factual findings public, I’d expect this to be an inhibitory force on people choosing to share information with them
The possible mistakes we’re talking about are all nuanced
It’s going to be a judgement call what was or wasn’t a mistake
(This is compatible with mistakes being large)
So if we’re hoping for an investigation which doesn’t make all its factual findings public, then we’re trusting in the judgement of the investigators to make a fair assessment
This makes me not want independent lawyers (who may most naturally be drawn to assess things from a perspective of “was this reasonably minimizing of legal exposure”)
But then who?
If this was just a question about conduct at one org, the natural answer might be “some sensible but uninvolved EA”, but if the whole of EA might somehow be called into question, what’s even appropriate?
At the end of this I would be most interested in multiple people who seemed very sensible giving their own post-mortems. I think that this would ideally include a mix of folks in EA and outsiders. I think some fact-finding should inform these people’s takes, without all of the facts themselves necessarily being made public (in order to facilitate the facts actually being shared, as well as to mitigate possible re-victimizing). I’m not certain how much it’s good for this to be via some centralized fact-finding exercise which is then privately shared, vs giving them the opportunity to interview people directly (as you get some more granular data that way). Perhaps ideally a mix. (But that’s making it more time-expensive as an exercise.)
I think there are people close enough to what happened that they can meaningfully give post-mortems without a fact-finding investigation. And I am interested in their views and supportive of them sharing those. But they’re also the people whose judgement is most likely to be distorted by being close to things. So even among EAs I’d prefer to have very sensible people who were further from what happened.
(That’s all where-I-stand-right-now. I can certainly imagine being moved on this.)
I guess that there would have been downsides for EV in doing so, but think these might well have been outweighed by the benefits to the community. However, I want to stress that I think the boards are sensible people making sometimes-difficult trade-offs; I don’t know for sure what I’d have thought with full context; I have some deference to them.
I disagree with this framing.
Something that I believe I got wrong pre-FTX was base rates/priors: I had assumed that if a company was making billions of dollars, had received investment from top-tier firms, complied with a bunch of regulations, etc. then the chance of serious misconduct was fairly low.
I have now spent a fair amount of time documenting that this is not true, in data sets of YCombinator companies and major philanthropists.
It’s hard to measure this, but at least anecdotally some other people (including in “EA leadership” positions) tell me that they were updated by this work and think that they similarly had incorrect priors.
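As a small illustration of how quickly data of this kind can overturn a “serious misconduct is negligible at this scale” prior, here is a minimal sketch in Python (using scipy) with made-up counts; the real figures live in the datasets mentioned above and are not reproduced here.

```python
# Hypothetical Bayesian base-rate update: suppose a survey of comparable
# companies found a few cases of serious misconduct. The counts below are
# invented for illustration, not taken from the actual datasets.

from scipy import stats

MISCONDUCT_CASES = 4   # hypothetical number of serious-misconduct cases found
SAMPLE_SIZE = 100      # hypothetical number of companies surveyed

# Uniform Beta(1, 1) prior updated on the observed counts.
posterior = stats.beta(1 + MISCONDUCT_CASES, 1 + SAMPLE_SIZE - MISCONDUCT_CASES)
low, high = posterior.interval(0.9)

print(f"Posterior mean misconduct rate: {posterior.mean():.1%}")
print(f"90% credible interval: {low:.1%} to {high:.1%}")

# Even a handful of confirmed cases in a modest sample pushes the
# posterior rate into the low single digits -- well above "negligible".
```

The exact numbers don’t matter; the point is that even a few confirmed cases in a sample of this size make a near-zero prior untenable.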
I think what you are calling an “investigation” is fine/good, but it is not the only way to “get the facts straight” or “see if there’s anything we should change in response”.
Fair! I definitely don’t want to imply that there’s been zero reflection or inquiry in the wake of FTX. I just think “what actually happened within EA networks, and could we have done better with different processes or norms?” is a really large and central piece of the puzzle.
I’ve made a first attempt at this here: To what extent & how did EA indirectly contribute to financial crime—and what can be done now? One attempt at a review
I’d highlight that I found taking quite a structured approach helpful: breaking things down chronologically, and trying to answer specific questions like what’s the mechanism, how much did this contribute, and what’s a concrete recommendation?
To be fair, this could trigger lawsuits. I hope someone is reflecting on FTX, but I wouldn’t expect anyone to be keen on discussing their own involvement with FTX publicly and in great detail.
I think that’s right, although I would distinguish between corporate and personal exposure here to some extent:
I’m most hesitant to criticize people for not personally taking actions that could increase their personal legal exposure.
I’m most willing to criticize people and organizations for not taking actions that could increase organizational legal exposure. Non-profit organizations are supposed to exist in the public interest, while individuals do not carry any above-average obligations in that way. Organizations are not moral persons whose welfare is important to me. Moreover, organizations are better able to manage risk than individuals. For purposes of the norm that s/he who benefits from an action should also generally expect to bear the attendant costs, I am more willing to ascribe the benefits of action to an organization than to an individual doing their job.[1]
Organizational decisions to remain silent to avoid risk to individuals pose thornier questions for me. I’d have to think more about that intuition after my lunch break, but some of it relates to reasonable expectations of privacy. For example, disclosure of the contents of an organizational e-mail account (where the employee had notice that it belonged to the employer, without a reasonable expectation of privacy) strikes me as less problematic than asking people to divulge their personal records, information about off-work activities, and the like.
Personal liability regimes are often pernicious to people doing their jobs in a socially desirable and optimal way. The reason is that the benefit of doing the job properly / taking risks is socialized, while the costs / risks are privatized. Thus, the actor fearful of personal liability will undervalue the social benefits of proper performance / risk acceptance.
Who would be able to sue? Would it really be possible for FTX customers/investors to sue someone for not making public “I heard Sam lies a lot, once misplaced money at Alameda early on and didn’t seem too concerned about it, and reneged on a verbal agreement to share ownership”? Just because someone worked at the Future Fund? Or even someone who worked at EV?
I’d note that Nick Beckstead was in active litigation with the Alameda bankruptcy estate until that was dismissed last month (Docket No. 93). I think it would be very reasonable for anyone who worked at FTXFF to be concerned about their personal legal exposure here. (I am not opining as to whether exposure exists, only that I would find it extremely hard to fault anyone who worked at FTXFF for believing that they were at risk. After all, Nick already got sued!)
It’s harder to assess exposure for other groups of people. To your question, there may be a difference between mere silence in the face of knowledge/suspicion and somewhat supportive statements/actions in the face of the same knowledge. As a reference point, there was that suit against Tom Brady et al. (I haven’t seen a recent status update). Obviously, the promotional activity is more explicit there than anything I expect an EA-associated person did. However, the theory against Brady et al. may rely more on a generic failure to investigate, while one could perhaps dig for a stronger case against certain EA-related persons based on actual knowledge of suspicious facts. I can only encourage people with concerns to consult their own personal legal counsel.
But at the general community level, I would be hesitant to fault various other individuals for being concerned about potential personal legal exposure. Remember, the pain of legal involvement isn’t limited to actual liability. Merely getting sued is itself painful; discovery is even more painful. Public statements could give someone motivation to try and/or ammo to get past a motion to dismiss for failure to state a viable claim.
As an aside, this isn’t really action-relevant, but insofar as being involved with the legal system is a massive punishment even when the system itself is very likely to eventually conclude you’ve done nothing legally wrong, that seems bad? Here it also seems to be having a knock-on effect of making it harder to find out what actually happened, rather than being painful but producing useful information.
The suit against Brady also sounds like a complete waste of society’s time and money to me.
The legal system doesn’t know ex ante whether you’ve done anything wrong, though. It’s really hard to design a system that balances all the ways a legal system can go wrong. If you don’t give plaintiffs enough leeway to discover evidence for their claims, tortfeasors will be insufficiently deterred from committing torts. If you go too far (the current U.S. system), you incentivize lawfare, harassment, and legalized extortion of some defendants. Imposing litigation costs / attorney fees on the losers often harms the little guy, due to lower ability to shoulder risk and the marginal utility of money. Having parties bear their own costs / fees (generally, the U.S. system) encourages tactics that run up the bill for the other guy. And defendants are more vulnerable to that than plaintiffs, as a general rule.
Maybe. Maybe people would talk but for litigation exposure. Or maybe people are using litigation exposure as a convenient excuse to cover the fact that they don’t want to (and wouldn’t) talk anyway. I will generally take individuals at face value given the difficulty of discerning between the two, though.
Would it be possible to set up a fund that compensates people for the damages they incur from a lawsuit in which they’re ultimately found not liable? That way the EA community could make it less risky for those who haven’t spoken up, and also signal how valuable their information is to the community.
Yes, although it is likely cheaper (in expected costs) and otherwise superior to make a ~unconditional offer to cover at least the legal fees of would-be speakers. The reason is that an externally legible, credible guarantee of legal-expense coverage ordinarily acts as a strong deterrent to bringing a weak lawsuit in the first place. As implied by my prior comment, one of the main tools in the plaintiff’s arsenal is to bully a defendant in a weak case into settling by threatening them with liability for massive legal bills. If you take that tactic away by making the defendant ~insensitive to the size of their legal bills, you should stop a lot of suits from ever being brought. Rather, one would expect would-be plaintiffs to sue only if the expected value of their suit (e.g., the odds of winning and collecting on a judgment, multiplied by judgment size) exceeds the expected costs of litigating to trial (or to a point at which the defendant decides to settle without factoring in legal bills). If you think the odds of plaintiff success at trial are low and/or that the would-be individual defendant doesn’t have a ton of assets to collect from, then the most likely number of lawsuits is zero.[1]
That does tip the balance of abstract fairness toward defendants and away from plaintiffs. But that can be appropriate in some cases. As noted in an earlier comment of mine, personal-liability regimes underproduce public goods because the public goods are enjoyed by the public while the risk is borne by the individual. Litigation immunities (especially “qualified immunity” in the US) can be a controversial topic, but they reflect that kind of rationale. In some cases, society would rather limit or foreclose someone’s ability to collect damages for torts they suffered than squelch the willingness to provide public goods.
One might not want to extend this offer to those one more strongly suspects of having done something they really should be sued for, or to those who seem likely to be sued even without speaking up.
This is why you wouldn’t want to bind yourself to indemnify defendants who lost for their judgments. Doing so would create a much larger target on their backs, as the upside from litigation would no longer be limited to what the plaintiff could collect from the defendant. In the worst-case scenario in which a defendant loses unjustly, there are ways for third parties to protect the defendant without further enriching the plaintiff (e.g., making gifts after bankruptcy discharge, well-designed trusts).
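To make the expected-value logic above concrete, here’s a toy back-of-the-envelope sketch in Python. Every number is invented purely for illustration; the point is only the structure of the would-be plaintiff’s decision, and in particular the nuisance-settlement term that a credible fee guarantee removes.

```python
# Toy model of a would-be plaintiff's filing decision. All numbers are
# invented for illustration; this sketches the deterrence logic above,
# not any real case.

p_win = 0.10             # assumed chance of winning AND collecting a judgment
judgment = 300_000       # assumed collectible judgment size, in dollars
plaintiff_cost = 60_000  # assumed plaintiff's cost to litigate to trial
filing_cost = 10_000     # assumed cost just to file and survive early motions

# Expected value of actually litigating to trial: negative for a weak case.
ev_trial = p_win * judgment - plaintiff_cost  # 30,000 - 60,000 = -30,000

# Without a fee guarantee, a fee-sensitive defendant may pay a nuisance
# settlement just to cap their own legal bills, so filing can still pay.
nuisance_settlement = 40_000  # assumed settlement extracted via cost pressure
ev_without_guarantee = max(ev_trial, nuisance_settlement - filing_cost)

# With a credible fee guarantee, the defendant has no reason to settle to
# avoid bills, leaving the plaintiff the (negative) expected value of trial.
ev_with_guarantee = ev_trial

print(f"EV without guarantee: ${ev_without_guarantee:+,.0f}")  # +30,000 -> suit filed
print(f"EV with guarantee:    ${ev_with_guarantee:+,.0f}")     # -30,000 -> never filed
```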
How big is the legal risk for a high-profile EA person who, say:
knew SBF was an asshole, incautious, and lived in a luxury villa, but had no knowledge of any specific fraud
publicly promoted him as a moral and frugal person
?
Is this automatically tort-worthy, but hard to prove? Laughed out of court no matter what? Does speaking about it publicly extend the court case, so it’s more expensive even if the promoter will ultimately win?
If I am betting $5 of play money on Manifold (meaning: an off-the-cuff gut check with no research), I would generally bet low as long as the person did not ~specifically promote FTX. If there was specific promotion of FTX, you could see claims like these, which go beyond what I’m willing to speculate $5 of play money on at this time.
Here are some off-the-cuff questions I might want to ask (again, no research) if I were thinking about a specific case:
1. Could anyone potentially show that they actually and reasonably relied on the statements that were made to transact business with FTX?
2. How relevant were the statements to a reasonable person who might be considering transacting business with FTX? For example, one might think “Joe told me SBF was frugal despite knowing that was a quarter-truth at best, I wouldn’t have opened an FTX account had he not said that, and it was reasonable for me to rely on SBF’s frugality in deciding whether to open an account” sounds like a stretch. On the other hand, reliance on “Jane had very good reason to believe SBF had done shady and illegal stuff, yet forcefully presented him as a trustworthy paragon of moral virtue on her podcast” starts feeling a little more realistic.
3. How much of the speaker’s content (not just the allegedly false/misleading statements about SBF) was about FTX? If it talked a lot about the advantages of doing business with FTX, etc., then the nexus between the speech and the reliance seems stronger. If the context is SBF as a role model for EA EtGers, that seems a real stretch.
4. Was there a direct or indirect financial benefit to the speaker or a related entity? If SBF gave the speaker (or, more likely, their organization) tons of money, this starts looking more like a ~paid endorsement. And we are generally more willing to put duties on ~paid endorsers than on, say, your comments and mine on this Forum.
Also, questions 3 and 4 get into potential causes of action for assisting with the sale of unregistered securities (cf. page 36 here). It’s unclear to me how an EA leader speaking out would increase their exposure to such a lawsuit.
There’s also the more realist answer to your question, which goes like this: the greater your income and assets, the greater your risk. My parents (on Social Security, which can’t be garnished; their only significant asset is the marital home, which is difficult for creditors to access) probably wouldn’t need to worry. Unless you’re doing it for ideological reasons, why sue if you can’t collect more than what litigation costs?
(understanding you are a guy betting $5 on Manifold)
re: #3. Does this get blurred if the company made an explicit marketing push about what a great guy their CEO was? I imagine that still wouldn’t affect statements on him as a role model,[1] but it might matter if they made many positive statements about him on a platform aimed at the general public.
legally
If it wasn’t a crypto-focused platform (e.g., a “Joe’s Crypto Podcast”), and there was no particular reason to know or believe that the company had used, or was going to use, something the person said as part of its marketing campaign, then it doesn’t affect my $5 Manifold bet.
Thanks, I appreciate all this info.
I guess I kinda want to say fiat justitia ruat caelum (“let justice be done though the heavens fall”) here 🤷
You folks impress me! But seriously, that’s a big ask.
I’m a pretty big fan of Nate’s public write-up on his relationship to Sam and FTX. Though, sure, this is going to be scarier for people who were way more involved and who did stuff that Twitter mobs can more easily get mad about.
This is part of why the main thing I’m asking for is a professional investigation, not a tell-all blog post by every person involved in this mess (though the latter are great too). An investigation can discover useful facts and share them privately, and its public write-up can accurately convey the broad strokes of what happened, and a large number of the details, while taking basic steps to protect the innocent.