Quick Update on Leaving the Board of EV
A brief and belated update: When I resigned from the board of EV US last year, I was planning on writing about that decision. But I ultimately decided against doing that for a variety of reasons, including that it was very costly to me, and I believed it wouldn’t make a difference. However, I want to make it clear that I resigned last year due to significant disagreements with the board of EV and EA leadership, particularly concerning their actions leading up to and after the FTX crisis.
While I certainly support the board’s decision to pay back the FTX estate, spin out the projects as separate organizations, and essentially disband EV, I continue to be worried that the EA community is not on track to learn the relevant lessons from its relationship with FTX. Two things that I think would help (though I am not planning to work on either myself):
EA needs an investigation, done externally and shared publicly, on mistakes made in the EA community’s relationship with FTX.[1] I believe there were extensive and significant mistakes made which have not been addressed. (In particular, some EA leaders had warning signs about SBF that they ignored, and instead promoted him as a good person, tied the EA community to FTX, and then were uninterested in reforms or investigations after the fraud was revealed). These mistakes make me very concerned about the amount of harm EA might do in the future.
EA also needs significantly more clarity on who, if anyone, “leads” EA and what they are responsible for. I agree with many of Will MacAskill’s points here and think confusion on this issue has indirectly resulted in a lot of harm.
CEA is a logical place to house both of these projects, though I also think leaders of other EA-affiliated orgs, attendees of the Meta Coordination Forum, and some people at Open Philanthropy would also be well-suited to do this work. I continue to be available to discuss my thoughts on why I left the board, or on EA’s response to FTX, individually as needed.
[1] Although EV conducted a narrow investigation, the scope was far more limited than what I’m describing here, primarily pertaining to EV’s legal exposure, and most results were not shared publicly.
Here’s a post with me asking the question flat out: Why hasn’t EA done an SBF investigation and postmortem?
This seems like an incredibly obvious first step from my perspective, not something I’d have expected a community like EA to be dragging its heels on years after the fact.
We’re happy to sink hundreds of hours into fun “criticism of EA” contests, but when the biggest disaster in EA’s history manifests, we aren’t willing to pay even one investigator to review what happened so we can get the facts straight, begin to rebuild trust, and see if there’s anything we should change in response? I feel like I’m in crazytown; what the heck is going on?
Update Apr. 4: I’ve now spoken with another EA who was involved in EA’s response to the FTX implosion. To summarize what they said to me:
They thought that the lack of an investigation was primarily due to general time constraints and various exogenous logistical difficulties. At the time, they thought that setting up a team who could overcome the various difficulties would be extremely hard for mundane reasons such as:
thorough, even-handed investigations into sensitive topics are very hard to do (especially if you start out low-context);
this is especially true when they are vaguely scoped and potentially involve a large number of people across a number of different organizations;
“professional investigators” (like law firms) aren’t very well-suited to do the kind of investigation that would actually be helpful;
legal counsel was generally advising people strongly against talking about FTX matters at all;
various old confidentiality agreements would make it difficult to discuss what happened in some relevant instances (e.g. meetings that had Chatham House Rules);
it would be hard to guarantee confidentiality in the investigation when info might be subpoenaed or something like that;
and a general plethora of individually-surmountable but collectively-highly-challenging obstacles.
They flagged that at the time, most people involved were already in an exceptionally busy and difficult time, and so had less bandwidth for additional projects than usual.
A caveat here is that the EV board did block some people from speaking publicly during the initial investigation into EV’s legal situation. That investigation ended back in the summer of 2023.
Julia Wise and Ozzie Gooen wrote on the EA Forum that this is a potentially useful project for someone to take on. As far as the person I spoke to knew, this isn’t something any EA leadership did or would try to stop, and their impression was that Julia and Ozzie did indeed try to investigate what reforms should happen, though the person I spoke to didn’t follow that situation closely.
The person I spoke to didn’t want to put words in the mouth of EA leaders, and their information is mostly from ~1 year ago and might be out of date. But to the extent some people aren’t currently champing at the bit to make this happen, their impression (with respect to the EA leaders they have interacted with relatively extensively) is that this has little to do with a desire to protect the reputation of EA or of individual EAs.
Rather, their impression is that for a lot of top EA leaders, this whole thing is a lot less interesting, because those EAs think they know what happened (and that it’s not that interesting). So the choice is like “should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?” And maybe they are underrating how interesting others would find it, but that made the whole idea not so important-seeming (at least in the early days after FTX’s collapse, relative to all the other urgent and confusing things swirling around in the wake of the collapse) from their perspective.
I vouch for this person as generally honest and well-intentioned. I update from the above that community leaders are probably less resistant to doing some kind of fact-finding inquiry than I thought. I’m hoping that this take is correct, since it suggests to me that it might not be too hard to get an SBF postmortem to happen now that the trial and the EV legal investigation are both over (and now that we’re all talking about the subject in the first place).
If the take above isn’t correct, then hopefully my sharing it will cause others to chime in with further objections, and I can zigzag my way to understanding what actually happened!
I shared the above summary with Oliver Habryka, and he said:
I’ll also share Ozzie Gooen’s Twitter take from a few days ago:
And, some corrections to my earlier posts about this:
I said that “there was a narrow investigation into legal risk to Effective Ventures last year”, which I think may have overstated the narrowness of the investigation a bit. My understanding is that the investigation’s main goal was to reduce EV’s legal exposure, but to that end the investigation covered a somewhat wider range of topics (possibly including things like COI policies), including things that might touch on broader EA mistakes and possible improvements. But it’s hard to be sure about any of this because details of the investigation’s scope and outcomes weren’t shared, and it doesn’t sound like they will be.
I said that Julia Wise had “been calling for the existence of such an investigation”; Julia clarifies on social media, “I would say I listed it as a possible project rather than calling for it exactly.”
Specifically, Julia Wise, Ozzie Gooen, and Sam Donald co-wrote a November 2023 blog post that listed “comprehensive investigation into FTX<>EA connections / problems” as one of four “projects and programs we’d like to see”, saying “these projects are promising, but they’re sizable or ongoing projects that we don’t have the capacity to carry out”. They also included this idea in a list of Further Possible Projects on EA Reform.
(I’m going to wrap up a few disparate threads together here, and this will probably be my last comment on this post, modulo a reply for clarification’s sake. Happy to discuss further with you, Rob, or anyone else via DMs/Forum Dialogue/whatever.)
(to Rob & Oli—there is a lot of inferential distance between us and that’s ok, the world is wide enough to handle that! I don’t mean to come off as rude/hostile and apologies if I did get the tone wrong)
Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet… I can’t help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don’t like the EA Forum? Because they’re scared of retaliation? It feels like this would be useful and important information for the community to know.
I’m also not sure what to make of Habryka’s response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but his approach does seem to me to come off as a “social experiment in radical honesty and perfect transparency”, which is a vibe I often get from the Lightcone-adjacent world. And, with all due respect, I’m not really interested in that whole scene. I’m more interested in questions like:
Were any senior EAs directly involved in the criminal actions at FTX/Alameda?
What warnings were given about SBF to senior EAs before the FTX blowup, particularly around the 2018 Alameda blowup, as recounted here?
If these warnings were ignored, what prevented people from deducing that SBF was a bad actor?[1]
Critically, if these warnings were accepted as true, who decided to keep this a secret, to suppress it from the community at large, and not to act on it?
Why did SBF end up with such a dangerous set of beliefs about the world? (I think they’re best described as ‘risky beneficentrism’ - see my comment here and Ryan’s original post here)
Why have the results of these investigations, or some legally-cleared version, not been shared with the community at large?
Do senior EAs have any plan to respond to the hit to EA-morale as a result of FTX and the aftermath, along with the intensely negative social reaction to EA, apart from ‘quietly hope it goes away’?
Writing it down, 2.b. strikes me as what I mean by ‘naive consequentialism’ if it happened. People had information that SBF was a bad character who had done harm, but calculated (or assumed) that he’d do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn’t really seem pertinent to me here, as interesting as the philosophical discussion can be.
tl;dr—I think there’s a difference between a discussion about what norms EA ‘should’ have, or senior EAs should act by, especially in the post-FTX and influencing-AI-policy world, and the ‘minimal viable information-sharing’ that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming that, and I applaud you/Oli pushing for it, but sometimes I wish you would both also be less vague too. Some of us don’t have the EA history and context that you both do!
epilogue: I hope Rebecca is doing well. But this post & all the comments makes me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post FTX. Wounds might have faded, but they haven’t healed 😞
Not that people should have guessed the scale of his wrongdoing ex-ante, but was there enough to start to downplay and disassociate?
I’m not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
(This seems almost definitionally true: if something was clear evidence of fraud then it would just be described as “clear evidence of fraud”; describing something as a “rumor” seems to almost definitionally imply a substantial probability that the rumor is false or at least unclear or hard to update on.[1])
E.g. If I imagine a bank whose primary fraud detection mechanism was “hope the executives hear rumors of malfeasance,” I would not feel very satisfied with their risk management. If fraud did occur, I wouldn’t expect that their primary process improvement to be “see if the executives could have updated from rumors better.” I am therefore somewhat confused by how much interest there seems to be in investigating how well the rumor mill worked for FTX.[2]
To be clear: I assume that the rumor mill could function more efficiently, and that there’s probably someone who heard “SBF is often overconfident” or whatever and could have updated from that information more accurately than they did. (If you’re interested in my experience, you can read my comments here.) I’m just very skeptical that a new and improved rumor mill is substantial protection against fraud, and don’t understand what an investigation could show me that would change my mind.[3] Moreover, even if I somehow became convinced that rumors could have been effective in the specific case of FTX, I will still likely be skeptical of their efficacy in the future.
Relatedly, I’ve heard people suggest that 80k shouldn’t have put SBF on their website given some rumors that were floating around. My take is that the base rate of criminality among large donors is high, having a rumor mill does not do very much to lower that rate, and so I expect to believe that the risk will be relatively high for high net worth people 80k puts on the front page in the future, and I don’t need an investigation to tell me that.
To make some positive suggestions about things I could imagine learning from/finding useful:
I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.[4]
At least, it seems like this should be our first port of call. Maybe we can’t actually implement industry best practices around risk management, but it feels like we should at least try before giving up and doing the rumor mill thing.
My understanding is that a bunch of work has gone into making regulations so that publicly traded companies are less likely to commit fraud, and these regulations are somewhat effective, but they are so onerous that many companies are willing to stay private and forgo billions of dollars in investment just to not have to deal with them. I suspect that EA might find itself in a similarly unfortunate situation where reducing risks from “prominent individuals” requires the individuals in question to do something so onerous that no one is willing to become “prominent.” I would be excited about research into a) whether this is in fact the case, and b) what to do about it, if so.
Some people probably disagree with my claim that rumor mills are ineffective. If so, research into this would be useful. E.g. it’s been on my backlog for a while to write up a summary of Why They Do It, or a fraud management textbook.
Why They Do It is perhaps particularly useful, given that one of its key claims is that, unlike with blue-collar crime, character traits don’t correlate well with propensity to commit white-collar crimes, and I think this may be a crux between me and people who disagree with me.
All that being said, I think I’m weakly in favor of someone more famous than me[5] doing some sort of write up about what rumors they heard, largely because I don’t expect the above to convince many people, and I think such a write up will mostly result in people realizing that the rumors were not very motivating.
Thanks to Chana Messinger for this point
One possible reason for this is that people are aiming for goals other than detecting fraud, e.g. they are hoping that rumors could also be used to identify other types of misconduct. I have opinions about this, but this comment is already too long so I’m not going to address it here.
e.g. I appreciate Nate writing this, but if in the future I learned that a certain person has spoken to Nate, I’m not going to update my beliefs about the likelihood of them committing financial misconduct very much (and I believe that Nate would agree with this assessment)
Part of why I haven’t prioritized this is that there aren’t a lot of earning to give companies anymore, but I think it’s still potentially worth someone spending time on this
I have done my own version of this, but my sense is that people (very reasonably) would prefer to hear from someone like Will
I feel like “people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed” is being classed as “rumour” here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word “rumour” conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if any one in leadership argued “oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside”. That’s a signal someone is a bad leader in my view, which is useful knowledge going forward. (I’m not saying it is instant proof they should never hold leadership positions ever again: I think quite a lot of people might have said something like that in similar circumstances. But it is a bad sign.)
I agree with this.
I don’t really agree with this. Everyone has some probability of turning out to be dodgy; it matters exactly how strong the available evidence was. “This EA leader writes people off immediately when they have even a tiny probability of being untrustworthy” would be a negative update about the person’s decision-making too!
I took that second quote to mean ‘even if Sam is dodgy it’s still good to publicly back him’
I meant something in between “is” and “has a non-zero chance of being”, like assigning significant probability to it (obviously I didn’t have an exact number in mind), and not just for base rate reasons about believing all rich people to be dodgy.
Huh, the same reason you cite for why you are not interested in doing an investigation is one of the key reasons why I want an investigation.
It seems to me that current EA leadership is basically planning to continue a “our primary defense against bad actors is the rumor mill” strategy. Having an analysis of how that strategy did not work, and in some sense can’t work for things like this seems like it’s one of the things that would have the most potential to give rise to something better here.
Interesting! I’m glad I wrote this then.
Do you think “[doing an investigation is] one of the things that would have the most potential to give rise to something better here” because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives to not be adopted, even if found?
My current sense is that there is no motivation to find an alternative because people mistakenly think it works fine enough and so there is no need to try to find something better (and also in the absence of an investigation and clear arguments about why the rumor thing doesn’t work, people probably think they can’t really be blamed if the strategy fails again)
Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).
Is that correct? If so, I’m interested in understanding why – usually if you want to do a thing, the best approach is to just do that thing.
It seems to me that a case study of how exactly FTX occurred, and where things failed, would be among one of the best things to use to figure out what thing to do instead.
Currently the majority of people who have an interest in this are blocked by not really knowing what worked and didn’t work in the FTX case, and so probably will have trouble arguing compellingly for any alternative, and also lack some of the most crucial data. My guess is you might have the relevant information from informal conversations, but most don’t.
I do think also just directly looking for an alternative seems good. I am not saying that doing an FTX investigation is literally the very best thing to do in the world, it just seems better than what I see EA leadership spending their time on instead. If you had the choice between “figure out a mechanism detecting and propagating information about future adversarial behavior” and “do an FTX investigation”, I would feel pretty great about both, and honestly don’t really know which one I would prefer. As far as I can tell neither of these things is seeing much effort invested into it.
Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by “not really knowing what worked and didn’t work in the FTX case” – even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldn’t rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]
I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, and getting up to speed with it is time-consuming, and also ~all of the practices are based under assumptions like “the risk manager has some amount of formal authority” which aren’t true in EA.
(And to be clear: I think these are very big blockers! They just aren’t resolved by doing an investigation.)
Or maybe more specifically: would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud but were just implemented poorly last time, I would love to hear that!
Again, under the assumption that your goal is fraud detection. Investigations may be more or less useful for other goals.
It seems like a goal of ~”fraud detection” not further specified may be near the nadir of utility for an investigation.
If you go significantly narrower, then how EA managed (or didn’t manage) SBF fraud seems rather important to figuring out how to deal with the risk of similar fraudulent schemes in the future.[1]
If you go significantly broader (cf. Oli’s reference to “detecting and propagating information about future adversarial behavior”), the blockers you identify seem significantly less relevant, which may increase the expected value of an investigation.
My tentative guess is that it would be best to analyze potential courses of action in terms of their effects on the “EA immune system” at multiple points of specificity, not just close relations of a specific known pathogen (e.g., SBF-like schemes), a class of pathogens (e.g., “fraud”), or pathogens writ large (e.g., “future adversarial behavior”).
Given past EA involvement with crypto, and the base rate of not-too-subtle fraud in crypto, the risk of similar fraudulent schemes seems more than theoretical to me.
I think that would be worth exploring. I suspect you are correct that full Sarbanes-Oxley treatment would be onerous.
On the other hand, I don’t see how a reasonably competent forensic accountant or auditor could have spent more than a few days at FTX (or at Madoff) without having a stroke. Seeing the commingled bank accounts would have set alarm bells ringing in my head, at least. (One of the core rules of legal ethics is that you do not commingle your money with that of your clients, because experience teaches that all sorts of horrible things can and often do happen.)
I certainly don’t mean to imply that fraud against sophisticated investors and lenders is okay, but there is something particularly bad about straight-up conversion of client funds like at FTX/Madoff. At least where hedge funds and big banks are concerned, they have the tools and access to protect themselves if they so wish. Moreover, the link between the fraud and the receipt of funds is particularly strong in those cases—Enron was awash in fraud, but it wouldn’t be fair to say that a charity that received a grant from Enron at certain points in time was approximately and unknowingly in possession of stolen funds.
Thankfully, procedures meant to ferret out sophisticated Enron-style fraud shouldn’t be necessary to rule out most straight-up conversion schemes. Because of the risk that someone will rat the fraudsters out, my understanding is that the conspiracy usually is kept pretty small in these sorts of frauds. That imposes a real limit on how well the scheme will withstand even moderate levels of probing with auditor-level access.
If you want a reference class of similar frauds, here is the prosecution’s list of cases (after the Booker decision in 2005) with losses > $100MM and fraud type of Ponzi scheme, misappropriation, or embezzlement:
For example, one might be really skeptical if auditing red flags associated with prior frauds are present. Madoff famously had his audits done by a two-person firm that reported not conducting audits. FTX was better, but apparently still used “questionable” third-tier firms that “do audit a few public companies but none of the size or complexity of FTX.” Neither “the Armanino nor the Prager Metis audit reports for 2021 provides an opinion on the FTX US or FTX Trading internal controls over accounting and financial reporting”—and the audit reports tell the reader as much (same source). The article, written by an accounting lecturer at Wharton, goes on to describe other weirdness in the audit reports. Of course, that’s not foolproof—Enron had one of the then-Big Five accounting firms, for instance.
Catching all fraud is not realistic . . . for anyone, much less a charitable social movement. But some basic checks to make fairly sure that the major or whole basis for the company’s or the individual’s wealth is not a fraudulent house of cards seem potentially attainable at a reasonable burden level.
I guess the question I have is, if the fraud wasn’t noticed by SBF’s investors, who had much better access to information and incentives to find fraud, why would anyone expect the recipients of his charitable donations to notice it? If it was a failure of the EA movement not to know that FTX was fraudulent, isn’t it many times more of a failure that the fraud was unnoticed by the major sophisticated investment firms that were large FTX shareholders?
I think investing in FTX was genuinely a good idea, if you were a profit maximizer, even if you strongly suspected the fraud. As Jason says, as an investor losing money due to fraud isn’t any worse than losing money because a company fails to otherwise be profitable, so even assigning 20%-30% probability to fraud for a high-risk investment like FTX where you are expecting >2x returns in a short number of years will not make a huge difference to your bottomline.
In many ways you should expect being the kind of person who is willing to commit fraud to be positively associated with returns, because doing illegal and fraudulent things means that the people who run the organization take on massive risk where you are not exposed to the downside, but you are exposed to the upside. It’s not worth it to literally invest in fraud, but it is worth it to invest in the kind of company where the CEO is willing to go to prison, since you don’t really have any risk of going to prison, but you get the upside of the legal risk they take on (think of Uber blatantly violating laws until they established a new market, which probably exposed leadership to substantial legal risk, but investors just got to reap the profits).
I wasn’t suggesting we should expect this fraud to have been found in this case with the access that was available to EA sources. (Perhaps the FTXFF folks might have caught the scent if they were forensic accountants—but they weren’t. And I’m not at all confident on that in any event.) I’m suggesting that, in response to this scandal, EA organizations could insist on certain third-party assurances in the future before taking significant amounts of money from certain sources.
Why the big money was willing to fork over nine figures each to FTX without those assurances is unclear to me. But one observation: as far as a hedge fund or lender is concerned, a loss due to fraud is no worse than a loss due to the invested-in firm being outcompeted, making bad business decisions, experiencing a general crypto collapse, getting shut down for regulatory issues, or any number of scenarios that were probably more likely ex ante than a massive conversion scheme. In fact, such a scheme might even be less bad to the extent that the firm thought it might get more money back in a fraud loss than from some ordinary-business failure modes. Given my understanding that these deals often move very quickly, and the presence of higher-probability failure modes, it is understandable that investors and lenders wouldn’t have prioritized fraud detection.
In contrast, charitable grantees are much more focused in their concern about fraud; taking money from a solvent, non-fraudulent business that later collapses doesn’t raise remotely the same ethical, legal, operational, and reputational concerns. Their potential exposure in that failure mode is likely several times larger than that of the investors/lenders after all non-financial exposures are considered. They are also not on a tight time schedule.
Re your footnote 4, CE/AIM are starting an earning-to-give incubation program, so that is likely to change pretty soon
Oh good point! That does seem to increase the urgency of this. I’d be interested to hear if CE/AIM had any thoughts on the subject.
Will MacAskill waited until April to speak fully and openly, on the extra-cautious advice of legal counsel. If that period has ended to the point that Will could speak to the matter of the FTX collapse, and the before and after, as he had long wanted to, surely almost everyone else could do the same now. The barrier or objection of not talking on the strong advice of legal counsel seems like it’d be null for most people at this point.
Edit: in the 2 hours since I first made this comment, I’ve read most of the comments with arguments both for and against why someone should begin pursuing at least some parts of what could constitute an overall investigation as has been suggested. Finding the arguments for doing so far better than the arguments against, I have now decided to personally begin pursuing the below project. Anyone interested in helping or supporting me in that vein, please reply to this comment, or contact me privately. Any number of messages I receive along the lines of “I think this is a bad idea, I disagree with what you intend to do, I think this will be net negative, please don’t do this”, etc., absent other arguments, are very unlikely to deter me. On the contrary, if anything, such substanceless objections may motivate me to pursue this end with more vigour.
I’m not extremely confident I could complete an investigation of the whole of the EA community’s role in this regard at the highest level all by myself, though I am now offering to investigate or research parts of this myself. Here’s some of what I could bring to the table.
I’d be willing to do some relatively thorough investigation from a starting point of being relatively high-context. For those who wouldn’t think I’d be someone who knows a lot of context here, this short form post I made a while ago could serve as proof of concept I have more context than you might expect. I could offer more information, or answer more questions others have, in an attempt to genuinely demonstrate how much context I have.
I have far fewer time constraints than perhaps most individuals in the EA community who might be willing or able to contribute to some aspect of such an investigation. Already on my own time, I occasionally investigate issues in and around EA by myself. I intend to do so more in the future. I’d be willing to research more specific issues on my own time if others were to provide some direction. Some of what I might pursue further may be related to FTX anyway, without urging from others.
I’d be willing to volunteer a significant amount of time doing so, as I’m not currently working full-time and may not be working full-time in the foreseeable future. If the endeavour required a certain amount of work or progress achieved within a certain time frame, I may need to be hired in some capacity to complete some of the research or investigating. I’d be willing to accept such an opportunity as well.
Since I have virtually no conflicts of interest, there’s almost nothing anyone powerful in or around EA could hold over me in an attempt to stop me from investigating.
I’m champing at the bit to make this happen probably about as much as anyone.
I would personally find the contents of any aspect of such an investigation to be extremely interesting and motivating.
I wouldn’t fear any retaliation whatsoever. Some attempts or threats to retaliate against me could indeed be advantageous for me, as I am confident they would fail to achieve their desired goals, and would thus serve as evidence to others that any further such attempts would be futile wastes of effort.
I am personally in semi-regular contact or have decent rapport with some whistleblowers or individuals who retain private information about events related to the whole saga of FTX dating back to 2018. They, or their other peers who’ve also exited the EA community in the last several years, may not be willing to talk freely with most individuals in EA who might participate in such an investigation. I am very confident at least some of them would be willing to talk to me.
I’m probably less personally nervous about speaking up or out about anything EA-related (i.e., more willing to be radically transparent and honest) than most people who have continuously participated in the EA community for over a decade. I suspect that includes even you and Oliver Habryka, who have already been noted in other comments here as among those in that cohort who are the least nervous. Notably, that cohort may at this point comprise no more than a few hundred people.
The goal of any such investigation that I’d be most motivated to accomplish would be producing common-knowledge documents that help as large a subset of the EA community as possible, if not the whole community, learn what happened and what could be done differently in the future. I’d be much more willing to share such a document widely than most other people who might be willing or able to produce one.
I haven’t heard any arguments against doing an investigation yet, and I could imagine folks might be nervous about speaking up here. So I’ll try to break the ice by writing an imaginary dialogue between myself and someone who disagrees with me.
Obviously this argument may not be compelling compared to what an actual proponent would say, and I’d guess I’m missing at least one key consideration here, so treat this as a mere conversation-starter.
Hypothetical EA: Why isn’t EV’s 2023 investigation enough? You want us to investigate; well, we investigated.
Me: That investigation was only investigating legal risk to EV. Everything I’ve read (and everything I’ve heard privately) suggests that it wasn’t at all trying to answer the question of whether the EA community made any moral or prudential errors in how we handled SBF over the years. Nor was it trying to produce common-knowledge documents (either private or public) to help any subset of EA understand what happened. Nor was it trying to come up with any proposal for what we should do differently (if anything) in the future.
I take it as fairly obvious that those are all useful activities to carry out after a crisis, especially when there was sharp disagreement, within EA leadership, long before the FTX implosion, about how we should handle SBF.
Hypothetical EA: Look, I know there’s been no capital-I “Investigation”, but plenty of established EAs have poked around at dinner parties and learned a lot of the messy complicated details of what happened. My own informal poking around has convinced me that no EAs outside FTX leadership did anything super evil or Machiavellian. The worst you can say is that they muddled along and had miscommunications and brain farts like any big disorganized group of humans, and were a bit naively over-trusting.
Me: Maybe! But scattered dinner conversation with random friends and colleagues, with minimal following up or cross-checking of facts, isn’t the best medium for getting an unbiased picture of what happened. People skew the truth, withhold info, pass the blame ball around. And you like your friends, so you’re eager to latch on to whatever story shows they did an OK job.
Perhaps your story is true, but we shouldn’t be scared of checking, applying the same level of rigor we readily apply to everything else we’re doing.
The utility of this doesn’t require that any EAs be Evil. A postmortem is plenty useful in a world where we were “too trusting” or were otherwise subject to biases in how we thought, or how we shared information and made group decisions — so we can learn from our mistakes and do better next time.
And if we’ve historically been “too trusting”, it seems doubly foolish to err on the side of trusting every individual, institution, and process involved in the EA-SBF debacle, and write them a preemptive waiver for all the errors we’re studiously avoiding checking whether they’ve made.
Hypothetical EA: Look, there’s just no reason to use SBF in particular for your social experiment in radical honesty and perfect transparency. It was to some extent a matter of luck that SBF succeeded as well as he did, and that he therefore had an opportunity to cause so much harm. If there were systemic biases in EA that caused us to err here, then those same biases should show up in tons of other cases too.
The only reason to single out the SBF case in particular and give it 1000x more attention than everything else is that it’s the most newsworthy EA error.
But the main effect of this is to inflate and distort minor missteps random EA decision-makers made, bolstered by the public’s hindsight bias and cancel culture and by journalists’ axe-grinding, so that the smallest misjudgments an EA makes look like horrific unforgivable sins.
SBF is no more useful for learning about EA’s causal dynamics than any other case (and in fact SBF is an unusually bad place to try to learn generalizable lessons, because the sky-high stakes will cause people to withhold key evidence and/or bend the truth toward social desirability); it’s only useful as a bludgeon, if you came into all this already sure that EA is deeply corrupt (or that particular individuals or orgs are), and you want to summon a mob to punish those people and drive them from the community.
(Or, alternatively, if you’re sad about EA’s bad reputation and you want to find scapegoats: find the specific Bad EAs and drive them out, to prove to the world that you’re a Good EA and that EA-writ-large is now pure.)
Me: I find that argument somewhat compelling, but I still think an investigation would make sense.
First, extreme cases can often illustrate important causal dynamics that are harder to see in normal cases. E.g., if EA has a problem like “we fudge the truth too much”, this might be hard to detect in low-stakes cases where people have less incentive to lie. People’s behavior when push comes to shove is important, given the huge impact EA is trying to have on the world; and SBF is one huge instance where push came to shove and our character was really tested.
And, yes, some people may withhold information more because of the high stakes. But others will be much more willing to spend time on this question because they recognize it as important. If nothing else, SBF is a Schelling point for us all to direct our eyes at the same thing simultaneously, and see if we can converge on some new truths about the world.
Second, and moving away from abstractions to talk about the specifics of this case: My understanding is that a bunch of EAs tried to warn the community that SBF was extremely shady, and a bunch of other EAs apparently didn’t believe the warnings, or didn’t want those warnings widely shared even though they believed them.
“SBF is extremely shady” isn’t knowledge that FTX was committing financial fraud, and shouting “SBF is extremely shady” from the hills wouldn’t necessarily have prevented the fraud from happening. But there’s some probability it might have been the tipping point at various important junctures, as potential employees and funders and customers weighed their options. And even if it wouldn’t have helped at all in this case, it’s good to share that kind of information in case it helps the next time around.
I think it would be directly useful to know what happened to those warnings about SBF, so we can do better next time. And I think it would also help restore a lot of trust in EA (and a lot of internal ability for EAs to coordinate with each other) if people knew what happened — if we knew which thought leaders or orgs did better or worse, how processes failed, how people plan to do better next time.
I recognize that this will be harder in some ways with journalists and twitter users breathing down your necks. And I recognize that some people may suffer unfair scrutiny and criticism because they were in the wrong place at the wrong time. To some extent I just think we need to eat that cost; when you’re playing chess with the world and making massively impactful decisions, that comes with some extra responsibility to take a rare bit of unfair flack for the sake of being able to fact-find and orient at all about what happened. Hopefully the fact that some time has passed, and that we’re looking at a wide variety of people and orgs rather than a specific singled-out individual, will mitigate this problem.
If FTX were a total bolt out of the blue, that would be one thing. But apparently there were rather a lot of EAs who thought SBF was untrustworthy and evil, and had lots of evidence on hand to cite, at the exact same time 80K and Will and others were using their megaphones to broadcast that SBF is an awesome EA hero. I don’t know that 80K or Will in particular are the ones who fucked up here, but it seems like somebody fucked up in order for this perception gap to exist and go undiscussed.
I understand people having disagreements about someone’s character. Hindsight bias is a thing, and I’m sure people had reasons at the time to be skeptical of some of the bad rumors about SBF. But I tend to think those disagreements should be things that are argued about rather than kept secret. Especially if the secret conversations empirically have not resulted in the best outcomes.
Hypothetical EA: I dunno, this whole “we need a public airing out of our micro-sins in order to restore trust” thing sounds an awful lot like the exact “you’re looking for scapegoats” thing I was warning about.
You’re fixated on this idea that EAs did something Wrong and need to be chastised and corrected, like we’re perpetrators alongside SBF. On the contrary, I claim that the non-FTX EAs who interacted the most with Sam should mostly be thought of as additional victims of Sam: people who were manipulated and mistreated, who often saw their livelihoods threatened as a result and their life’s work badly damaged or destroyed.
The policies you’re calling for amount to singling out and re-victimizing many of Sam’s primary victims, in the name of pleasant-sounding abstractions like Accountability — abstractions that have little actual consequentialist value in this case, just a veneer of “that sounds nice on paper”.
Me: It’s unfortunately hard for me to assess the consequentialist value in this case, because no investigation has taken place. I’ve gestured at some questions I have above, but I’m missing most of the pieces about what actually happened, and some of the unknown unknowns here might turn out to swamp the importance of what I know about. It’s not clear to me that you know much more than me, either. Rather than pitting your speculation against mine, I’d rather do some actual inquiry.
Hypothetical EA: I think we already know enough, including from the legal investigation into Sam Bankman-Fried and who was involved in his conspiracy, to make a good guess that re-victimizing random EAs is not a useful way for this movement to spend its time and energy. The world has many huge problems that need fixing, and it’s not as though EA’s critics are going to suddenly conclude that EAs are Good After All if we spill all of our dirty laundry. What will actually happen is that they’ll cherry-pick and distort the worst-sounding tidbits, while ignoring all the parts you hoped would be “trust-restoring”.
Me: Some EA critics will do that, sure. But there are plenty of people, both within EA and outside of it, who legitimately just want to know what happened, and will be very reassured to have a clearer picture of the basic sequence of events, which orgs did a better or worse job, which processes failed or succeeded. They’ll also be reassured to know that we know what happened, vs. blinding ourselves to the facts and to any lessons they might contain.
Or maybe they’ll be horrified because the details are actually awful (ethically, not legally). Part of being honest is taking on the risk that this could happen too. That’s just not avoidable. If we’re not the sort of community that would share bad stuff if it were true, then people are forced to be that much more worried that we’re in fact hiding a bunch of bad stuff.
Hypothetical EA: I just don’t think there’s that much crucial information EA leaders are missing, from their informal poking around. You can doubt that, but I don’t think a formal investigation would help much, since people who don’t want to speak now will (if anything) probably be even more tight-lipped in the face of what looks like a witch-hunt.
You say that EAs have a responsibility to jump through a bunch of transparency hoops. But whether or not you agree with my “EAs are victims” frame: EAs don’t owe the community their lives. If you’re someone who made personal sacrifices to try to make the world a better place, that doesn’t somehow come with a gotcha clause where you now have incurred a huge additional responsibility that we’d never impose on ordinary private citizens, to dump your personal life into the public Internet.
Me: I don’t necessarily disagree with that, as stated. But I think particular EAs are signing up for some extra responsibility, e.g., when they become EA leaders and ask for a lot of trust on the part of their community.
I wouldn’t necessarily describe that responsibility as “huge”, because I don’t actually think a basic investigation into the SBF thing is that unusual or onerous.
I don’t see myself as proposing anything all that radical here. I’m even open to the idea that we might want to redact some names and events in the public recounting of what happened, to protect the innocent. I don’t see anything weird about that; what strikes me as puzzling is the complete absence of any basic fact-finding effort (beyond the narrow-scope EV legal inquiry).
And what strikes me as doubly puzzling is that there hasn’t even been a public statement that CEA and others are not planning to look into this at all, nor has there been any public argument for this policy — whence this dialogue. As though EAs are just hoping we’ll quietly forget about this pretty major omission, so they don’t have to say anything potentially controversial. That I don’t really respect; if you think this investigation is a bad idea, do the EA thing and make your case!
Hypothetical EA: Well, hopefully my arguments have given you some clues about the (non-nefarious) reasons why EAs might want to quietly let this thing die, rather than giving a big public argument for letting it die. In addition to the obvious fact that folks are just very busy, and more time spent on this means less time spent on a hundred other things.
Me: And hopefully my arguments have helped remind some folks that things are sometimes worth doing even when they’re hard.
All the arguments in the world don’t erase the fact that at the end of the day, we have a choice between taking risks for the sake of righting our wrongs and helping people understand what happened, versus hiding from the light of day and quietly hoping that no one calls us out for retreating from our idealistic-sounding principles.
We have a choice between following the path of least resistance into ever-murkier, ever-more-confusing, ever-less-trusting waters; or taking a bold stand and doing whatever we can to give EAs and non-EAs alike real insight into what happened, and a real capacity to adjust course if and only if some course-changing is warranted.
There are certainly times when the boring, practical, un-virtuous-sounding option really is the right option. I don’t think this is one of those times; I think we need to be better than that this one time, or we risk losing by a thousand cuts some extremely precious things that used to be central to what made EA EA.
… And if you disagree with me about all that, well, tell me why I’m wrong.
I think I agree with Hypothetical EA that we basically know the broad picture.
Probably nobody was actually complicit or knew there was fraud; and
Various people made bad judgement calls and/or didn’t listen to useful rumours about Sam
I guess I’m just… satisfied with that? You say:
.. why? None of this seems that important to me? Most of it seems like a matter for the person/org in question to reflect/improve on. Why is it important for “plenty of people” to learn this stuff, given we already know the broad picture above?
I would sum up my personal position as:
We got taken for a ride, so we should take the general lesson to be more cautious of charismatic people with low scruples, especially bearing large sums of money.
If you or your org were specifically taken for a ride you should reflect on why that happened to you and why you didn’t listen to the people who did spot what was going on.
EA is compelling insofar as it is about genuinely making the world a better place, i.e., we care about the actual consequences. Just because there are probably no specific people/processes to blame doesn’t mean we should be satisfied with how things are.
There is now decent evidence that EA might cause considerable harm in the world, so we should be strongly motivated to figure out how to change that. Maybe EA’s failures are just the cost of ambition and agency, and come along with the good it does, but I think that’s both untrue and worryingly defeatist.
I care about the end result of all of this, and the fact that we’re okay with some serious Ls happening (and not being willing to fix the root cause of those errors) is concerning.
Random idea:
Maybe we should—after this question of investigation or not has been discussed in more detail—organize community-wide vote on whether there should be an investigation or not?
It’s easy to vote for something you don’t have to pay for. If we do anything like this, an additional fundraiser to pay for it might be appropriate.
Knowing what people think is useful, especially if it’s a non-anonymous poll aimed at sparking conversations, questions, etc. (One thing that might help here is to include a field for people to leave a brief explanation of their vote, if the polling software allows for it.)
Anonymous polls are a bit trickier, since random people on the Internet can easily brigade such a poll. And I wouldn’t want to assume that something’s a good idea just because most EAs agree with it; I’d rather focus on the arguments for and against.
“Just focus on the arguments” isn’t a decision-making algorithm, but I think informal processes like “just talk about it and individually do what makes sense” perform better than rigid algorithms in cases like this.
If we want something more formal, I tend to prefer approaches like “delegate the question to someone trustworthy who can spend a bunch of time carefully weighing the arguments” or “subsidize a prediction market to resolve the question” over “just run an opinion poll and do whatever the majority of people-who-see-the-poll vote for, without checking how informed or wise the respondents are”.
The question of a community-wide vote, on any level, about whether there should be such an investigation might at this point be moot. I have personally offered to begin conducting significant parts of such an investigation myself. Since I made that initial comment, I’ve now read several more providing arguments against the need or desirability for such an investigation. Having found them unconvincing, I now intend to privately contact at least several private individuals—both in and around the EA movement, as well as some outside of or who no longer participate in the EA community—to pursue that end. Something like a community-wide vote, or some proxy like even dozens of effective altruists trying to talk me out of it, would be unlikely to convince me not to do so.
People, the downvote button is not a disagree button. That’s not really what it should be used for.
Thanks
Maybe quite some people don’t like random ideas being shared on the Forum?
I disagree, and in this case I don’t think the forum team should have a say in the matter. Each user has their own interpretation of the upvote/downvote button and that’s ok. Personally I don’t use it as “I disagree” but rather as “this comment shouldn’t have been written”, but there’s certainly a correlation. For instance, I both disagree-voted and downvoted your comment (since I dislike the attempt to police this).
Update Apr. 15: I talked to a CEA employee and got some more context on why CEA hasn’t done an SBF investigation and postmortem. In addition to the ‘this might be really difficult and it might not be very useful’ concern, they mentioned that the Charity Commission investigation into EV UK is still ongoing a year and a half later. (Google suggests that statutory inquiries by the Charity Commission take an average of 1.2 years to complete, so the super long wait here is sadly normal.)
Although the Commission has said “there is no indication of wrongdoing by the trustees at this time”, and the risk of anything crazy happening is lower now than it was a year and a half ago, I gather that it’s still at least possible that the Commission could take some drastic action like “we think EV did bad stuff, so we’re going to take over the legal entity that includes the UK components of CEA, 80K, GWWC, GovAI, etc.”, which may make it harder for CEA to usefully hold the steering wheel on an SBF investigation at this stage.
Example scenario: CEA tries to write up some lessons learned from the SBF thing, with an EA audience in mind; EAs tend to have unusually high standards, and a CEA staffer writes a comment that assumes this context, without running the comment by lawyers because it seemed innocent enough; because of those high standards, the Charity Commission misreads the CEA employee as implying a way worse thing happened than is actually the case.
This particular scenario may not be a big risk, but the sum of the risk of all possible scenarios like that (including scenarios that might not currently be on their radar) seems non-negligible to the CEA person I spoke to, even though they don’t think there’s any info out there that should rationally cause the Charity Commission to do anything wild here. And trying to do serious public reflection or soul-searching while also carefully nitpicking every sentence for possible ways the Charity Commission could misinterpret something, doesn’t seem like an optimal set-up for deep, authentic, and productive soul-searching.
The CEA employee said that they thought this is one reason (but not the only reason) EV is unlikely to run a postmortem of this kind.
My initial thoughts on all this: This is very useful info! I had no idea the Charity Commission investigation was still ongoing, and if there are significant worries about that, that does indeed help make CEA and EV’s actions over the last year feel a lot less weird-and-mysterious to me.
I’m not sure I agree with CEA or EV’s choices here, but I no longer feel like there’s a mystery to be explained here; this seems like a place where reasonable people can easily disagree about what the right strategy is. I don’t expect the Charity Commission to in fact take over those organizations, since as far as I know there’s no reason to do that, but I can see how this would make it harder for CEA to do a soul-searching postmortem.
I do suspect that EV and/or CEA may be underestimating the costs of silence here. I could imagine a frog-boiling problem arising here, where it made sense to delay a postmortem for a few months based on a relatively small risk of disaster (and a hope that the Charity Commission investigation in this case might turn out to be brief), but it may not make sense to continue to delay in this situation for years on end. Both options are risky; I suspect the risks of inaction and silence may be getting systematically under-weighted here. (But it’s hard to be confident when I don’t know the specifics of how these decisions are being made.)
I ran the above by Oliver Habryka, who said:
I have some information suggesting that maybe Oliver and/or the CEA employee’s account is wrong, or missing part of the story?? But I’m confused about the details, so I’ll look into things more and post an update here if I learn more.
The pendency of the CC statutory inquiry would explain hesitancy on the part of EVF UK or its projects to conduct or cooperate with an “EA” inquiry. A third-party inquiry is unlikely to be protected by any sort of privilege, and the CC may have means to require or persuade EVF UK to turn over anything it produced in connection with a third-party “EA” inquiry. However, it doesn’t seem that this should be an impediment to proceeding with other parts of an “EA inquiry,” especially to the extent this would be done outside the UK.
However, in the abstract—if any charity’s rationale for not being at least moderately open and transparent with relevant constituencies and the public is “we are afraid the CC will shut us down,” that is a charity most people would run away from fast, and for good reason. If the choice is between having a less-than “soul-searching postmortem” or none at all, I’ll take the former. Also, I strongly suspect everything EVF has said about the whole FTX situation has been vetted by lawyers, so the idea that someone is going to write an “official” postmortem without legal vetting is doubtful. Finally, I worry the can is going to continue being kicked down the road until EVF is far into the process of being dismantled, at which time the rationale may evolve into “we’re disbanding anyway, what’s the point?”
I do think a subtext of the reported discussion above is that the CC is not considered to be a necessarily trustworthy or fair arbiter here. “If we do this investigation then the CC may see things and take them the wrong way” means you don’t trust the CC to take them the right way. Now, I have no idea whether that is justified in this case, but it’s pretty consistent with my impression of government bureaucracies in general.
So it perhaps comes down to whether you previously considered the charity or the CC more trustworthy. In this case I think I trust EVF more.
I trust EV more than the charity commission about many things, but whether EV behaved badly over SBF is definitely not one of them. One judgment here is incredibly liable to distortion through self-interest and ego preservation, and it’s not the charity commission’s. (That’s not a prediction that the charity commission will in fact harshly criticize EV. I wouldn’t be surprised either way on that.)
When I looked at past CC actions, I didn’t get the impression that they were in the habit of blowing things out of proportion. But of course I didn’t have the full facts of each investigation.
One reason I don’t put much stock in the possibility that the CC may not “necessarily [be a] trustworthy or fair arbiter” is that it has to act with reasoning transparency because it is accountable to a public process. Its substantive actions (as opposed to issuing warnings) are reviewable in the UK courts, in proceedings where the charity—a party with the right knowledge and incentives—can call it out on dubious findings. The CC may not fear litigation in the same sense that a private entity might, but an agency’s budget/resources don’t generally go up because it is sued, and agencies tend not to seek to create extra work for themselves for the thrill of it.
Moreover, the rationale of non-disclosure due to CC concerns operates at the margin. “There are particular things we shouldn’t disclose in public because the CC might badly misinterpret those statements” is one thing. “There is nothing else useful we can disclose, because all of those statements pose an unacceptable risk of the CC badly misinterpreting any further detail” is another.
I have already personally decided to begin pursuing inquiries and research myself that would constitute at least some aspects of the sort of investigation in question. Much of what I generally have in mind, and in particular what I’d be most capable of doing myself, would be unrelated to EVF UK. If it’d make things easier, I’m amenable to avoiding probing in ways that intersect with EVF UK until the CC inquiry has ended. (This probably wouldn’t include EVF USA.) Two reasons I will be expediting this project: EVF is in the process of disbanding, which would complicate any part of such an investigation, and another major EA organization is likely in the process of launching an earning-to-give incubator/training organization.
Not to state the obvious but the ‘criticism of EA’ posts didn’t pose a real risk to the power structure. It is uhhhhh quite common for ‘criticism’ to be a lot more encouraged/tolerated when it isn’t threatening.
I mostly agree with this, and upvoted strongly, but I don’t think the scare quotes around “criticism” are warranted. Improving ideas and projects through constructive criticism is not the same thing as speaking truth to power, but it is still good and useful, it’s just a different good and useful thing.
I’m against doing further investigation. I expressed why I think we have already spent too much time on this here.
I also think your comments are falling into the trap of referring to “EA” like it was an entity. Who specifically should do an investigation, and who specifically should they be investigating? (This less monolithic view of EA is also part of why I don’t feel as bothered by the whole thing: so maybe some people in “senior” positions made some bad judgement calls about Sam. They should maybe feel bad. I’m not sure we should feel much collective guilt about that.)
While recognizing the benefits of the anti-"EA should" taboo, I also think it has some substantial downsides and should be invoked only after consideration of the specific circumstances at hand.
One downside is that the taboo can impose significant additional burdens on a would-be poster, discouraging them from posting in the first place. If it takes significant time to write "X should be done," it is far from certain others will agree, and it takes additional significant time to figure out and write "and it should be done by Y," then the taboo requires someone who wants to write the former to invest in the latter before knowing whether the former will get any traction. Allowing the would-be poster to defer certain subquestions (like "who") means that effort can be saved if there's not enough traction on the basic merits.
Another downside is that a would-be poster may have expertise, knowledge, or resources relevant to part of a complex question. If we taboo efforts by those who can only answer some of the issues effectively, we will lose the benefit of their insight.
I don't think that is an appropriate burden to place on someone writing a post or comment calling for an investigation. I think it would block anyone without a fair deal of certain "insider-ish" knowledge from ever making the case for an investigation:
This isn't a do-ocracy project. Doing it properly is not going to be cheap (e.g., hiring an investigative firm), so the ability to get funded is a prerequisite. Expecting a Forum commenter to know who could plausibly get funding is a bit much. To the extent that is a reasonable expectation, we would also expect the reader to know it, so the omission is a minor defect. And if the set of people who could get funded is empty, then a post bemoaning a perceived lack of willingness to invest in a perceived important issue in ecosystem health is still valid.
Even apart from this, whoever was running the investigation would need to secure the cooperation of organizations and individuals one way or another. That could flow through the investigation sponsor's own standing in the community (e.g., ~everyone trusting them to give a fair shake), and/or through funders or other powers putting their heft behind the investigation (e.g., documented refusal to cooperate being likely to have material adverse consequences).
Many good investigations do not have a specific list of people/entities who are the target of investigatory concern at the outset. They have a list of questions, and a good sense of the starting points for inquiry (and figuring out where other useful information lies). If I were trying to gain a better understanding of EA-aligned people/orgs’ interactions with SBF, I think some of the starting points are obvious.
Moreover, a higher level of specificity strikes me as potentially infohazardous for the Forum. Whatever might be said of the costs and benefits of circulating ~rumors on a publicly accessible Forum to guard the community against future misconduct and non-malicious problematic conduct, the cost/benefit assessment feels more doubtful when the focus is on certain forms of past problematic conduct. Even if Rob had solid hunches as to whose actions should be probed more significantly, it's not clear that it would be net-positive for him to name names here. Given that, I am very hesitant to endorse any norm that puts a thumb on the scale by creating an expectation that a poster will publicly release information whose public disclosure may well have a net negative impact.
Thanks, I think this is all right. I think I didn’t write what I meant. I want more specificity, but I do agree with you that it’s wrong to expect full specificity (and that’s what I sounded like I was asking for).
What I want is something more like "CEA should investigate the staff of EVF for whether they knew about X and Y," not "Alice should investigate Bob and Carol for whether they knew about X and Y."
I do think that specificity raises questions, and that this can be a good thing. I agree that it’s not reasonable for someone to work out e.g. exactly where the funding comes from, but I do think it’s reasonable for them to think in enough detail about what they are proposing to realise that a) it will need funding, b) possibly quite a lot of funding, c) this trades off against other uses of the money, so d) what does that mean for whether this is a good idea. Whereas if “EA” is going to do it, then we don’t need to worry about any of those things. I’m sure someone can just do it, right?
I am at least one "someone" who not only can, but has already decided to at least begin doing it. To that end, for myself or perhaps even others, there are already some individuals I have in mind to contact who may be willing to provide at least a modicum of funding, or who would know others who might. In fact, I have already begun that process.
There wouldn't be a tradeoff with other uses of at least some of that money: I'm confident some of the individuals in question would not otherwise donate or direct that money toward organizations affiliated with, or charities largely supported by, the EA community, since some of the prospective funders are not effective altruists. While I agree it may not be a good idea for EA as a whole to go about this in some quasi-official way, I haven't yet seen any particularly strong argument against the sort of "someone" you had in mind doing so.
As I've already mentioned in other comments, I have decided to begin pursuing a greater degree of inquiry, with haste. I've publicly notified others that pushback offered solely to reinforce or enforce such a taboo is likely only to motivate me to proceed with more gusto.
I have some knowledge and access to resources that would be relevant to solving at least a minor but still significant part of that complex question. I refer to the details in question in my comment that I linked to above.
To the extent I can lay the groundwork for a more thorough investigation to follow, covering what is beyond the capacity of myself and prospective collaborators, such an investigation will now at least start snowballing as a do-ocracy project. I know multiple people who could plausibly begin funding this, who in turn may know several others willing to do so. Some of the funders in question may be willing to fund me specifically, or a team I could (co-)lead, to begin the investigation in at least a semi-formal manner.
Those would be some of the quieter critics in the background of EA, or others who are no longer effective altruists but have long wanted an investigation like the one that has now begun to proceed. They might trust me in particular because of my reputation in the EA community over the years as one effective altruist who is more irreverent toward the pecking orders and hierarchies, both formal and informal, of any organized network or section of the EA movement. At any rate, at least to some extent, a lack of willingness from within EA to fund the first steps of an inquiry is no longer a relevant concern. I don't recall whether we've interacted much before, but as you may soon learn, I am someone in the orbit of effective altruism who sometimes has an uncanny knack for meeting unusual or unreasonable expectations.
Having spent several months thinking about what I can contribute to such a nascent investigation, I already have in mind a list of several people, as well as some questions, starting points for inquiry, and an approach for identifying further potentially useful information. I intend to begin drafting a document to organize the process I have in mind, and I may be willing to share it privately in confidence with some individuals. You would be included, if you're interested.
I get a ‘comment not found’ response to your link.
Overall I feel relatively supportive of more investigation and (especially) postmortem work. I also don’t fully understand why more wasn’t shared from the EV investigation[1].
However, I think it’s all a bit more fraught and less obvious than you imply. The main reasons are:
Professional external investigations are expensive
Especially if they’re meaningfully fact-finding and not just interviewing a few people, I think this could easily run into hundreds of thousands of dollars
Who is to pay for this? If a charity is doing it, I think it’s important that their donors are on board with that use of funds
I kind of think someone should fundraise for this specifically; I’m genuinely unsure about donor appetite to support it
I’m somewhat worried about the “re-victimizing” effect you allude to of just sharing everything transparently
Worry that it would cause in-my-view-unjust headaches for people is perhaps the main inhibitory force on my just publicly sharing the pieces of what I know (there’s also sometimes feeling like something isn’t mine to share)
If there were an investigation which was going to make all its factual findings public, I’d expect this to be an inhibitory force on people choosing to share information with them
The possible mistakes we’re talking about are all nuanced
It’s going to be a judgement call what was or wasn’t a mistake
(This is compatible with mistakes being large)
So if we’re hoping for an investigation which doesn’t make all its factual findings public, then we’re trusting in the judgement of the investigators to make a fair assessment
This makes me not want independent lawyers (who may most naturally be drawn to assess things from a perspective of “was this reasonably minimizing of legal exposure”)
But then who?
If this was just a question about conduct at one org, the natural answer might be “some sensible but uninvolved EA”, but if the whole of EA might somehow be called into question, what’s even appropriate?
At the end of this I would be most interested in multiple people who seemed very sensible giving their own post-mortems. I think that this would ideally include a mix of folks in EA and outsiders. I think some fact-finding should inform these people’s takes, without all of the facts themselves necessarily being made public (in order to facilitate the facts actually being shared, as well as to mitigate possible re-victimizing). I’m not certain how much it’s good for this to be via some centralized fact-finding exercise which is then privately shared, vs giving them the opportunity to interview people directly (as you get some more granular data that way). Perhaps ideally a mix. (But that’s making it more time-expensive as an exercise.)
I think there are people close enough to what happened that they can meaningfully give post-mortems without a fact-finding investigation. And I am interested in their views and supportive of them sharing those. But they’re also the people whose judgement is most likely to be distorted by being close to things. So even among EAs I’d prefer to have very sensible people who were further from what happened.
(That’s all where-I-stand-right-now. I can certainly imagine being moved on this.)
I guess that there would have been downsides for EV in doing so, but think these might well have been outweighed by the benefits to the community. However, I want to stress that I think the boards are sensible people making sometimes-difficult trade-offs; I don’t know for sure what I’d have thought with full context; I have some deference to them.
I disagree with this framing.
Something that I believe I got wrong pre-FTX was base rates/priors: I had assumed that if a company was making billions of dollars, had received investment from top-tier firms, complied with a bunch of regulations, etc. then the chance of serious misconduct was fairly low.
I have now spent a fair amount of time documenting that this is not true, in data sets of YCombinator companies and major philanthropists.
It’s hard to measure this, but at least anecdotally some other people (including in “EA leadership” positions) tell me that they were updated by this work and think that they similarly had incorrect priors.
I think what you are calling an “investigation” is fine/good, but it is not the only way to “get the facts straight” or “see if there’s anything we should change in response”.
Fair! I definitely don’t want to imply that there’s been zero reflection or inquiry in the wake of FTX. I just think “what actually happened within EA networks, and could we have done better with different processes or norms?” is a really large and central piece of the puzzle.
I’ve made a first attempt at this here: To what extent & how did EA indirectly contribute to financial crime—and what can be done now? One attempt at a review
I’d highlight that I found taking quite a structured approach helpful: breaking things down chronologically, and trying to answer specific questions like what’s the mechanism, how much did this contribute, and what’s a concrete recommendation?
To be fair, this could trigger lawsuits. I hope someone is reflecting on FTX, but I wouldn’t expect anyone to be keen on discussing their own involvement with FTX publicly and in great detail.
I think that’s right, although I would distinguish between corporate and personal exposure here to some extent:
I’m most hesitant to criticize people for not personally taking actions that could increase their personal legal exposure.
I’m most willing to criticize people and organizations for not taking actions that could increase organizational legal exposure. Non-profit organizations are supposed to exist in the public interest, while individuals do not carry any above-average obligations in that way. Organizations are not moral persons whose welfare is important to me. Moreover, organizations are better able to manage risk than individuals. For purposes of the norm that s/he who benefits from an action should also generally expect to bear the attendant costs, I am more willing to ascribe the benefits of action to an organization than to an individual doing their job.[1]
Organizational decisions to remain silent to avoid risk to individuals pose thornier questions for me. I'd have to think more about that intuition after my lunch break, but some of it relates to reasonable expectations of privacy. For example, disclosure of the contents of an organizational e-mail account (where the employee had notice that it belonged to the employer without a reasonable expectation of privacy) strikes me as less problematic than asking people to divulge their personal records, information about off-work activities, and the like.
Personal liability regimes are often pernicious to people doing their jobs in a socially desirable and optimal way. The reason is that the benefit of doing the job properly / taking risks is socialized, while the costs / risks are privatized. Thus, the actor fearful of personal liability will undervalue the social benefits of proper performance / risk acceptance.
Who would be able to sue? Would it really be possible for FTX customers/investors to sue someone for not making public "I heard Sam lies a lot, once misplaced money at Alameda early on and didn't seem too concerned, and reneged on a verbal agreement to share ownership"? Just because someone worked at the Future Fund? Or even someone who worked at EV?
I’d note that Nick Beckstead was in active litigation with the Alameda bankruptcy estate until that was dismissed last month (Docket No. 93). I think it would be very reasonable for anyone who worked at FTXFF to be concerned about their personal legal exposure here. (I am not opining as to whether exposure exists, only that I would find it extremely hard to fault anyone who worked at FTXFF for believing that they were at risk. After all, Nick already got sued!)
It’s harder to assess exposure for other groups of people. To your question, there may be a difference between mere silence in the face of knowledge/suspicion and somewhat supportive statements/actions in the face of the same knowledge. As a reference point, there was that suit against Tom Brady et al. (haven’t seen a recent status update). Obviously, the promotional activity is more explicit there than anything I expect an EA-associated person did. However, the theory against Brady et al. may rely more on generic failure to investigate, while one could perhaps dig for a stronger case against certain EA-related persons on actual knowledge of suspicious facts. I can only encourage people with concerns to consult their own personal legal counsel.
But at the general community level, I would be hesitant to fault various other individuals for being concerned about potential personal legal exposure. Remember, the pain of legal involvement isn’t limited to actual liability. Merely getting sued is itself painful; discovery is even more painful. Public statements could give someone motivation to try and/or ammo to get past a motion to dismiss for failure to state a viable claim.
As an aside, this isn't really action-relevant, but insofar as being involved with the legal system is a massive punishment even when the legal system itself is very likely going to eventually conclude you've done nothing legally wrong, that seems bad? Here it also seems to be having a knock-on effect of making it harder to find out what actually happened, rather than being painful but producing useful information.
The suit against Brady also sounds like a complete waste of society’s time and money to me.
The legal system doesn’t know ex ante whether you’ve done anything wrong, though. It’s really hard to set up a system that balances out all the different ways a legal system can be imbalanced. If you don’t give plaintiffs enough leeway to discover evidence for their claims, then tortfeasors will be insufficiently deterred from committing torts. If you go too far (the current U.S. system), you incentivize lawfare, harassment, and legalized extortion of some defendants. Imposing litigation costs / attorney fees on the losers often harms the little guy due to lower ability to shoulder risk & the marginal utility of money. Having parties bear their own costs / fees (generally, the U.S. system) encourages tactics that run up the bill for the other guy. And defendants are more vulnerable to that than plaintiffs as a general rule.
Maybe. Maybe people would talk but for litigation exposure. Or maybe people are using litigation exposure as a convenient excuse to cover the fact that they don’t want to (and wouldn’t) talk anyway. I will generally take individuals at face value given the difficulty of discerning between the two, though.
Would it be possible to set up a fund that pays people for the damages they incurred for a lawsuit where they end up being innocent? That way the EA community could make it less risky for those who haven’t spoken up, and also signal how valuable their information is to them.
Yes, although it is likely cheaper (in expected costs) and otherwise superior to make a ~unconditional offer to cover at least the legal fees for would-be speakers. The reason is that an externally legible, credible guarantee of legal-expense coverage ordinarily acts as a strong deterrent to bringing a weak lawsuit in the first place. As implied by my prior comment, one of the main tools in the plaintiff's arsenal is to bully a defendant in a weak case into settling by threatening them with liability for massive legal bills. If you take that tactic away by making the defendant ~insensitive to the size of their legal bills, you should stop a lot of suits from ever being brought. Rather, one would expect would-be plaintiffs to sue only if the expected value of their suit (e.g., the odds of winning and collecting on a judgment multiplied by judgment size) exceeds the expected costs of litigating to trial (or to a point at which the defendant decides to settle without factoring in legal bills). If you think the odds of plaintiff success at trial are low and/or that the would-be individual defendant doesn't have a ton of assets to collect from, then the most likely number of lawsuits is zero.[1]
That does tip the balance of abstract fairness toward defendants and away from plaintiffs. But that can be appropriate in some cases. As noted in an earlier comment of mine, personal-liability regimes underproduce public goods because the public goods are enjoyed by the public while the risk is borne by the individual. Litigation immunities (especially “qualified immunity” in the US) can be a controversial topic, but they reflect that kind of rationale. In some cases, society would rather limit or foreclose someone’s ability to collect damages for torts they suffered than squelch the willingness to provide public goods.
One might not want to extend this offer to those for whom you have a higher degree of suspicion that they did something they really should be sued for, or to those who you think face a high probability of being sued even without speaking up.
This is why you wouldn’t want to bind yourself to indemnify defendants who lost for their judgments. Doing so would create a much larger target on their backs, as the upside from litigation would no longer be limited to what the plaintiff could collect from the defendant. In the worst-case scenario in which a defendant loses unjustly, there are ways for third parties to protect the defendant without further enriching the plaintiff (e.g., making gifts after bankruptcy discharge, well-designed trusts).
How big is the legal risk for a high profile EA person who, say:
knew SBF was an asshole, incautious, and lived in a luxury villa, but had no knowledge of any specific fraud
publicly promoted him as a moral and frugal person
?
Is this automatically tort-worthy, but hard to prove? Laughed out of court no matter what? Does speaking about it publicly extend the court case, so it’s more expensive even if the promoter will ultimately win?
If I am betting $5 of play money on Manifold (meaning off-the-cuff gut check with no research) I would generally bet low as long as the person did not ~specifically promote FTX. If there was specific promotion of FTX, you could see claims like these which would be beyond my willingness to speculate $5 of play money at this time.
Here are some off-the-cuff questions I might want to ask (again, no research) if I were thinking about a specific case:
Could anyone potentially show that they actually and reasonably relied on the statements that were made to transact business with FTX?
How relevant were the statements to a reasonable person who might be considering transacting business with FTX? For example, one might think “Joe told me SBF was frugal despite knowing that was a quarter-truth at best, I wouldn’t have opened an FTX account had he not said that, and it was reasonable for me to rely on SBF’s frugality to decide whether to open an account” sounds like a stretch. On the other hand, reliance on “Jane had very good reason to believe SBF had done shady and illegal stuff, yet forcefully presented him as a trustworthy paragon of moral virtue on her podcast” starts feeling a little more realistic.
How much of the speaker’s content (not just the allegedly false/misleading statements about SBF) was about FTX? If it talked a lot about the advantages of doing business with FTX, etc., then the nexus between the speech and reliance seems stronger. If the context is SBF as a role model for EA EtGers, that would seem a real stretch.
Was there a direct or indirect financial benefit to the speaker or a related entity? If SBF gave the speaker (or more likely, their organization) tons of money, this starts looking more like a ~paid endorsement. And we are generally more willing to put duties on ~paid endorsers than on (say) on you and my comments on this Forum.
Also questions 3 and 4 get into potential causes of action for assisting with the sales of unregistered securities (cf. page 36 here). It’s unclear to me how an EA leader speaking out would increase their exposure to such a lawsuit.
There’s also the more realist answer to your question, which goes like this: the greater your income and assets, the greater your risk. My parents (on Social Security which can’t be garnished, only significant asset is the marital home which is difficult for creditors to access) probably wouldn’t need to worry. Unless you’re doing it for ideological reasons, why sue if you can’t collect more than what litigation costs?
(understanding you are a guy betting $5 on manifold)
re: #3. Does this get blurred if the company made an explicit marketing push about what a great guy their CEO was? I imagine that still wouldn’t affect statements on him as a role model[1] , but might matter if they said many positive statements about him on a platform aimed at the general public.
legally
Not a crypto-focused platform (e.g., Joe’s Crypto Podcast?) No particular reason to know or believe that the company (had / was going to) use something Person said as part of their marketing campaign? If negative to both, it doesn’t affect my $5 Manifold bet.
thanks, I appreciate all this info.
I guess I kinda want to say fiat justitia ruat caelum here 🤷
You folks impress me! But seriously, that’s a big ask.
I’m a pretty big fan of Nate’s public write-up on his relationship to Sam and FTX. Though, sure, this is going to be scarier for people who were way more involved and who did stuff that twitter mobs can more easily get mad about.
This is part of why the main thing I’m asking for is a professional investigation, not a tell-all blog post by every person involved in this mess (though the latter are great too). An investigation can discover useful facts and share them privately, and its public write-up can accurately convey the broad strokes of what happened, and a large number of the details, while taking basic steps to protect the innocent.
I want to flag for Forum readers that I am aware of this post and the associated issues about FTX, EV/CEA, and EA. I have also reached out to Becca directly.
I started in my new role as CEA’s CEO about six weeks ago, and as of the start of this week I’m taking a pre-planned six-week break after a year sprinting in my role as EV US’s CEO[1]. These unusual circumstances mean our plans and timelines are a work in progress (although CEA’s work continues and I continue to be involved in a reduced capacity).
Serious engagement with and communication about questions and concerns related to these issues is (and was already) something I want to prioritize, but I want to wait to publicly discuss my thoughts on these issues until I have the capacity to do so thoroughly and thoughtfully, rather than attempt to respond on the fly. I appreciate people may want more specific details, but I felt that I’d at least respond to let people know I’ve acknowledged the concerns rather than not responding at all in the short-term.
It’s unusual to take significant time off like this immediately after starting a new role, but this is functionally a substitute for me not taking an extended break between roles. For some banal logistical reasons, it made more sense for me to start and then take time off.
You did speak publicly about them, in a large newspaper nonetheless: https://www.washingtonpost.com/opinions/2024/03/28/sam-bankman-fried-effective-truism-fraud
To be clear, I think it's still fine to take some time. But it does seem like you made claims that the EA community has engaged in successful investigation and reflection here, so saying that you want to hold off on engaging until you can do so "thoroughly and thoughtfully" rings a bit hollow. It sounds a bit like avoiding critical conversation while actively trying to spread beliefs this post calls into question. (Again, I recognize you have a hard job and I don't want to be too nitpicky about this, but the confluence of releasing an article in a major newspaper and saying you want to hold off on publicly discussing these issues feels off.)
My guess is the timing of Becca’s post is related to your Washington Post article, though that’s really just a random guess.
You don’t deserve negative karma for this comment (was at −1 when I corrected that), but I think it’s fair to recognize that the timing of the op-ed was indirectly dictated by the date Judge Kaplan set for sentencing. Publishing it probably wouldn’t make sense at any other time, so Zach may have been stuck between being rushed into publishing it too early or not responding to the public-interest event at all. Also, it seems unlikely he booked six weeks off right after SBF’s sentencing for that reason.
I’m not opining that I would have published all of the language in the op-ed if I didn’t think I had done enough work to be able to communicate “thoroughly and thoughtfully” to the EA community. But I do feel some sympathy for the position Zach found himself in with respect to a hard external deadline.
Totally, to be clear, I think it's totally fine for Zach to take time off, and I wasn't intending to comment on that at all. I was just responding to what I perceived to be a separate thread, about wanting to hold off on engaging until he had formed considered opinions.
Yeah, agree, that makes sense. I do think it was the wrong call, but I can understand the perceived urgency.
Epistemic status: not fleshed out
(This comment is not specifically directed to Rebecca’s situation, although it does allude to her situation in one point as an example.)
I observe that the powers-that-be could make it less costly for knowledgeable people to come forward and speak out. For example, some people may have legal obligations, such as the duties a board member owes a corporation (extending in some cases to former board members).[1] Organizations may be able to waive those duties by granting consent. Likewise, people may have concerns[2] about libel-law exposure (especially to the extent they have exposure to the world’s libel-tourism capital, the UK). Individuals and organizations can mitigate these concerns by, for instance, agreeing not to sue any community member for libel or any similar tort for FTX/SBF-related speech. (One could imagine an exception for suits brought in the United States in which the individual or organization concedes their status as a public figure, and does not present any other claims that would allow a finding of liability without proof of “actual malice.”)
Other types of costs are harder to legibly and credibly mitigate, such as fear of discrimination by grantmakers. That harks back to prior discussion about providing financial, legal, and other support to whistleblowers, which might require a commitment of fairly serious money (and a credible, independent decisionmaker) depending on the circumstances.
In any event, I would view positively organizations and individuals who made specific, public, credible, and legible attempts to reduce the costs to others of exposing their potential mistakes or wrongdoing. Among other things, it would be at least a mildly costly signal for someone who had badly erred, and thus would reduce my estimate of the probability that the person or organization had actually done so.
Moreover, in EA, the lines between what one knows in one’s capacity as a board member and what ones knows in their capacity as a private person are probably blurrier than for someone on (e.g.) the board of General Electric.
In my general view, laypersons often overstate these concerns, at least if they only have practical legal exposure to judgments that comply with US law. But the concerns may still silence important speech.
Also, I feel mean for pressing the point against someone who is clearly finding this stressful and is no more responsible for it than anyone else in the know, but I really want someone to properly explain what warning signs the leadership saw, who saw them, and what was said internally in response to them. I don't even know how much that will help with anything, to be honest, so much as I just want to know. But at least in theory, anyone who behaved really badly should be removed from positions of power. (And I do mean just that: positions where they run big orgs. I'm not saying they should be shunned or can't be allowed to contribute to the community intellectually any more.) If Rebecca won't do this, someone else should. But also, depending on how bad the behavior of leaders actually was, in NOT saying more, people with inside knowledge are probably either a) helping people escape responsibility for really bad behavior, or b) making what were reasonably sympathetic mistakes that many people might have made in the same position sound much worse than they were through vagueness, leading to unfair reputational damage. (EDIT: I should say that sadly, I think a) is much the more likely possibility.) Not to mention that right now it is not clear which leaders are the responsible ones, which is unfair on anyone who actually didn't do anything wrong. That could include not just people with no knowledge of the warning signs, but people who knew about them, complained internally, were ignored, and then didn't take things public for defensible reasons.
ICYMI: I wrote this in response to a previous “EA leaders knew stuff” story. [Although I’m not sure if I’m one of the “leaders” Becca is referring to, or if the signs I mentioned are what she’s concerned about.]
Am I correct in interpreting your comment as something like “Rebecca says it’s costly to say more which might imply she is sitting on not yet disclosed information that might put powerful EAs in a bad light”? I did not really pick up on this when reading the OP but your comment got me worried that maybe there is some information that should be made public?
’Am I correct in interpreting your comment as something like “Rebecca says it’s costly to say more which might imply she is sitting on not yet disclosed information that might put powerful EAs in a bad light”?’
Yes, that’s what I meant. Maybe not “not already disclosed” though. It might just be confirmation that the portrait painted here is indeed fair and accurate: https://time.com/6262810/sam-bankman-fried-effective-altruism-alameda-ftx/ EDIT: I don’t doubt that the article is broadly literally accurate, but there’s always a big gap between what claims a piece of journalism like this is making if you take it absolutely 100% literally line-by-line and the general impression you’d get about what happened if you fill in the blanks from those facts in the way the piece encourages you to. It’s the latter whose accuracy I think is currently unclear, though after Rebecca’s post I am heavily leaning towards the view that the broad impression painted by the article is indeed accurate.
edit: As always, disagree/downvoters, it would be good to hear why you disagree, as I’m not sure anything I’ve written below merits a disagree vote, and especially not a downvote.
Thanks for sharing your thoughts Rebecca.
I do find myself wishing that some of these discussions from the core/leadership of EA[1] were less vague. I noticed this with Habryka’s reaction to the recent EA column in the Washington Post, where he mentions ‘people he’s talked to at CEA’. It would be good to know who those people at CEA are.
I accept some people are told things informally, and in confidence etc., but it would seem to be useful to have as much as is possible/reasonable in the public domain, especially since these discussions/decisions seem to have such a large impact on the rest of the community in terms of reputational impact, organisational structure and hiring, grantmaking priorities and decisions etc.
For example, I again respect you said that your full thoughts would be ‘highly costly’ to share, but it’d be enlightening to know which members of the EV board you disagreed with so much that you felt you had to resign. If you can’t share that, knowing why you can’t share that. Or if not that, knowing what the concrete issues were. If you allege that there were “extensive and significant mistakes made which have not been addressed” and that these mistakes “make me very concerned about the amount of harm EA might do in the future” then I really want to know what these mistakes were concretely and who made/is making them. I think the vagueness is another sign that EA’s healing process post-FTX still has a way to go.[2]
Above all though, I hope you’re doing well, and would be happy to have an individual conversation if you think that would be useful, or if you aren’t willing to share things on the Forum.
An infamously slippery term; here I’m referring to EV, CEA, OpenPhil, the Meta Coordination Forum attendees, etc.
Not to imply the vagueness is a fault of yours. It’s probably attributable to people’s concerns of retaliation, legal constraints/NDAs, unequal power structures etc.
At the time of Rebecca’s resignation, how many members did the EVF USA board have? As of January 2023, the board was [Beckstead, Kagan, and Ross] with Beckstead recused from FTX-related matters for obvious reasons. In April, her resignation and the addition of Eli Rose & Zach Robinson were concurrently announced (although it is not clear if she decided to resign prior to these appointments to the board).
My sense is the EV UK board mattered a good amount as well during this period, and Claire Zabel was also on the board during the relevant period (I do not know which board members Becca was thinking about in the above post, if any).
Rebecca’s comments seem consistent with Beckstead being part of her concern, though.
Also, I don’t know if Spencer Greenberg’s podcast with Will is recorded yet, but if it hasn’t been I think he absolutely should ask Will what he thinks the phrase about “extensive and significant mistakes” here actually refers to. EDIT: Having listened (vaguely, while working) to most of the Sam Harris interview with Will, as far as I can tell Harris entirely failed to ask anything about this, which is a huge omission. Another question Spencer could ask Will is: did you specify this topic was off-limits to Harris?
I felt the Sam Harris interview was disappointingly soft and superficial. To be fair to MacAskill, Harris did an unusually bad job of pushing back and taking a harder line, and so MacAskill wasn’t forced to get deeper into it.
And basically nothing about how to avoid a similar situation happening again? Except for a few lines about decentralisation. Quite uninspiring.
Yes, Harris should have asked Will about this: https://time.com/6262810/sam-bankman-fried-effective-altruism-alameda-ftx/
I have not been very closely connected to the EA community the last couple of years, but based on communications, I was expecting:
an independent and broad investigation
reflections by key players that “approved” and collaborated with SBF on EA endeavors, such as Will MacAskill, Nick Beckstead, and 80K.
For example, Will posted in his Quick Takes 9 months ago:
It now turns out that this has changed into podcasts, which is better than nothing, but doesn’t give room to conversation or accountability.
I think 80K has been most open in reflecting on their mistakes and taking responsibility.
I was also implicitly expecting:
a broader conversation in the community (on the Forum and/or at conferences) where everyone could ask questions and some kind of plan of improvement would be made
It is disappointing that so little has happened. It feels kind of like a relationship where a bad thing happened, the immediate fallout was addressed, but the issue was never quite aired out. I think it would be very healthy for the community to take these steps and reflect on & learn from the SBF affair as well as the mismanaged aftermath, and then hopefully we can all move forward.
Formatting error; this is something Siebe is saying, not part of the Will quotation.
Thanks Rob! Fixed it.
In case it helps, here’s some data from Meta Coordination Forum attendees on how much they think the FTX collapse should influence their work-related actions and how much it has influenced their work-related actions:
On average, attendees thought the average MCF attendee should moderately change their work-related actions because of the FTX collapse (Mean of 4.0 where 1 = no change and 7 = very significant change; n = 39 and SD = 1.5)
On average, attendees reported that the FTX collapse had moderately influenced their work-related actions (Mean of 4.2 where 1 = no change and 7 = very significant change; n = 39 and SD = 1.7)
My interpretation of this is that MCF attendees have changed their professional behavior a reasonable amount (according to MCF attendees), although maybe this doesn’t address broader questions of reform (eg., ecosystem-wide work that requires substantial coordination).
And here’s a summary of responses to the question “what lessons do we need to learn from the past year”, asked directly after the above question:
Improve governance and organizational structures (mentioned by 7 respondents):
Shore up governance, diversify funding sources, build more robust whistleblower systems, and have more decentralized systems in order to be less reliant on key organizations/people.
Build crisis response capabilities (mentioned by 6 respondents):
Create crisis response teams, do crisis scenario planning, have playbooks for crisis communication, and empower leaders to coordinate crisis response.
Improve vetting and oversight of leaders (mentioned by 5 respondents):
Better vet risks from funders/leaders, have lower tolerance for bad behavior, and remove people responsible for the crisis from leadership roles.
Diversify and bolster communication capacities (mentioned by 5 respondents):
Invest more in communications for crises, improve early warning/information sharing, and platform diverse voices as EA representatives.
Increase skepticism and diligence about potential harms (mentioned by 4 respondents):
Adopt lower default trust, consult experts sooner, and avoid groupthink and overconfidence in leaders.
Learn about human factors in crises (mentioned by 3 respondents):
Recognize the effect of stress on behavior, and be aware of problems with unilateral action and the tendency not to solve collective action problems.
Adopt more resilient mindsets and principles (mentioned by 3 respondents):
Value integrity and humility, promote diverse virtues rather than specific people, and update strongly against naive consequentialism.
I thought that this might be relevant when discussing how much or little has been done post-FTX.
My personal guess is that public discussions on the Forum under-represent changes to org policies, institutional norms, and fuzzy updates about who to trust how much.
(I work at CEA, so I could be very biased.)
I think in any world, including ones where EA leadership is dropping the ball or is likely to cause more future harm like FTX, it would be very surprising if they individually had not updated substantially.
As an extreme illustrative example, really just intended to get the intuition across, imagine that some substantial fraction of EA leaders are involved in large scale fraud and continue to plan to do so (which to be clear, I don’t have any evidence of), then of course the individuals would update a lot on FTX, but probably on the dimensions of “here are the ways Sam got caught, here is what I really need to avoid doing to not get caught myself”.
It would be very surprising if a crisis like FTX would not cause at least moderately high scores on a question like the one you chart above. The key thing that I would want to see is evidence that the leadership has updated in a direction that will likely prevent future harm, and does not push people further into deceptive relationships with the world.
The concrete list of changes below helps, though as far as I can tell practically none of them have actually been implemented (and the concrete numbers you cite for people who mention them seems quite low, given that 50+ people were at the coordination forum).
Briefly going through them:
I don’t think much of any funding diversification has occurred (though I do think achieving that is hard). There are no whistleblower systems in place at any major EA orgs as far as I know, and my sense is we are more reliant on a smaller number of people in leadership than we were before (as more people decided to step back due to the conflict and stress that leadership roles have entailed over the past months).
I don’t think any such crisis response teams or crisis scenario planning exist, at least to my knowledge. I don’t know what people mean by “crisis communication”, though IMO it’s clear that the issue with FTX was not one of EA comms. If people mean “do investigations into bad things that have happened and communicate the results in a credible and verifiable manner”, then it’s clear nothing of that sort has occurred for FTX, and it seems like we also rolled extremely low on crisis communication in the OpenAI board crisis.
I don’t think any such removals have happened, and my sense is tolerance of bad behavior of the type that seems to me most responsible for FTX has gone up (in-particular heavy optimization for optics and large tolerance for divergences between public narratives and what is actually going on behind the scenes).
I don’t think there are any initiatives for that kind of early information sharing. My sense is the rumor mill has gotten less functional instead of more, as the environment in which people act has become more adversarial, though it’s not super clear. But it seems like there are no serious efforts in this space.
I think this has probably happened implicitly, which I do think is good.
This one is kind of vague. I don’t know of anything we’ve done that helps here, and I think the OpenAI board situation is at least one point of evidence that people in EA leadership still lack on this dimension.
My sense is that integrity (trying to make sure that your de-facto actions and professed virtues line up, that you are generally open and honest, and that you are willing to stand up for your beliefs) has overall gotten a lot worse, as people have re-emphasized the importance of good PR and optics in the wake of FTX.
Naive consequentialist plans also seem to have increased since FTX, mostly as a result of shorter AI timelines and much more involvement of EA in the policy space.
Overall, I don’t think the coordination forum survey is much evidence about good things happening here, and the things that people did want to see have not seen much movement since the coordination forum.
I’ve heard this claim repeatedly, but it’s not true that EA orgs have no whistleblower systems.
I looked into this as part of this project on reforms at EA organizations: Resource on whistleblowing and other ways of escalating concerns
Many organizations in EA have whistleblower policies, some of which are public in their bylaws (for example, GiveWell and ACE publish their whistleblower policies among other policies). EV US and EV UK have whistleblower policies that apply to all the projects under their umbrella (CEA, 80,000 Hours, etc.) This is just a normal thing for nonprofits; the IRS asks whether you have one even though they don’t strictly require it, and you can look up on a nonprofit’s 990 whether they have such a policy.
Additionally, UK law, state law in many US states, and lots of other countries provide some legal protections for whistleblowers. Legal protection varies by state in the US, but is relatively strong in California.
Neither government protections nor organizational policies cover all the scenarios where someone might reasonably want protection from negative effects of bringing a problem to light. But that seems to be the case in all industries, including in the nonprofit field in general, not something unusual about EA.
I’m not aware of any EA organizations that provide financial rewards for whistleblowers, which seem like they’d be very tricky to administer without creating incentives you don’t want. The main example of financial rewards that I’m aware of is that the US government provides large financial rewards to whistleblowers whose evidence leads to the conviction of some fraud cases.
I think that is correct as far as it goes, but I suspect that the list of things you generally won’t get protection from (from your linked post) is significantly more painful in practice in EA than in most industries.
For example, although individuals dependent on small grants are probably particularly vulnerable to retaliation in ~all industries, that’s practically a much bigger hole in EA than elsewhere. The general unavailability of protection for disclosures about entities you don’t work for is more stifling in fields with a patchwork of mostly small-to-midsize orgs than in (say) the aerospace industry. Funding centralization could make retaliation easier to pull off.
So while the scope of coverage might be similar on paper in EA, it seems reasonably possible that the extent of protection as applied is unusually weak in EA.
Agree, although those incentive problems could potentially be mitigated by limiting compensation to losses (e.g., loss of job, grant opportunity, an estimate of lost reputation) incurred due to good-faith whistleblowing activity that met specified criteria.
My understanding is that UK law and state law whistleblower protections are extremely weak and only cover knowledge of literal and usually substantial crimes (including in California). I don’t think any legally-mandated whistleblower protections make much of a difference for the kind of thing that EAs are likely to encounter.
I checked the state of the law in the FTX case, and unless someone knew specifically of clear fraud going on, they would have not been protected, which seems like it makes them mostly useless for things we care about. They also wouldn’t cover e.g. capabilities companies being reckless or violating commitments they made, unless they break some clear law, and even then protections are pretty limited. So I can’t really think of any case, except the most extreme, in which at least the US state protections come into play.
I was not aware of any CEA or 80k whistleblower systems. If they have some, that seems good! Is there any place that has more details on them? (you also didn’t mention them in the article you linked, which I had read recently, so I wasn’t aware of them)
Also, for the record, organizational whistleblower protections seem not that important to me. I e.g. care more about having norms against libel suits and other litigious behavior, though the norms for that seem mostly gone, so I expect substantially less whistleblowing of that type in the future. I mostly covered them because I was comprehensively covering the list of things people submitted to the Coordination Forum.
An alternative take on this (I haven’t researched this topic myself): https://forum.effectivealtruism.org/posts/LttenWwmRn8LHoDgL/josh-jacobson-s-quick-takes?commentId=ZA2N2LNqQteD5dE4g
I’d like to single out this part of your comment for extra discussion. On the Sam Harris podcast, Will MacAskill named leadership turnover as his main example of post-FTX systemic change; I’d love to know why you and Will seem to be saying opposite things here.
I’d also love to hear from more people whether they agree or disagree with Oliver on these two points:
Was “heavy optimization for optics and large tolerance for divergences between public narratives and what is actually going on behind the scenes” one of the EA behaviors that was most responsible for FTX?
Has this behavior increased in EA post-FTX?
So, I think it’s clear that a lot of leadership turnover has happened. However, my sense is that the kind of leadership turnover that has occurred is anti-correlated with what I would consider good. Most importantly, it seems to me that the people in EA leadership whom I felt were often the most thoughtful about these issues took a step back from EA, often because EA didn’t live up to their ethical standards, or because they burned out trying to effect change and this recent period has been very stressful (or burned out for other reasons, unrelated to trying to effect change).
Below I’ll make a concrete list of leadership transitions I know have occurred and judge specific individuals, which I want to be clear on, are my personal judgements and I expect lots of people will disagree with me here:
Max Dalton left CEA. My sense is despite my many disagreements with him, he still seemed to me the best CEO that CEA has had historically, and he seemed to have a genuine strong interest in acting in high-integrity ways. My understanding is that the FTX stuff burned him out (as well as some of the Owen stuff, though the FTX stuff seemed more important).
He was replaced by Zach, who seems to think that this WaPo piece is a good way to start tackling FTX-related issues (more of my thoughts on that here). Also, in contrast to leadership claims that funding and ideological diversity are important, he is an ex-Open Philanthropy employee with pretty strong ties to the organization.
My sense is that most people in EA leadership would agree with me that Max stepping down and being replaced by Zach is a bad sign for post-FTX EA reform (but also, my sense is many would think that Zach will do better on other dimensions that others consider more important).
Becca Kagan left the EV board. Given that she did so explicitly because of concerns that people were not taking FTX seriously enough, this seems like obviously a movement in a bad direction.
Will MacAskill and Nick Beckstead left the EV board. I do think these are reasonable moves given their historical affiliation with FTX, though my sense is this was mostly overdetermined by the legal constraints, basic COI principles making it very difficult for them to act as board members, and the bad optics of keeping them on the board. But this one does seem real.
Claire Zabel left as head of Open Phil’s capacity-building team. Claire seemed to me to also be among the people at Open Phil with the strongest interest in integrity. I have strong disagreements with the actions her team has taken since FTX, but I have trouble seeing this as a positive development.
Holden stepped back as CEO of Open Philanthropy, replaced by Alexander Berger. This also seems to me like a mostly negative development on the dimension of post-FTX reform. I have disagreements with Holden here, but my sense is he has thought much more about honesty and integrity than Alexander has, and Alexander’s takes on Wytham don’t fill me with that much hope.
Owen was relieved of a lot of his duties and banned from a lot of EA stuff. I think the process followed here was kind of reasonable, but my sense is Owen is, among EA leadership, one of the people most thoughtful about integrity and honesty, so on this specific dimension it seems like a step backwards (though there having been any kind of investigation that was followed up on is a mild positive sign).
Shakeel left CEA as Head of Comms. I don’t think this has much to do with FTX, though I do think Shakeel did really mess up post-FTX communications at CEA and I view this as a mildly good sign.
I think these are all the major leadership changes I can think of right now. There are very likely more I am forgetting. At least the ones I have here seem to me unlikely to help much with making EA into less of the kind of thing that would cause future things like FTX, though my guess is some people disagree with me on this.
Edit: Also seems like Nicole Ross is stepping down from the EV board. This also seems quite sad to me, she seemed like the person left on the EV board with the strongest moral compass on the relevant dimension. I don’t know the two people who are joining (Patrick Gruban and Johnstuart Winchell), so can’t speak to them, but on the surface having someone from EA Germany seems good.
Given that it appears EVF will soon be sent off to the scrapyard for disassembly, it seems that changes in EVF board composition, for better or worse, may be less salient than they would have been in 2022 or even much of 2023.
So “a lot of leadership turnover has happened” may not be quite as high-magnitude as it would be had those changes occurred in years past. Furthermore, some of these changes seem less connected to FTX than others, so it’s not clear to me how much turnover has happened as a fairly direct result of FTX. The most related change was Will & Nick leaving the EVF board, but I strongly suspect there was little practical choice there, so it is weak evidence of any internal change in direction.
All that is to say that I am not sure how much the nominal extent of leadership turnover suggests EA is turning over a new leadership leaf or something.
Who on your list matches this description? Maybe Becca if you think she’s thoughtful on these issues? But isn’t that one at most?
Becca, Nicole and Max all stand out as people who I think burned out trying to make things go better around FTX stuff.
Also, Claire leaving her position worsened my expectations of how much Open Phil will do things that seem bad. Alexander also seems substantially worse than Holden on this dimension. I think Holden was on the way out anyways, but my sense was Claire found the FTX-adjacent work very stressful and that played a role in her leaving (I don’t think she agrees with me on many of these issues, but I nevertheless trusted her decision-making more than others in the space).
What are you referring to when you say “Naive consequentialism”?[1] Because I’m not sure that it’s what others reading might take it to mean?
Like you seem critical of the current plan to sell Wytham Abbey, but I think many critics view the original purchase of it as an act of naive consequentialism that ignored the side effects that it’s had, such as reinforcing negative views of EA etc. Can both the purchase and the sale be a case of NC? Are they the same kind of thing?
So I’m not sure the 3 respondents from the MCF and you have the same thing in mind when you talk about naive consequentialism, and I’m not quite sure I am either.
Both here and in this other example, for instance
The issue is that there are degrees of naiveness. Oliver’s view, as I understand it, is that there are at least three positions:
Maximally Naive: Buy nice event venues, because we need more places to host events.
Moderately Naive: Don’t buy nice event venues, because it’s more valuable to convince people that we’re frugal and humble than it is valuable to host events.
Non-Naive: Buy nice event venues, because we need more places to host events, and the value of signaling frugality and humility is in any case lower than the value of signaling that we’re willing to do weird and unpopular things when the first-order effects are clearly positive. Indeed, trying to look frugal here may even cause more harm than benefit, since:
(a) it nudges EA toward being a home for empty virtue-signalers instead of people trying to actually help others, and
(b) it nudges EA toward being a home for manipulative people who are obsessed with controlling others’ perceptions of EA, as opposed to EA being a home for honest, open, and cooperative souls who prize doing good and causing others to have accurate models over having a good reputation.
Optimizing too hard for reputation can get you into hot water, because you’ve hit the sour spot of being too naive to recognize that many others can see what you’re doing and discount your signals accordingly, but not naive enough to just blithely do the obvious right thing without overthinking it.
There are obviously cases where reputation matters for impact, but many people fall into the trap of fixating on reputation when they lack the skill to take into account enough higher-order effects.
(Of course, the above isn’t the only reason people might disagree on the utility of event venues. If you think EA is mainly bottlenecked on research and ideas, then you’ll want to gather people together to solve problems and share their thoughts. If you instead think EA’s big bottleneck is that we aren’t drawing in enough people to donate to GiveWell top charities, then you should think events are a lot less useful, unless maybe it’s a very large event targeted at drawing in new people to donate.)
I think this captures some of what I mean, though my model is also that the “Maximally naive” view is not very stable, in that if you are being “maximally naive” you do often end up just lying to people (because the predictable benefits from lying to people outweigh the predictable costs in that moment).
I do think a combination of being “maximally naive” combined with strong norms against deception and in favor of honesty can work, though in-general people want good reasons for following norms, and arguing for honesty requires some non-naive reasoning.
‘Naive consequentialist plans also seem to have increased since FTX, mostly as a result of shorter AI timelines and much more involvement of EA in the policy space.’
This gives me the same feeling as Rebecca’s original post: that you have specific information about very bad stuff that you are (for good or bad reasons) not sharing.
I don’t particularly feel like my knowledge here is confidential; there would just be a bunch of inferential distance to cross. I do have some confidential information, but it doesn’t feel that load-bearing to me.
This dialogue has a bit of a flavor of the kind of thing I am worried about: https://www.lesswrong.com/posts/vFqa8DZCuhyrbSnyx/integrity-in-ai-governance-and-advocacy?revision=1.0.0
At the risk of over-emphasizing metrics, it seems that at least some of these reforms could and probably should be broken down into SMART goals (i.e., those that are specific, measurable, achievable, relevant, and time-bound).
Example: Better vet risks from funders/leaders might be broken down into sub-tasks like (1) Stratify roles and positions by risk level (critical, severe, moderate, etc.); (2) Determine priorities for implementation and the re-vetting schedule; (3) develop adjudication guidelines; (4) decide who investigates and adjudicates suitability; (5) set measurable and time-bound progress indicators (e.g., the holders of 75% of Critical Risk roles/positions have been investigated and adjudicated by the end of 2025).
[Note: The specific framework above borrows from the framework for security clearances and public-trust background checks in the US government. Obviously things in EA would need to be different, and the risks are different, so this is meant as an example rather than a specific proposal on this point. Yet, some of the core system needs would be at least somewhat similar.]
@Rebecca Kagan I’ve sent you a message and think it could be valuable for me and perhaps other new EV board members to get more information from you in order to learn and avoid mistakes. I’d be happy to take you up on your offer for discussion.
‘and think confusion on this issue has indirectly resulted in a lot of harm.’
Can you say a bit more about this?
I’m very grateful that Rebecca had the integrity to resign her board seat and to share the reason why. I’ve published a new post that shares evidence supporting her allegations that EA leaders made mistakes around FTX and don’t seem interested in helping the community learn the appropriate lessons, and echoes her call for an independent investigation. My post documents important issues where EA leaders have not been forthcoming in their communications, troublesome discrepancies between leaders’ communications and credible media reports, and claims that leaders have made about post-FTX reforms that appear misleading.
Thank you for posting this publicly. It’s useful information for everyone to know.
Wasn’t there some law firm that did an investigation? Plus some other projects listed here.
It would be useful for you to clarify exactly what you’d like to see happen and how this differs from what did happen, even though this might be obvious to someone who is high-context on the situation like you are. Otherwise, I’d have to do a bit of research to figure out what you’re suggesting.
The post has a footnote, which reads:
As far as I know, what has been shared publicly from the investigation is that no one at EVF had actual knowledge of SBF’s fraud.
My take is:
EA (ie mainly elite EAs) fucked up and have considerable responsibility over the FTX thing
EA also fucked up big time with the OpenAI board drama, in a way that blew up less badly than it could have, but reflects even worse on the state of elite EA than FTX does
Public investigations and post-mortems won’t help per se. What would help is a display of leadership that convincingly puts to bed any concern of similarly poor epistemics and practices taking place in the future
Wasn’t the OpenAI thing basically the opposite of the mistake with FTX, though? With FTX, people ignored what appears to have been a fair amount of evidence that a powerful, allegedly ethical businessperson was in fact shady. At OpenAI, people got what they perceived as evidence (and we have no strong evidence they were wrong) that a powerful, allegedly ethically motivated businessperson was in fact shady, so they learned the lessons of FTX and tried to do something about it (and failed).
I think that’s why it’s informative. If EA radically changes in response to the FTX crisis, then it could easily put itself in a worse position (leading to more negative consequences in the world).
The intrinsic problem appears to be in the quality of the governance, rather than a systematic error/blind-spot.
To be clear, I am bringing up the OpenAI drama because it is instructive for highlighting what is and is not going wrong more generally. I don’t think the specifics of what went wrong with FTX point at the central thing that’s of concern. I think the key factor behind EA’s past and future failures comes down to poor-quality decision-making among those with the most influence, rather than the degree to which everybody is sensitive to someone’s shadiness.
(I’m assuming we agree FTX and the OpenAI drama were both failures, and that failures can happen even among groups of competent, moral people that act according to the expectations set for them.)
I don’t know what the cause of the poor decision-making is. Social norms preventing people from expressing disagreement, org structures, unclear responsibilities, conflicts of interests, lack of communication, low intellectual diversity — it could be one of these, a combination, or maybe something totally different. I think it should be figured out and resolved, though, if we are trying to change the world.
So, if there is an investigation, it should be part of a move to making sure EAs in positions of power will consistently handle difficult situations incredibly well (as opposed to just satisfying people’s needs for more specific explanations of what went wrong with FTX).
There are many ways in which EA can create or destroy value, and looking just at our eagerness to ‘do something’ in response to people being shady is a weirdly narrow metric to assess the movement on.
EDIT: would really appreciate someone saying what they disagree with
I don’t get why the EA Forum Team prioritized ‘RCTs are good actually’ (#2) above this post (#4) in its ‘EA Forum Digest #183’. I’d appreciate an explanation for this prioritization, especially given that:
This post has 4x more upvotes (287 compared to 69 for ‘RCTs are good actually’)
This post has 5x more comments (32 compared to 6)
Integrity is a guiding principle of CEA, which the EA Forum Team is part of. ‘RCTs are good actually’ questions the relative importance of integrity processes in doing RCTs. In contrast, this post questions integrity processes in EA.
(I hope this comment comes across as a healthy critique and genuine query, not a cynical ‘gotcha’ attempt. This post went up not long before the digest email was sent so I’m guessing this has something to do with it. Quoted numbers are as of 10:30am Canberra time on 4 April, eleven hours after the digest arrived in my email inbox)
I wouldn’t read too much into the exact ordering in the EA Forum digest. At least if I was making such a digest I would mostly be busy filling it up at all, and it would feel unnecessarily nitpicky and stressful to me to be judged on even the relative ordering of the articles I put in there.
Ah, fair call. I can see how my comment was nitpicky.
I am still concerned about the promotion of the (well-intentioned) RCT post that seemed to undervalue integrity processes for doing RCTs on vulnerable people (in my view). But I appreciate I could have misinterpreted this.
In any case, I can also see that my comment could be experienced as stressful or judgey by the Forum team AND author of the RCT post. I’m genuinely really sorry if this has happened. I appreciate you’ve taken on difficult and important tasks and trust you have the best of intentions with them :) Thanks for your efforts and I’ll keenly be more tactful in future.