There is a new Time article
It seems near-certain (98%) that we'll discuss it.
I would like us to try to have a better discussion about this than we sometimes do.
Consider if you want to engage
I updated a bit on important stuff as a result of this article. You may disagree. I am going to put my "personal updates" in a comment.
Excerpts from the article that I think are relevant. Bold is mine. I have made choices here, and feel free to recommend I change them.
Yet MacAskill had long been aware of concerns around Bankman-Fried. He was personally cautioned about Bankman-Fried by at least three different people in a series of conversations in 2018 and 2019, according to interviews with four people familiar with those discussions and emails reviewed by TIME.
He wasn’t alone. Multiple EA leaders knew about the red flags surrounding Bankman-Fried by 2019, according to a TIME investigation based on contemporaneous documents and interviews with seven people familiar with the matter. Among the EA brain trust personally notified about Bankman-Fried’s questionable behavior and business ethics were Nick Beckstead, a moral philosopher who went on to lead Bankman-Fried’s philanthropic arm, the FTX Future Fund, and Holden Karnofsky, co-CEO of OpenPhilanthropy, a nonprofit organization that makes grants supporting EA causes. Some of the warnings were serious: sources say that MacAskill and Beckstead were repeatedly told that Bankman-Fried was untrustworthy, had inappropriate sexual relationships with subordinates, refused to implement standard business practices, and had been caught lying during his first months running Alameda, a crypto firm that was seeded by EA investors, staffed by EAs, and dedicated to making money that could be donated to EA causes.
MacAskill declined to answer a list of detailed questions from TIME for this story. “An independent investigation has been commissioned to look into these issues; I don’t want to front-run or undermine that process by discussing my own recollections publicly,” he wrote in an email. “I look forward to the results of the investigation and hope to be able to respond more fully after then.” Citing the same investigation, Beckstead also declined to answer detailed questions. Karnofsky did not respond to a list of questions from TIME. Through a lawyer, Bankman-Fried also declined to respond to a list of detailed written questions. The Centre for Effective Altruism (CEA) did not reply to multiple requests to explain why Bankman-Fried left the board in 2019. A spokesperson for Effective Ventures, the parent organization of CEA, cited the independent investigation, launched in Dec. 2022, and declined to comment while it was ongoing.
In a span of less than nine months in 2022, Bankman-Fried’s FTX Future Fund—helmed by Beckstead—gave more than $160 million to effective altruist causes, including more than $33 million to organizations connected to MacAskill. “If [Bankman-Fried] wasn’t super wealthy, nobody would have given him another chance,” says one person who worked closely with MacAskill at an EA organization. “It’s greed for access to a bunch of money, but with a philosopher twist.”
But within months, the good karma of the venture dissipated in a series of internal clashes, many details of which have not been previously reported. Some of the issues were personal. Bankman-Fried could be “dictatorial,” according to one former colleague. Three former Alameda employees told TIME he had inappropriate romantic relationships with his subordinates. Early Alameda executives also believed he had reneged on an equity arrangement that would have left Bankman-Fried with 40% control of the firm, according to a document reviewed by TIME. Instead, according to two people with knowledge of the situation, he had registered himself as sole owner of Alameda.
Bankman-Fried’s approach to managing the business was an even bigger problem. “As we started to implement some of the really basic, standard corporate controls, we found more and more cases where I thought Sam had taken dangerous and egregious shortcuts,” says one person who later raised concerns about Bankman-Fried to EA leaders. “And in many cases [he] had concealed the fact that he had done that.”
“We didn’t know how much money we actually had. We didn’t have a clear accounting record of all the trades we’d done,” Bouscal says. “Sam continued pushing us more and more in this direction of doing a huge number of trades, a huge number of transfers, and we couldn’t account for that.” At the same time, she adds, Bankman-Fried was spending enormous amounts of money because “he didn’t have a distinction between firm capital and trading capital. It was all one pool.”
The meeting was short. Mac Aulay and the management team offered Bankman-Fried a buyout in exchange for his resignation as CEO, and threatened to quit if he refused. Bankman-Fried sat there silently, according to two people present, then got up and left. The next day, he came back with his answer: he would not step down. Instead, the other four members of the management team resigned, along with roughly half of Alameda’s 30 employees. Mac Aulay, an Australian citizen, was forced to leave the country shortly afterward, because her work visa was tied to Alameda.
edit: text added
Bouscal recalled speaking to Mac Aulay immediately after one of Mac Aulay’s conversations with MacAskill in late 2018. “Will basically took Sam’s side,” said Bouscal, who recalls waiting with Mac Aulay in the Stockholm airport while she was on the phone. (Bouscal and Mac Aulay had once dated; though no longer romantically involved, they remain close friends.) “Will basically threatened Tara,” Bouscal recalls. “I remember my impression being that Will was taking a pretty hostile stance here and that he was just believing Sam’s side of the story, which made no sense to me.”
But one of the people who did warn others about Bankman-Fried says that he openly wielded this power when challenged. “It was like, ‘I could destroy you,’” this person says. “Will and Holden would believe me over you. No one is going to believe you.”
Sometime that year, the Centre for Effective Altruism did an internal investigation relating to CEA and Alameda, according to one person who was contacted during the investigation, and who said it was conducted in part by MacAskill. Bankman-Fried left the board of the organization in 2019.
“You vouch for him?” Musk asked MacAskill.
“Very much so!” MacAskill replied. “Very dedicated to making the long-term future of humanity go well.”
None of the early Alameda employees who witnessed Bankman-Fried’s behavior years earlier say they anticipated this level of alleged criminal fraud. There was no “smoking gun,” as one put it, that revealed specific examples of lawbreaking. Even if they knew Bankman-Fried was dishonest and unethical, they say, none of them could have foreseen a fraud of this scope.
Some thoughts on how to hold a good discussion here:
Please let's both write how we feel and how we think about this, but clearly separate them.
Many senior figures just aren't likely to respond to this. Personally, I both believe they have good reasons and take it seriously, but I am confused as to why there has been so little comment. Still, I don't think we should expect responses.
Note that downvotes and disagreevotes are a way of people expressing their views without having to spend the effort to type them, and that is a good thing. It makes the discussion more representative, not less. In particular, in my anecdotal experience of discussions on Facebook, if you give people the ability to vote, you see a much more representative set of participants than if people just write.
Nathan—thanks for sharing the Time article excerpts, and for trying to promote a constructive and rational discussion.
For now, I don’t want to address any of the specific issues around SBF, FTX, or EA leadership. I just want to make a meta-comment about the mainstream media’s feeding frenzy around EA, and its apparently relentless attempts to discredit EA.
There's a classic social/moral psychology of 'comeuppance' going on here: any 'moral activists' who promote new and higher moral standards (such as the EA movement) can make ordinary folks (including journalists) feel uncomfortable, resentful, and inadequate. This can lead to a public eagerness to detect any forms of moral hypocrisy, moral failings, or bad behavior in the moral activist groups. If any such moral failings are detected, they get eagerly embraced, shared, signal-amplified, and taken as gospel. This makes it easier to dismiss the moral activists' legitimate moral innovations (e.g. focusing on scope-sensitivity, tractability, neglectedness, long-termism), and allows a quick, easy return to the status quo ante (e.g. national partisan politics + scope-insensitive charity as usual).
We see this ‘psychology of comeuppance’ in the delight that mainstream media took when televangelists who acted greedy, lustful, and/or mendacious suffered various falls from grace over the last few decades. We see it in the media’s focus on the (relatively minor) moral mis-steps and mis-statements of ‘enemy politicians’ (i.e. those in whatever party the journalists don’t like), compared to the (relatively major) moral harms done by bad government policies. We see it throughout cancel culture, which is basically the psychology of comeuppance weaponized through social media to attack ideological enemies.
I’m not positing an organized conspiracy among mainstream journalists to smear EA. Rather, I’m pointing out a widespread human psychological propensity to take delight in any moral failings of any activist groups that make people feel morally inadequate. This propensity may be especially strong among journalists, since it motivates a lot of their investigative reporting (sometimes in the legitimate public interest, sometimes not).
I think it’s useful to recognize the ‘comeuppance psychology’ when it’s happening, because it often overshoots, and amplifies moderately bad moral errors into looking like they’re super-bad moral errors. When a lot of credible, influential media sources are all piling onto a moral activist group (like EA), it can be extremely stressful, dispiriting, and toxic for the group. It can lead the group to doubt their own valid ideas and values, to collapse into schisms and recriminations, to over-correct its internal moral norms in an overly puritanical direction, and to ostracize formerly valued leaders and colleagues.
I’ve seen EA do a lot of soul-searching over the last few months. Some of it has been useful, valid, and constructive. Some of it has been self-flagellating, guilt-stricken, and counter-productive. I think we should take the Time article seriously, learn what we can from it, and update some of our views of issues and people. But I think our reactions should be tempered and contextualized by understanding that the media’s ‘comeuppance psychology’ can also lead to hasty, reactive, over-corrections.
Thanks for sharing this. I already knew about the phenomenon and had vague thoughts that it might be a significant contributor, but I appreciate you spelling it out.
I suspect that the comeuppance-related behavior is not only about EA being a movement emphasizing ethical innovation, but also about EA recently gaining a lot of visible influence and social status, e.g. via more public outreach campaigns leading to features in cover stories. My impression is that the distribution of social status among public actors and social movements is fairly zero-sum (probably even negative-sum, given the incentives to invest in and stick to defensive PR). This, combined with public discourse being relatively scatterbrained and not very optimized for truth-seeking, leads to a lot of distorted publications that aim more at lowering the social standing of an actor than at giving a clear impression of what is really going on.
MaxRa—I agree this is also part of the mainstream media’s anti-EA mind-set: a zero-sum view of influence, prestige, and power. There are many vested interests (e.g. traditional political institutions, charities, think tanks, media outlets) that are deeply threatened by EA, because they simply don’t care about scope-sensitivity, tractability, neglectedness, or long-termism. Indeed, these EA values directly challenge their day-to-day partisanship and virtue-signaling.
The EA movement may have naively under-estimated the strength of these vested interests, and their willingness to play dirty (through negative PR campaigns) to protect their influence.
What “organization” do you currently have evidence is “running” a negative PR campaign against us because we directly threaten its interests? We’re not a threat to TIME magazine in any way I can see.
David—TIME magazine for decades has promoted standard left/liberal Democrat-aligned narratives that prioritize symbolic partisan issues over scope-sensitive impact.
From the viewpoint of their editors, EA represents an embarrassing challenge to their America-centric, anthropocentric, short-termist, politicized way of thinking about the world’s problems.
We may not be a direct threat to their subscription revenue, newsstand sales, or ad revenue.
But we are a threat to the ideology that their editors have strong interests in promoting—an ideology that may seem invisible if you agree with it, but which seems obviously biased if you don’t agree with it.
This is how partisan propaganda operates in the 21st century: its purveyors try to discredit rival ideologies and world-views with surprising ferocity and speed once they sense a serious threat.
IMHO, EA needs to get a bit less naive about what people and institutions are willing to do to protect their world-views and political agendas.
That doesn’t seem to match with EA being a front cover story last year, and being shown in a positive light.
I feel like an equally informative version of this is "people are more critical about the bad behavior of those they disagree with politically", and then it sounds relevant, yes, but far less sinister and discrediting.
I think that's a somewhat different point. It's often true that people are more critical about bad behavior by their political opponents.
But most of the news stories I read in mainstream media that are critical of EA go far beyond demonizing EA individuals. I sense that these editors & journalists are feeling a panicky, uneasy, defensive reaction to the EA movement’s epistemics and ethics, not just to EA individuals. It reminds me of the defensive, angry reactions that meat-eaters often show when they encounter compelling vegan arguments about animal welfare.
Admittedly this is a rather vague take, but I think we do under-estimate how much the EA perspective threatens many traditional world-views.
Are they issuing comeuppance because you've positioned yourselves above them morally and they're waiting to pounce on any mistake, or are they issuing comeuppance because EA didn't listen to anyone who warned them about obvious scams? I don't think the only reason there is a widespread media attack on EA (which is not really even true) is that you're simply more morally active than them and they are uncomfortable with that. Plenty of journalists are activists themselves who think EA isn't even doing enough, so the witch hunt argument doesn't really make sense, does it?
Hm you say “EA didn’t listen to anyone who warned them about obvious scams”, but the article says:
And
So I’m not sure you can say there were warnings of “obvious scams”.
Also
So I’m not sure it is accurate to say that “EA didn’t listen to” the warnings which were given. I’m certainly curious about the quality of the internal investigation by CEA. I wouldn’t be surprised if there were gaps/it was of low quality. But I also wouldn’t be surprised if it was of expected/good quality given the nature of the complaints made. And I wouldn’t be surprised to find that Sam would have fooled a non-EA, commissioned investigation too, enough that non-EA nonprofits would have felt comfortable taking his money. I mean, I assume Sam would have refused to give internal financial documents to independent investigators, and such a refusal to engage thoroughly from Alameda (“Um, no you can’t see our internal documents? Who do you think you are..?”) would be so normal for an investment firm that it can’t even be seen as a red flag.
I’m not surprised that CEA is refusing to comment til after the commissioned independent investigation is complete, whether or not their 2019 internal investigation was of high, decent, or low quality. I’m not sure which it was yet. I guess I’ll wait to see.
[Edit: In general I'm against pushing to make people responsible for the sins of others without a lot of proof. Especially when the "sinners" had dark triad traits and could have been trying to manipulate the others. I know the general population and journalists don't think that way or have as much patience in that regard, but I'd like it if EAs did. Judge leadership for competence, and replace them if needed, sure, but comeuppance here is still likely to be punishment for trying and failing. And I think punishment should be reserved for the actual sinners themselves. I'm not at all sure anyone who didn't work directly with SBF at Alameda "sinned" here. And if they didn't, EA itself and EA leaders don't "deserve" comeuppance, IMO.
I find comeuppance as journalistic motivation plausible, but I also admit that comeuppance might not be the journalist’s intention with this article, even subconsciously. But it sounds like you are also arguing that comeuppance would be warranted for other reasons here, and I just don’t think so. Comeuppance is moral punishment. I’ll reiterate that it would be fine to push that leadership should be changed (after the investigation). But let the actual sinners, and the sinners alone, be punished for their sins. [[I don’t want to suppress discussion, so sure, place your bets, but please don’t assume moral fault yet.]]
Finally, I agree with you that many journalists are activists themselves. But I’ll also note that when journalists and others say that “EA isn’t doing enough”, they are still potentially using another way to shame moral actors who otherwise appear to be doing more than them. It is a frame that EA has more agency and privilege than them (perhaps unjustly given), but still has less actual goodness and merit than them. So I still find it very plausible that the recent journalists are (consciously or unconsciously) doling out extra blame and shame to put aspiring altruists in their place. And if it is not the journalists themselves doing this, perhaps, as a business, they are catering to the many, many readers who click for and revel in “comeuppance”.]
There are so many parallels to the Christian church
I thought the previous article by Charlotte Alter on sexual misconduct in EA was pretty misleading in a lot of ways, as the top comments have pointed out, since it omitted a lot of crucial context, primarily used examples from the fringes of the community, and omitted various enforcement actions that were taken against the people mentioned in the article, which I think overall produced an article that had some useful truths in it, but made it really quite hard for readers to come to a good map of what is actually going on with that kind of stuff in EA.
This article, in contrast, does not have, as far as I can tell, any major misrepresentations in it. I do not know the details of things like the conversations between Will and Tara, of course, since I wasn't there, and I have a bit of a feeling there is some exaggeration in the quotes by Naia here, but having done my own investigation and having talked to many people about this, the facts and rough presentation of what happened here seem basically correct.
It still has many of the trappings of major newspaper articles, and I think it continues to not be amazingly well-optimized for helping people come to a clear understanding of the details; it is more optimized to tell a compelling story. But at least my perception is that the rough narrative of the story lines up pretty well with what I think indeed happened. When I found out similar details in early 2022, I also had quite a strong reaction that nobody seemed to be acting on all of these warning flags.
I’ve read this comment a few times, and my brain goes ”???” whenever I get to your last clause: “I also had quite a strong reaction that nobody seemed to be acting on all of these warning flags”
I just don't get it in a way that connects to my reading of the article. What are "all these warning flags" and what counts as "inaction"? I don't want to say your take is wrong because you are sort of sharing feelings, but like… according to the article, ex-Alameda employees don't seem to think that those flags were warning flags for the massive fraud and crash-and-burn failure that was to come. And re: inaction, the article says CEA did an internal investigation in 2019 (it drops the info kinda randomly; as you say, the article isn't well-optimized for coming away with an understanding of the details). And idk what new warning flags came after 2019; I'm not seeing any in the article.
I mostly like your comment, but I’m also left wondering… Do you know things not in the article? Did I miss something? [Is this just a “vibe” we will disagree on regardless?] I can’t quite reconcile your take.
[Edit: I had been thinking about asking this over DM for a couple days, but now that this post is no longer an active topic, I figured, “what the hay, ask it in thread”. However you can answer over DM if you prefer, or ignore cuz the post is giving dying breaths, np.]
I don't know what you mean by this. I've definitely talked to many ex-Alameda employees and they totally think this was a warning flag for the massive fraud. Nobody assigned huge probability to the specific scenario that happened, but "Sam causes some kind of huge explosion, or does something pretty fraudulent, or at the very least builds a highly unethical organization" was totally the kind of thing many people were worried about (which is indeed why some of them went around warning others).
I've written about this in many of my other comments. Also, CEA doing an investigation in 2019 seems, I think, wrong, or at least I have never heard of this investigation; and if there was an investigation, it seems like it was worse than useless, by creating a sense that "something" had been done, but really without any actual consequences as far as I can tell.
(Sorry I took so long to come back.) Thanks for clarifying. Hm, I'm surprised then that it really seems like the journalist didn't turn up such quotes about fraud. I do think you are right that many of them expected a crash-and-burn… of some sort. I feel like I should have written something more precise, like "crash and burn 3 years later, after making $15B on paper", which comes with so many signals over the years that if I were such a person I'd end up discounting my early suspicions. If I were in their or CEA's shoes, I'd probably have expected something like what happened with Tara's company, a crash and burn pretty soon after (2019?), so I'd be assuming something got fixed along the way if not. Especially given how an ex-employee (or employees?) talked about burning through the Asian arbitrage dollars with bad trade decisions… they'd have had to fix it, right, or they'd have gone belly-up way sooner? I guess crypto was just that much of a gold rush, with so few "adults in the room", that they could keep fudging their numbers for that long?
Maybe the investigation was worse than useless in the end, but reasonably any action taken was going to start with an investigation. It depends on the quality of the investigation, but for now I'm much more comfortable considering this a bug of the world than something to blame CEA for.
[Edit: This isn’t to say that I think no mistakes were made. But my complaints are not focused on EA leaders specifically (I’m hesitant to call out any single person til CEA’s commissioned investigation is complete), and are different from what the article discusses. I discuss that in shortform]
This is textbook Gell-Mann amnesia
How so? Aren’t these both cases where Habryka has similar amounts of professional knowledge? If not, which case do you think he knew more about?
Based on conversations with people at the time, it seems plausible to me that this is true. However, this is not as serious a concern as you might think: IMHO it was reasonable to consider both SBF and Tara highly untrustworthy at the time. Will trusted SBF too much, but his skepticism of Tara seems justified. Tara’s hedge fund suffered a major loss later, and I heard she showed low integrity in communicating with stakeholders about the loss.
Relevant quote from the article:
“every other long-time EA involved had left because of the same concerns” is significant corroboration though (and a direct quote from an on-the-record source).
Isn’t Caroline Ellison an obvious exception?
Yeah, I think it’s probably fair to say that I worded that a bit too strongly. I do think she fits the reference class significantly less well than many of the other EAs who left (notably, she was only 23 at the time), but I should have been more precise.
Threats are (usually) very bad, even if the person threatened later does something bad. But I still don't actually feel I know that "Will did something really bad to Tara"*, because it's so vague how far a stretch "basically threatened" is from "threatened".
*I'm already convinced he made a major error of judgment in backing Sam, given the existence of the doc listing his misdeeds mentioned in the Time article.
Personal feelings (which I don’t imply are true or actionable)
I am annoyed and sad.
I want to feel like I can trust the leaders of this community are playing by a set of agreed rules. Eg I want to hear from them. And half of me trusts them and half feels I should take an outside view that leaders often seek to protect their own power. The disagreement between these parts causes hurt and frustration.
I also variously feel hurt, sad, afraid, compromised, betrayed.
I feel ugly that I talk so much about my feelings too. It feels kind of obscene.
I feel sad about saying negative things, especially about Will. I sense he's worked really hard. I feel ungrateful and snide. Yuck.
Object level
edit: This article moves me a bit on a number of important things (I originally wrote that I didn't think it moved me much):
We have some more colour around the specific warnings that were given
It becomes much more likely that MacAskill backed Bankman-Fried in the aftermath of the early Alameda disagreements, which was ex-ante dubious and ex-post disastrous. The comment about threatening Mac Aulay is very concerning.
I update a bit that Sam used this support as cover
I sense that people ought to take the accusations of inappropriate sexual relationships more seriously to be consistent, though personally I am uncertain because we don't have much information.
edit: mainly after talking to Naia in the comments, I update towards being uncertain about whether we knew SBF was unusually badly behaved (from previously being confident he wasn't). I.e. maybe we did have the information required to be pretty sure he wasn't worth funding, or to keep him at arm's length. As I say, I am uncertain, but previously I dismissed this.
The 80k interview feels even worse researched/too soft than I previously thought
I still sense that core EAs take this seriously
I still think they don’t think they can talk
I still don't understand why they can't give a clear promise of when they will talk; the lack of this makes me trust them less.
I think we had lots of info that Sam was a bit dodgy before the FTX crash, but that this was not above the normal levels of many CEOs of rapidly growing businesses (I have read the most about Google's early days, and it was very shifty).
Perhaps EA should have higher standards, but I sense not.
I still think that we should have been much more careful linking ourselves reputationally to FTX.
I think the big thing here to note is that even those who saw Sam at his worst did not expect the FTX crash, so I guess the question is "should Sam have been lauded, given his early behaviour at Alameda?" I think no, but given what we knew I am uncertain whether he should have been condemned. Perhaps not condemned either.
I think not talking while there is an investigation is reasonable.
I have made both criticisms and defences of MacAskill and Beckstead and stand by them
I still think they are both very talented, perhaps more so as a result of the growth and wisdom this will engender in them (I have often thought it was dumb that people remove leaders who make mistakes). edit: Though this article does add additional concerns.
I would still like an argument that they shouldn’t be removed from boards, when almost any other org would. I would like the argument made and seen to be made.
I have noticed how hard it is to talk publicly about these things. Recently I’ve updated more in favour that there just are emotional, social and some career costs (and benefits) to trying to have accurate semi-public discussions about these things. People DM me. I hear people are annoyed, I build both social credit and debt. I think less than some say, but more than none.
I cannot deny that I am tempted to moderate my comments so that people will like me, and probably do a bit.
Missing context
I have a reasonable amount of time for the notion that EA leaders should have an independent investigation and I don’t think the article gives that enough credit
Many business leaders are disagreeable people who do grey things. Uber's activities were deliberately illegal in many countries, and I probably on balance support that. edit: I am less in agreement with my tone here. The article mentions this, but in my opinion it should be written in big letters at the top that:
If even they didn’t think this, I don’t think we should be surprised that core EAs didn’t either.
Relevant things people may or may not have known:
In the early days of Alameda, SBF reneged on deals with other EAs and had very poor financial management. Many core EAs knew this
Other:
My gut says that Naia Bouscal is telling the truth, since before I knew her in relation to this, I thought she ran a pretty straight-shooting Twitter account.
Edited to combine two comments (one personal one more general) into one and add points as I think of them.
How do you feel?
What do you think?
It was, and we explicitly said that it was at the time. Many of those of us who left have a ton of experience in startups, and the persistent idea that this was a typical “founder squabble” is wrong, and to be honest, getting really tiresome to hear. This was not a normal startup, and these were not normal startup problems.
(Appreciate the words of support for my honesty, thank you!)
You may indeed believe that and have said that, but the question for us is: Was it reasonable for EA leaders to think this degree of bad behaviour was particularly out of the ordinary for the early days of a startup?
To take Nathan Young’s four examples, looking at some of what major news outlets said prior to 2018 about these companies’ early days...it doesn’t seem that unusual? (Assuming we now know all the key accusations that were made—there may of course have been more.)
Facebook
“The company and its employees have also been subject to litigation cases over the years...with its most prominent case concerning allegations that CEO Mark Zuckerberg broke an oral contract with Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra to build the then-named “HarvardConnection” social network in 2004, instead allegedly opting to steal the idea and code to launch Facebook months before HarvardConnection began… The original lawsuit was eventually settled in 2009, with Facebook paying approximately $20 million in cash and 1.25 million shares.” (Wikipedia, referencing articles from 2007 to 2011)
“Facebook co-founder, Eduardo Saverin, no longer works at Facebook. He hasn’t since 2005, when CEO Mark Zuckerberg diluted Saverin’s stake in Facebook and then booted him from the company.” (Business Insider, 2012)
“we also uncovered two additional anecdotes about Mark’s behavior in Facebook’s early days that are more troubling...— an apparent hacking into the email accounts of Harvard Crimson editors using data obtained from Facebook logins, as well as a later hacking into ConnectU” (Business Insider, 2010)
Google
“Asked about his approach to running the company, Page once told a Googler his method for solving complex problems was by reducing them to binaries, and then simply choosing the best option,” Carlson writes. “Whatever the downside he viewed as collateral damage he could live with.” That collateral damage sometimes consisted of people. In 2001, frustrated with the layer of managers overseeing engineers, Page decided to fire all of them, and publicly explained that he didn’t think the managers were useful or doing any good.” (Quartz, 2014)
“Page encouraged his senior executives to fight the way he and Brin went at it. In meetings with new hires, one of the two co-founders would often provoke an argument over a business or product decision. Then they would both sit back, watching quietly as their lieutenants verbally cut each other down.” (Business Insider, 2014)
Gates
“Allen portrays the Microsoft mogul as a sarcastic bully who tried to force his founding partner out of the firm and to cut his share in the company as he was recovering from cancer.” (Guardian, 2011)
“...he recalls the harsher side of Gates’s character, anecdotes from the early days...Allen stopped playing chess with Gates after only a few games because Gates was such a bad loser he would sweep the pieces to the floor in anger; or how Gates would prowl the company car park at weekends to check on who had come in to work; or the way he would browbeat Allen and other senior colleagues, launching tirades at them and putting them down with the classic denigrating comment: “That’s the stupidest fucking thing I’ve ever heard!”” (Guardian, 2011)
“They met in 1987, four months into her job at Microsoft...meeting her in the Microsoft car park, he asked her out” (Independent, 2008)
Bezos
Obviously his treatment of workers is no secret (and it seems natural for people to think he’s probably always been this way)
It’s not surprising to me if EA leaders thought most startups were like this—we just only hear stories about the ones that make it big.
I’ve only worked for one startup myself and I wasn’t privy to what went on between executives, but: one of them said to a (Black, incidentally) colleague upon firing him “You’ll never work again,” another was an older married man who was grinding up against young female colleagues at an office party (I actually suggested he go home and he said, “No—I’m having fun” and laughed and went back to it), and another made a verbal agreement with some of us to pay us overtime if we worked 12-hour days for several weeks and then simply denied it and never did. [edit: I should clarify this was not an EA org]
Thanks for giving honest quotes on a serious crime. On balance I’m in favour of your giving quotes here and that can’t have been easy (though I feel the article is inaccurate in tone).
I’m sad to hear that this is tiring, though I still am gonna say things I think. Feel free to DM me if you think I’m wrong but don’t want to engage publicly.
My error. By “normal” I don’t mean good, I mean “not unusual”.
I sense there was this level of concern externally about Facebook, that Google did some pretty shifty shit, and that Gates and Bezos were similarly cutthroat.
Do you disagree?
Yes, I disagree. My understanding of what happened at each of those four companies in the early days is qualitatively, categorically different from what happened at Alameda.
It really must feel awful to report serious misconduct and have it not be taken seriously. I’ve had a similar experience and it crushed me mentally.
I’ve been thinking about this situation a lot. I don’t know many details, but I’m trying to sort through what I think EA leadership should have done differently.
My main thought is, maybe in light of these concerns, they should have kept taking his money, but not tied themselves to him as much. But I don’t know many details about how they tied themselves to him. It’s just that handling misconduct cases gets complicated when the accused is one of the 100 richest people in the world. And while it’s clear Sam treated people poorly, broke many important rules and lied frequently, it was not clear he was stealing money from customers. And so it just leaves me confused. But thank God I am not in charge of handling these sorts of things.
I know it’s also not your responsibility to know what to do in situations like this, but I’d be curious to hear what you wished EA leadership/infrastructure had done differently. I think that might help give shape to my thoughts around this situation.
I don’t know if I’m communicating super clearly here, so I want to clarify: this is not meant as a critical comment at all! I hope it doesn’t read as downplaying your experience, because I do feel super alarmed about everything and get the sense EA fucked up big here. I feel fully in support of you here, but I’m worried my confusion makes that harder to read.
Retracting because on reflection I’m like: no one knew he was stealing funds, but I think leadership knew enough of the ingredients to not be surprised by this. It’s not just Sam treating employees poorly; leadership heard that he would lie to people (including investors), mix funds, and play fast and loose with the rules. They may not have known the disastrous ways these would combine. Even so, it seems super bad, and while I’m still confused about what the ideal way to handle it would have been, it does seem clear to me it was egregiously mishandled.
One important thing to note is that when we first warned everyone, he was not yet in the richest 100 people in the world. If they had taken our warnings seriously at the start, he may never have become that rich in the first place.
Agree. And also worth noting it seems like he may have never actually been that rich, but just, you know, lied and did fraud.
The general thing I’m hearing is, with a lot of people who do misconduct, you/CEA will hear about this misconduct relatively early on, and they should take action before things get too large to correct. That, early & decisive action is important. Leadership should be taking a lot more initiative in responding to misconduct.
This tracks with my experience too. I’ve reported professional misconduct, had it not be taken seriously, and watched that person continue to gain power. The whole experience was maddening. So, yeah, +1 to early intervention following credible misconduct reports.
Feel free to DM me, I’ve complained before, I’ll complain again.
Unless somehow it is me, in which case get someone to make a prediction market and bet it up using a newly created google account.
Lol not you. I deleted most detail I included in that comment because I feel like it’s distracting from the SBF discussion (like, this convo should not be used as a soapbox for me), and the case has recently been reopened (which means it’s probably best if I don’t talk about it, and also there might be a good outcome). And I also just worry about pissing people off.
It’s also like, what are people supposed to do with an anonymous comment with a very vague allegation.
Well I’d still like to know. My general stance is that information about misdeeds should more often be public. I wish that I’d known what many knew about SBF.
Hm maybe, I’m not sure. I like to have a professional atmosphere, and public sharing of misdeeds can lead to a culture of like gossip. But, I think it is appropriate to speak publicly about it if the situation was mishandled (in my case, unclear as it’s been reopened) or if the person should be blacklisted (I do not think this is the case here).
Again, I think I’d like more public sharing on professional misdeeds, on the margin. Many EA orgs have mistakes pages for this reason and that’s good.
I think the main problem being faced again and again is that internal reporting lacks teeth.
I think public reporting is an inadequate alternative. It’s a big demand to ask people to become public whistleblowers, especially since most things worth reporting aren’t always black and white. It’s hard to publicly speak out about things if you’re not certain about them (eg because of self-doubt, wondering if it’s even worth bothering, creating a reputation for yourself, etc).
Additionally, the subsequent discourse seems to put additional burden on those speaking out. If I spoke up about something just to see a bunch of people doubt what I’ve said is true (or, like in previous cases, have to engage with the wrongdoer and proofread their account of events) I’d probably regret my choice.
I think that the wiki could solve this: having public records that someone hard-nosed (like me) could write on others’ behalf.
I know that my messing with prediction markets around this hasn’t always gone well (sorry) but I think there is something good in that space too. I think Sam’s “chance of fraud” would have been higher than anyone else’s.
I don’t think gossip ought to be that public or legible.
Firstly, I don’t think it would work for achieving your goals; I would still hesitate about having my opinions uploaded without feeling very confident in them (rumours are powerful weapons and I wouldn’t want to start one if I was uncertain).
Secondly, I don’t think it’s worth the costs of destroying trust. A whole bunch more people will distance themselves from EA if they know their public reputation is on the line with every interaction. (I also agree with Lawrence on the Slack leaks, FWIW).
I see why you might want public info (akin to scandal markets) when people are more high-profile, but I don’t think Sam Bankman-Fried would have passed that bar in 2018.
I disagree. I upload 60% opinions all the time; I’d do the same with gossip if I thought I could control it.
I think we could build systems to handle this. I think there is something whistleblower-market-y in that space.
I think he would have passed that bar as FTX got going. He might have even in 2018.
Fair enough! It could be useful, so I’d be happy to be wrong here.
fwiw I will probably post something in the next ~week (though I’m not sure if I’m one of the people you are waiting to hear from).
I feel glad.
Personally it doesn’t need to be soon, but I really appreciate having something I can hold you to; that makes me not worry that this is an attempt to minimise.
Here’s my tentative take:
It’s really hard to find competent board members that meet the relevant criteria.
Nick (together with Owen) did a pretty good job turning CEA from a highly dysfunctional organization into a functional one during CEA’s leadership change in 2018/2019.
Similarly, while Nick took SBF’s money, he didn’t give SBF a strong platform or otherwise promote him a lot, and instead tried to independently do a good (not perfect, but good enough!) job running a philanthropic organization. While SBF may have wanted to use the philanthropy to promote the FTX/SBF brand, Nick didn’t do this. [Edit: This should not be read as me implying that Will did those things. While I think Will made some mistakes, I don’t think this describes them.]
Continuity is useful. Nick has seen lots of crises and presumably learnt from them.
So, while Will should be removed, Nick has demonstrated competence and should stay on.
(Meta note: I feel frustrated about the lack of distinction between Nick and Will on this question. People are a bit like “Will did a poor job, therefore Nick and Will should be removed from the board.” Please, discuss the two people separately.)
Thanks for making the case. I’m not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I’m highly confident is false.
Namely—it isn’t hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I’ve recruited ~20 very good Board members in my career and have never run an open process that didn’t find at least some qualified, diligent people, who did a good job.
EA makes it hard because it’s weirdly resistant to looking outside a very small group of people, usually high status core EAs. This seems to me like one of those unfortunate examples of EA exceptionalism, where EA thinks its process for finding Board members needs to be sui generis. EA makes Board recruitment hard for itself by prioritising ‘alignment’ (which usually means high status core EAs) over competence, sometimes with very bad results (e.g. ending up with a Board that has a lot of philosophers and no lawyers/accountants/governance experts).
It also sometimes sounds like EA orgs think their Boards have higher entry requirements than the Boards of other well-run charities. Ironically, this typically produces very low quality EA Boards, mainly made up of inexperienced people without relevant professional skills, but who are thought of as ‘smart’ and ‘aligned’.
Of course, it will be hard to find new Board members right now, because CEA’s reputation is in tatters and few people will want to join an organisation that is under serious legal threat. But it seems at best a toss up whether it’s worth keeping tainted Board member(s) because they might be tricky to replace, especially when they have recused themselves from literally the single biggest issue facing the charity.
And even if one really values “alignment,” I suspect that a board’s alignment is mostly that of its median member. That may have been less true at EVF where there were no CEOs, but boards are supposed to exercise their power collectively.
On the other hand, a board’s level of legal, accounting, etc. knowledge is not based on the mean or median; it is mainly a function of the most knowledgeable one or two members.
So if one really values alignment on say a 9-member board, select six members with an alignment emphasis and three with a business skills emphasis. (The +1 over a bare majority is to keep an alignment majority if someone has to leave.)
You seem to imply that it’s fine if some board members are not value-aligned as long as the median board member is. I strongly disagree: This seems a brittle setup because the median board member could easily become non-value-aligned if some of the more aligned board members become busy and step down, or have to recuse due to a COI (which happens frequently), or similar.
I’m very surprised that you think a 3 person Board is less brittle than a bigger Board with varying levels of value alignment. How do 3 person Boards deal with all the things you list that can affect Board make up? They can’t, because the Board becomes instantly non-quorate.
I expect a 3-person board with a deep understanding of and commitment to the mission to do a better job selecting new board members than a 9-person board with people less committed to the mission. I also expect the 9-person board members to be less engaged on average.
(I avoid the term “value-alignment” because different people interpret it very differently.)
I don’t agree with that characterization.
On my 6/3 model, you’d need four recusals among the heavily aligned six and zero among the other three for the median member to be other; three for the median to be between heavily aligned and other. If you’re having four of six need to recuse on COI grounds, there are likely other problems with board composition at play.
Also, suggesting that alignment is not the “emphasis” for each and every board seat doesn’t mean that you should put misaligned or truly random people in any seat. One still should expect a degree of alignment, especially in seat seven of the nine-seat model. Just like one should expect a certain level of general board-member competence in the six seats with alignment emphasis.
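The recusal arithmetic in the 6/3 model above can be made concrete with a quick sketch. Everything here is illustrative: the alignment “scores” (2 for an alignment-emphasis seat, 1 for a skills-emphasis seat) are made-up numbers for the purpose of the example, not anything claimed in the thread.

```python
# Hypothetical sketch of the 6/3 board model discussed above: six seats
# filled with an alignment emphasis (score 2), three with a skills
# emphasis (score 1). The scores are illustrative assumptions only.
from statistics import median

def median_alignment(aligned_recused: int, skills_recused: int) -> float:
    """Median alignment score of the board members left after recusals."""
    board = [2] * (6 - aligned_recused) + [1] * (3 - skills_recused)
    return median(board)

# Full board: the median member is alignment-emphasis.
assert median_alignment(0, 0) == 2
# Three aligned members recuse: the median sits between the two groups.
assert median_alignment(3, 0) == 1.5
# Four aligned members recuse: the median member is now skills-emphasis.
assert median_alignment(4, 0) == 1
```

This matches the point being made: it takes four of the six aligned seats recusing (with none of the other three) before the board’s median member stops being alignment-emphasis.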
I think 9-member boards are often a bad idea because they tend to have lots of people who are shallowly engaged, rather than a smaller number of people who are deeply engaged, tend to have more diffusion of responsibility, and tend to have much less productive meetings than smaller groups of people. While this can be mitigated somewhat with subcommittees and specialization, I think the optimal number of board members for most EA orgs is 3–6.
This is a really good comment!
Non-profit boards have 100% legal control of the organisation– they can do anything they want with it.
If you give people who aren’t very dedicated to EA values legal control over EA organisations, they won’t be EA organisations for very long.
There are under 5,000 EA community members in the world – most of them have no management experience.
Sure, you could give up 1/3 of the control to people outside of the community, but this doesn’t solve the problem (it only reduces the need for board members by 1/3).
The assumption that this 1/3 would come from outside the community seems to rely on an assumption that there are no lawyers/accountants/governance experts/etc. in the community. It would be more accurate, I think, to say that the 1/3 would come from outside what Jack called “high status core EAs.”
Sorry, that’s what I meant. I was saying there are 5,000 community members. If you want the board to be controlled by people who are actually into EA, then you need 2/3 to come from something like that pool. Another 1/3 could come from outside (though not without risk). I wasn’t talking about what fraction of the board should have specific expertise.
Another clarification, what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing. I was using the number of community members as a rough indication for how many people exist who actually apply the principles – I don’t mind if they actively participate in the community or not, and think it’s good if some people don’t.
Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not by fully aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give, having followed 80k’s advice 10 years ago.
Ultimately, if you think there is enough value in EA arguments about how to do good, you should be able to find smart people from other walks of life who: 1) have enough overlap with EA thinking (because EA isn’t 100% original, after all) to have a reasonable starting point; 2) have more relevant leadership experience and demonstrably good judgement; and, linked to the previous two, 3) are mature enough in their opinions and/or achievements to be less susceptible to herding.
If you think that EA orgs won’t remain EA orgs unless you appoint “value aligned” people, it implies our arguments aren’t strong enough for people who we think should be convinced by them. If that’s the case, it’s a pretty good indicator that your argument might not be that strong, and you should reconsider.
To be concrete, I expect a board of 50% card-carrying EAs and 50% experienced, high-achieving non-EAs with a good understanding of similar topics (e.g. x-risk, evidence-based interventions) to appraise arguments about which higher-/lower-risk options to fund much better than a board of 100% EAs with the same epistemic and discourse background and limited prior career/board experience.
Edit- clarity and typos
I agree that there’s a broader circle of people who get the ideas but aren’t “card carrying” community members, and having some of those on the board is good. A board definitely doesn’t need to be 100% self-identified EAs.
Another clarification is that what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing.
This said, I think there are surprisingly few people out there like that. And due to the huge scope differences in the impact of different actions, there can be huge differences between what someone who is e.g. 60% into applying EA principles would do compared to someone who is 90% into it (using a made up scale).
I think a thing that wouldn’t make sense would be for, say, Extinction Rebellion to appoint people to their board who “aren’t so sold on climate change being the world’s biggest problem”. Due to the point above, you can end up in something that feels like this more quickly than is intuitive.
Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Also I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as EAs, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and by me as well often). But within social-justice circles, emotive language can be seen as a signal of value alignment. Basically, there’s a lot more to unpack with “value alignment” and what it means in reality vs. what we say it ostensibly means.
Also, to tackle your response (and maybe I’m reading between the lines too hard / being too harsh on you here), I feel there’s goalpost-shifting between your original post about EA value alignment and your now stating that people who understand broader principles are also “value aligned”.
Another reflection: the more we speak about “value alignment” being important, the more it incentivises people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give permission and incentivise people to bring those.
Yes—but the issue plays itself out one level up.
For instance, most people aren’t very scope sensitive – firstly in their intuitions, and especially when it comes to acting on them.
I think scope sensitivity is a key part of effective altruism, so appointing people who are less scope sensitive to boards of EA orgs is similar to XR appointing people who are less concerned about climate change.
I agree and think this is bad. Another common problem is interpreting agreement on what causes & interventions to prioritise as ‘value alignment’, whereas what actually matters are the underlying principles.
It’s tricky because I think these things do at least correlate with the real thing. I don’t feel like I know what to do about it. Besides trying to encourage people to think more deeply, perhaps trying one or two steps harder to work with people one or two layers out from the current community is a good way to correct for this bias.
That’s not my intention. I think a strong degree of wanting to act on the values is important for the majority of the board. That’s not the same as self-identifying as an EA, but merely understanding the broad principles is also not sufficient.
(Though I’m happy if a minority of the board are less dedicated to acting on the values.)
(Another clarification from earlier is that it also depends on the org. If you’re doing an evidence-based global health charity, then it’s fine to fill your board with people who are really into global health. I also think it’s good to have advisors from clearly outside of the community – they just don’t have to be board members.)
I agree and this is unfortunate.
To be clear I think we should try to value other perspectives about the question of how to do the most good, and we should aim to cooperate with those who have different values to our own. We should also try much harder to draw on operational skills from outside the community. But the question of board choice is firstly a question of who should be given legal control of EA organisations.
Now having read your reply, I think we’re likely closer together than apart on views. But...
I don’t think this is how I see the question of board choice in practice. In theory yes, for the specific legal, hard mechanisms you mention. But in practice, in my experience, boards significantly check and challenge the direction of the organisation, so the collective ability of board members to do this should be factored into appointment decisions, which may trade off against legal control being put in the ‘safest pair of hands’.
That said, I feel back and forth responses on the EA forum may be exhausting their value here; I feel I’d have more to say in a brainstorm about potential trade-offs between legal control and ability to check and challenge, and open to discussing further if helpful to some concrete issue at hand :)
Two quick points:
Yes, legal control is the first consideration, but governance requires skill not just value-alignment
I think in 2023 the skills you want largely exist within the community; it’s just that (a) people can’t find them easily (hence I founded the EA Good Governance Project) and (b) people need to be willing to appoint outside their clique
Alignment is super-important for EA organisations; I would put it as priority number 1, because if you’re aligned to EA values then you’re at least trying to do the most good for the world, whereas if you’re not, you may not even be trying to do that.
For an example of a not-for-profit non-EA organisation that has suffered from a lack of alignment in recent times, I would point to the Wikimedia Foundation, which has regranted excess funds to extremely dubious organisations: https://twitter.com/echetus/status/1579776106034757633 (see also: https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2022-10-31/News_and_notes ). This is quite apart from the encyclopedia project itself arguably deviating from its stated goals of maintaining a neutral point of view, which is a whole other level of misalignment, but I won’t get into that here.
Hi Robin—thanks for this and I see your point. I think Jason put it perfectly above—alignment is often about the median Board member, where expertise is about the best Board member in a given context. So you can have both.
I have also seen a lot of trustees learn about the mission of the charity as part of the recruitment process and we shouldn’t assume the only aligned people are people who already identify as EAs.
The downsides of prioritising alignment almost to the exclusion of all else are pretty clear, I think, and harder to mitigate than the downsides of lacking technical expertise, which takes years to develop.
The nature of most EA funding also provides a check on misalignment. An EA organization that became significantly misaligned from its major funders would quickly find itself unfunded. As opposed to Wikimedia, which had/has a different funding structure as I understand it.
TL;DR: You’re incorrectly assuming I’m into Nick mainly because of value alignment, and while that’s a relevant factor, the main factor is that he has an unusually deep understanding of EA/x-risk work that competent EA-adjacent professionals lack.
I might write a longer response. For now, I’ll say the following:
I think a lot of EA work is pretty high-context, and most people don’t understand it very well. E.g., when I ran EA Funds work tests for potential grantmakers (which I think is somewhat similar to being a board member), I observed that highly skilled professionals consistently failed to identify many important considerations for deciding on a grant. But, after engaging with EA content at an unusual level of depth for 1-2 years, they can improve a lot (i.e., there were some examples of people improving their grantmaking skills a lot). Most such people never end up attaining this level of engagement, so they never reach the level of competence I think would be required.
I agree with you that too much of a focus on high status core EAs seems problematic.
I think value-alignment in a broader sense (not tracking status, but actual altruistic commitment) matters a great deal. E.g., given the choice between personal prestige and impact, would the person reliably choose the latter? I think some high-status core EAs who were on EA boards were not value-aligned in this sense, and this seems bad.
EDIT: Relevant quote—I think this is where Nick shines as a board member:
@Jack Lewars is spot on. If you don’t believe him, take a look at the list of ~70 individuals on the EA Good Governance Project’s trustee directory. In order to effectively govern you need competence and no collective blindspots, not just value alignment.
I’m definitely not saying value alignment is the only thing to consider.
I have a fair amount of accounting / legal / governance knowledge and as part of my board commitments think it’s a lot less relevant than deeply understanding the mission and strategy of the relevant organization (along with other more relevant generalist skills like management, HR, etc.). Edit: Though I do think if you’re tied up in the decade’s biggest bankruptcy, legal knowledge is actually really useful, but this seems more like a one-off weird situation.
It seems intuitive that your chances of ending up in a one off weird situation are reduced if you have people who understand the risks properly in advance. I think a lot of what people with technical expertise do on Boards is reduce blind spots.
I think that’s false; I think the FTX bankruptcy was hard to anticipate or prevent (despite warning flags), and accepting FTX money was the right judgment call ex ante.
I think Jack’s point was that having some technical expertise reduces the odds of a Bad Situation happening at a general level, not that it would have prevented exposure to the FTX bankruptcy specifically.
If one really does not want technical expertise on the board, a possible alternative is hiring someone with the right background to serve as in-house counsel, corporate secretary, or a similar role—and then listening to that person. Of course, that costs money.
I read his comment differently, but I’ll stop engaging now as I don’t really have time for this many follow-ups, sorry!
It’s clear to me that the pre-FTX collapse EVF board, at least, needed more “lawyers/accountants/governance” expertise. If someone had been there to insist on good governance norms, I don’t believe that statutory inquiry would likely have been opened—at a minimum it would have been narrower. Given the very low base rate of SIs, I conclude that the external evidence suggests the EVF UK board was very weak in legal/accounting/governance etc. capabilities.
Overall, I think Nick did the right thing ex ante when he chose to run the Future Fund and accept SBF’s money (unless he knew specifics about potential fraud).
If he should be removed from the board, I think we either need an argument of the form “we have specific evidence to doubt that he’s trustworthy” or “being a board member requires not just absence of evidence of untrustworthiness, but proactively distancing yourself from any untrustworthy actors, even if collaborating with them would be beneficial”. I don’t buy either of these.
“[K]new specifics about potential fraud” seems too high a standard. Surely there is some percentage X at which “I assess the likelihood that these funds have been fraudulently obtained as X%” makes it unacceptable to serve as distributor of said funds, even without any knowledge of specifics of the potential fraud.
I think your second paragraph hinges on the assumption that Nick merely had sufficient reason to see SBF as a mere “untrustworthy actor[]” rather than something more serious. To me, there are several gradations between “untrustworthy actor[]” and “known fraudster.”
(I don’t have any real basis for an opinion about what Nick in particular knew, by the way . . . I just think we need to be very clear about what levels of non-specific concern about a potential bad actor are or are not acceptable.)
I agree with you: When I wrote “knew specifics about potential fraud”, I meant it roughly in the sense you described.
To my current knowledge, Nick did not have access to evidence that the funds were likely fraudulently obtained. (Though it’s not clear that I would know if it were the case.)
I think I’d bet at like 6% that evidence will come out in the next 10 years that Nick knew funds were likely fraudulently obtained. I think by normal definitions of those words it seems very unlikely to me.
I would be willing to take the other side of this bet, if the definition of “fraud” is restricted to “potentially stealing customer funds” and excludes things like lying to investors.
So: excludes securities fraud?
That was an example; I’d want it to exclude any type of fraud except for the large-scale theft from retail customers that is the primary concern with FTX.
Although at that point—at least in my view—the bet is only about a subset of knowledge that could have rendered it ethically unacceptable to be involved with FTXFF. Handing out money which you believed more likely than not to have been obtained by defrauding investors or bondholders would also be unacceptable, albeit not as heinous as handing out money you believed more likely than not to have been stolen from depositors. (I also think the ethically acceptable risk is less than “more likely than not” but kept that in to stay consistent with Nathan’s proposed bet which used “likely.”)
What if the investor decided to invest knowing there was an X% chance of being defrauded, and thought it was a good deal because there’s still an at least (100-X)% chance of it being a legitimate and profitable business? For what number X do you think it’s acceptable for EAs to accept money?
Fraud base rates are 1-2%; some companies end up highly profitable for their investors despite having committed fraud. Should EA accept money from YC startups? Should EA accept money from YC startups if they e.g. lied to their investors?
I think large-scale defrauding unsuspecting customers (who don’t share the upside from any risky gambles) is vastly worse than defrauding professional investors who are generally well-aware of the risks (and can profit from FTX’s risky gambles).
(I’m genuinely confused about this question; the main thing I’m confident in is that it’s not a very black-and-white kind of thing, and so I don’t want to make my bet about that.)
I don’t know the acceptable risk level either. I think it is clearly below 49%, and includes at least fraud against bondholders and investors that could reasonably be expected to cause them to lose money from what they paid in.
It’s not so much the status of the company as a fraud-committer that is relevant, but the risk that you are taking and distributing money under circumstances that are too close to conversion (e.g., that the monies were procured by fraud and that the investors ultimately suffer a loss). I can think of two possible safe harbors under which other actors’ acceptance of a certain level of risk makes it OK for a charity to move forward:
In many cases, you could imply a maximum risk of fraud that the bondholders or other lenders were willing to accept from the interest rate minus inflation minus other risk of loss—that will usually reveal that bondholders at least were not factoring in more than a few percent fraud risk. The risk accepted by equity holders may be greater, but usually bondholders take a haircut in these types of situations—and the marginal dollars you’re spending would counterfactually have gone to them in preference to the equity holders. However, my understanding is that FTX didn’t have traditional bondholders.
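The back-of-the-envelope calculation described above can be sketched in a few lines. This is a deliberately crude model with purely hypothetical numbers (the function name and all figures are mine, not from the thread): it assumes the lender prices the bond to break even in real terms and loses everything in the fraud scenario, so whatever yield is left after inflation and ordinary default risk is an upper bound on the fraud risk the lender implicitly accepted.

```python
def implied_fraud_risk_ceiling(nominal_yield, inflation, other_loss_risk):
    """Rough ceiling on the annual fraud probability a bondholder priced in.

    Crude model: the lender expects to at least break even in real terms,
    and recovers nothing in the fraud scenario. Whatever spread remains
    after inflation and ordinary (non-fraud) loss risk is the most fraud
    risk the lender could have knowingly accepted.
    """
    return max(0.0, nominal_yield - inflation - other_loss_risk)

# Hypothetical example: a 7% coupon, 3% inflation, and 2% ordinary
# default risk leave at most ~2% of implied fraud-risk budget,
# consistent with the "not more than a few percent" point above.
ceiling = implied_fraud_risk_ceiling(0.07, 0.03, 0.02)
```

Of course real bond pricing involves liquidity and term premia this ignores; the point is only that typical spreads leave little room for double-digit fraud probabilities.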
If the investors were sophisticated, I think the percentage of fraud risk they accepted at the time of their investment is generally a safe harbor. For FTX, I don’t have any reason to believe this was higher than the single digits; as you said, the base rate is pretty low and I’d expect the public discourse pre-collapse to have been different if it were believed to be significantly higher.
However, those safe harbors don’t work if the charity has access to inside information (that bondholders and equity holders wouldn’t have) and that inside information updates the risk of fraud over the base rates adjusted for information known to the bond/equity holders. In that instance, I don’t think you can ride off of the investor/bondholder acceptance of the risk as low enough.
There is a final wrinkle here—for an entity as unregulated as FTX was, I don’t believe it is plausible to have a relatively high risk of investor fraud and a sufficiently low risk of depositor fraud. I don’t think people at high risk of cheating their investors can be deemed safe enough to take care of depositors. So in this case there is a risk of investor fraud that is per se unacceptable, and a risk of investor fraud that implies an unacceptable risk of depositor fraud. The acceptable risk of investor fraud is the lower of the two.
Exception: If you can buy insurance to ensure that no one is worse off because of your activity, there may be no maximum acceptable risk. Maybe that was the appropriate response under these circumstances—EA buys insurance against the risk of fraud in the amount of the donations, and returns that to the injured parties if there was fraud at the time of any donation which is discovered within a six-year period (the maximum statute of limitations for fraudulent conveyance in any U.S. state to my knowledge). If you can’t find someone to insure you against those losses at an acceptable rate . . . you may have just found your answer as to whether the risk is acceptable.
I am quite confident we knew he was unusually badly behaved by EA standards. I think a bunch of people thought he was not that far of an outlier by Silicon Valley founder standards, though I think they were wrong.
I do indeed think a bunch of people are taking this quite seriously. I do think that in general the FTX explosion is hugely changing a lot of people’s outlook on EA and how to relate to the world.
To be clear, I am quite confident the legal consequences for talking would be quite minor and have talked to some people with extensive legal experience about this. At this point there is no good legal reason to not talk.
I think people aren’t talking because it seems stressful and because they are worried being more associated with FTX would be bad for PR. I also think a lot of people picked up on a vibe that core people aren’t supposed to talk about FTX stuff because it makes us look bad, and because there used to be a bunch of organizational policies in place that did prevent talking—but I think those are mostly gone by now.
While I agree with a strict reading of this comment, I want to point out that there was another red flag around FTX/Alameda that several people in the EA leadership likely knew about since at least late 2021, which in my opinion was more severe than the matters discussed in the Time article and which convinced me back in 2021 that FTX/Alameda were putting a lot of effort into consistently lying to the public.
In particular, in October 2021, I witnessed (sentence spread across bullet points to give emphasis to each part of it):
A high-status, important EA (though not one of the very top people)
who had worked at Alameda after FTX was founded, and left before October 2021
publicly offhandedly implying that “FTX” was the new name for Alameda (seemingly unaware that they were supposed to be distinct companies)
in a place where another very high-status EA (this time probably one of the very top people, or close to it) clearly took note of it
I won’t give more details publicly, in order to protect the person’s privacy, but this happened in a very public place.
It wasn’t just me who was aware of this. Nate Soares reported having heard the rumor that “Alameda Research had changed its name to FTX” as well, though I think he left out one important aspect of it: that this rumor was being shared by former insiders, not by e.g. random clueless crypto people.
In case you don’t understand why the rumor was a big deal, I explained it in my comment in Nate Soares’s post. Quoting from it:
I suspect you will not be very impressed by this, and ask me why I didn’t share my concerns widely at the time. But I was just a low-status person with no public platform and only one or two friends. I shared my concerns with my partner (in fact, more than once, because I was so puzzled by that comment) but not with people I’m not close to. [ETA: in retrospect, I think a more correct explanation would be to say that I probably stayed silent because I guessed I’d lose status if I’d spoken up. 🙁]
I’m not sure why this wasn’t taken seriously by the EA leadership. This seems to be a pretty clear example of the FTX/Alameda leadership blatantly lying to the public about their internal workings and prominent EAs knowing about that.
I of course knew that FTX and Alameda were very closely related, that Sam and Caroline were dating and that the two organizations had very porous boundaries. But I did not know that the rest of the world did not know this, and I had no idea that it mattered much legally. They were both located in the Bahamas, and I definitely did not know enough finance law to know that these two organizations were supposed to operate at arms length.
Maybe there was someone else who did successfully put the pieces together and then stayed quiet, but I would definitely not treat this as a “smoking gun” since I saw it myself, as did many people I know, and neither me nor other people realized that this was actually pretty shady.
And legally there were actually distinct companies, at least the same way GiveWell and OpenPhil were different companies back in 2018 when they still worked in the same office and shared some leadership, and that itself doesn’t raise any flags for me. FTX and Alameda definitely actually were different companies with some offices being just for Alameda (like the one in Berkeley), though they definitely did not operate “at arms length” as I think the legal situation required.
I’m similar. In general I’m noticing a mismatch between how the article leaves me feeling versus what it leaves me thinking. E.g. concerns about the other half of the FTX/Alameda split are labelled as just “internal politics,” but when EA leaders treat the concerns about SBF as “typical startup squabbles” that’s labelled “downplaying,” “rationalizing,” or “dismissing.” (Obviously with the benefit of hindsight we think that’s fair, but we don’t know how different the two sides actually looked to outsiders at the time.)
By the way, I really like your approach of separating out feelings and thoughts.
What do you mean by “concerns about the other half of the FTX/Alameda split”?
“[Name], who had perhaps raised the loudest concerns about Bankman-Fried, was distrusted by some EA leaders because of internal politics during her time at the Centre for Effective Altruism”
I tried to address this argument with the point about every other long-time EA leaving Alameda for the same reasons. I’ve avoided naming those other EAs out of respect for their privacy, but they include multiple very core and well-respected EAs. The parallel you’re trying to draw here just really doesn’t hold up.
I don’t think you should.
Re: “In the weeks leading up to that April 2018 confrontation with Bankman-Fried and in the months that followed, Mac Aulay and others warned MacAskill, Beckstead and Karnofsky about her co-founder’s alleged duplicity and unscrupulous business ethics” -
I don’t remember Tara reaching out about this, and I just searched my email for signs of this and didn’t see any. I’m not confident this didn’t happen, just noting that I can’t remember or easily find signs of it.
In terms of what I knew/learned 2018 more generally, I discuss that here.
EA leaders should be held to high standards, and it’s becoming increasingly difficult to believe that the current leadership has met those standards. I’m open to having my mind changed when the investigation is concluded and the leaders respond (and we get a better grasp on who knew what when). As it stands now, I would guess it would be in the best interest of the movement (in terms of avoiding future mistakes, recruitment, and fundraising) for those who have displayed significantly bad judgement to step down from leadership roles. I recognize that they have worked very hard to do good, and I hope they can continue helping in non-leadership roles.
I disagree.
I just think being a leader would be really hard. I am much less public than these people and I find dealing with that difficult. Now imagine billions of dollars and thousands of people looking to you to be a good role model.
I think we should hold our leaders to high standards, but we should be gracious when they fail. While I have criticisms of Will MacAskill, I think he’s one of the best we have.
I think I’d prefer to see us discuss what the errors were and see if he can work on them, because he’s already way ahead of most of us in terms of relevant competences. I am open to people stepping down, but I don’t think permanently. We all make errors, it’s about whether we can credibly convince relevant people that we won’t make them again.
Look, I think Will has worked very hard to do good and I don’t want to minimize that, but at some point (after the full investigation has come out) a pragmatic decision needs to be made about whether he and others are more valuable in the leadership or helping from the sidelines. If the information in the article is true, I think the former has far too great a cost.
This was not a small mistake. It is extremely rare for charitable foundations to be caught up in scandals of this magnitude, and this article indicates that a significant amount of the fallout could have been prevented with a little more investigation at key moments, and that clear signs of unethical behaviour were deliberately ignored. I think this is far from competent.
We are in the charity business. Donors expect high standards when it comes to their giving, and bad reputations directly translate into dollars. And remember, we want new donors, not just to keep the old ones. I simply don’t see how “we have high standards, except when it comes to facilitating billion dollar frauds” can hold up to scrutiny. I’m not sure we can “credibly convince people” if we keep the current leadership in place. The monetary cost could be substantial.
We also want to recruit people to the movement. Being associated with bad behaviour will hurt our ability to recruit people with strong moral codes. Worse though, would be if we encouraged “vultures”. A combination of low ethical standards and large amounts of money would make our movement an obvious target for unethical exploiters, as appears to have already happened with SBF.
Being a brilliant philosopher or intellectual does not necessarily make you a great leader. I think we can keep the benefits of the former while recognizing that someone is no longer useful at the latter. Remaining in a leadership position is a privilege, not a right.
If a global health organization made a mistake in judgment that caused [its] effectiveness to permanently decline by (say) 30%, and it was no longer effective in comparison to alternatives we could counterfactually fund, I suspect very few of us would support continuing to fund it. I would find it potentially concerning, from a standpoint of impartiality, if we do not apply the same standard to leaders. After all, we didn’t protect the hypothetical global health organization’s beneficiaries merely out of a sense of fairness.
I see the argument that applying such a standard to leaders could discourage them from making EV-positive bets. However, experiencing an adverse outcome on most EV-positive bets won’t materially impact a leader’s long-term future effectiveness. Moreover, it could be difficult to evaluate leaders from a 100% ex ante perspective. There’s a risk of evaluating successful bets by their outcome (because outsiders may not understand that there was a significant bet + there is low incentive to evaluate the ex ante wisdom of taking a risk if all turned out well) but unsuccessful bets from an ex ante perspective. That would credit the leader with their winnings but not with most of their losses, and would overincentivize betting.
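The asymmetry described above can be made concrete with a toy simulation (all numbers are hypothetical and mine, purely for illustration): if successful bets are credited at face value but failed bets are partly excused because they are judged “from an ex ante perspective,” a bet that is negative-EV in reality can look positive to the evaluator.

```python
import random

def perceived_vs_true_ev(p, excuse_factor, n=100_000, seed=0):
    """Toy model of asymmetric bet evaluation.

    Each bet wins +1 with probability p and loses -1 otherwise, so the
    true EV per bet is 2p - 1. Under asymmetric evaluation, wins count
    fully (+1) but losses are partly excused as "ex ante reasonable,"
    counting only -excuse_factor. Returns (perceived EV, true EV).
    """
    rng = random.Random(seed)
    perceived = true = 0.0
    for _ in range(n):
        if rng.random() < p:
            perceived += 1
            true += 1
        else:
            perceived -= excuse_factor
            true -= 1
    return perceived / n, true / n

# Hypothetical case: bets win 40% of the time (true EV = -0.2), but the
# evaluator only charges half of each loss. The perceived EV comes out
# around +0.1, so the evaluator rewards a leader for bets that are
# actually value-destroying—exactly the over-incentive described above.
perceived, true_ev = perceived_vs_true_ev(p=0.4, excuse_factor=0.5)
```

The same mechanism runs in reverse if only failures are scrutinized while successes go unexamined, which is why evaluating both outcomes under one standard matters.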
I suspect a big part of the disagreement here is whether this aspect of the analogy holds?
Right—I think a major crux between Nathan’s and Titotal’s comments involves assumptions or beliefs about the extent to which certain leaders’ long-term effectiveness has been impaired. My gut says there will ultimately be very significant impairment as applied to public-facing / high-visibility roles, less so for certain other roles.
If almost all current leaders would be better than any plausible replacement, even after a significant hit to long-term effectiveness, then I think that says something about the leadership development pipeline that is worth observing.
I think it’s relatively obvious that there’s a dearth of competent leadership/management in EA. I think this is even more extreme for EA qua EA, since the personal costs : altruistic rewards tradeoff for EA qua EA work is arguably worse than e.g. setting up an AI governance initiative or leading a biosecurity project.
I don’t think we actually want to incentivise positive-EV bets as such? Some amount of risk aversion ought to be baked in. Going solely by EV only makes sense if you make many repeated uncorrelated bets, which isn’t really what Longtermists are doing.
Fair enough—my attempted point was to acknowledge concerns that being too quick to replace leaders when a bad outcome happened might incentivize them to be suboptimally conservative when it comes to risk.
Isn’t part of this considering whether Will’s comparative advantage is as a Board member? It seems very unlikely to me that it is, versus being a world class philosopher and communicator.
So I agree with your general point that leaders who make mistakes might not need to resign, but in the specific case I can’t see how Will is most impactful by being a Board member at really any org, as opposed to e.g. a philosophical or grant-making advisor.
That assumes their level of ‘public leader-ness’ is fixed. You might prefer to have no one trying to represent EA publicly than to have people doing it sub-optimally.
I think I agree with both of these, actually: EA needs unusually good leaders, possibly better than we can even expect to attract.
(Compare EA with, say, being an elite businessperson or politician or something.)
I am also eager to see what the investigation concludes, but I’m pretty convinced at this point that EA leaders made big mistakes.
It’s not obvious to me (yet) that they should’ve known not to take Sam’s money—non-profits accept donations from dubious characters all the time. Even if EA leaders thought Sam was sketchy (which it appears some did), it’s not clear to me they should’ve known Sam was don’t-take-money-from-this-person bad. This is a line non-profits walk all the time, and many have erred on the side of taking money from people they shouldn’t have taken money from.
But I cannot wrap my head around why—knowing what it appears they knew then—anyone thought it was a good idea to put this guy on a pedestal; to elevate him as a moral paragon and someone to emulate; to tie EA’s reputation so closely to his. It really feels like they should’ve (at least) known not to do that.
I always find this claim a bit confusing: did we actually do those things? Are there some specific examples of doing this?
I can think of… the 80k interview and that’s about it? I guess engaging with the FTX Foundation was somewhat positive but I don’t think it was putting him on a pedestal. In fact when I look back I feel like a lot of the content linking Sam to EA came from people talking to Sam. I may well just not be remembering stuff though!
I don’t deny that Sam was perceived in that way, but it’s not clear to me that this was something that was done (even accidentally) by “EA leaders”.
ETA July: I regret posting the following comment for several reasons, partly because I got crucial information wrong and failed to put things into context and prevent misunderstandings. Please consider reading my longer explanation at the top of my follow-up comment here. I’m sorry to anyone I upset.
------------------------------------------------------------------
At EAG London 2022, they [ETA: this was an individual without consent of the organizers] distributed hundreds of stickers depicting Sam on a bean bag with the text “what would SBF do?”. To my knowledge, never before were flyers depicting individual EAs at EAG distributed. (Also, such behavior seems generally unusual to me, like, imagine going to a conference and seeing hundreds of flyers and stickers all depicting one guy. Doesn’t that seem a tad culty?)
On the 80k website, they had several articles mentioning SBF as someone highly praiseworthy and worth emulating.
Will vouched for SBF “very much” when talking to Elon Musk.
Sam was invited to many discussions between EA leaders.
There are probably more examples.
Generally, almost everyone was talking about how great Sam is and how much good he has achieved and how, as a good EA, one should try to be more like him.
These were not an official EAG thing — they were printed by an individual attendee.
Yeah it was super weird.
Ah thanks, I didn’t know that! Sorry, could have noticed my confusion here. I edited the above comment.
When I see agreevotes [racking up quickly on contextless statements] like above, I always feel weird. Do people agree that that counts as pedestalling by leaders? Or they agree that all that happened? IDK, but I feel weird because I actually think I disagree (with all those people I guess) that these are fair for the point being made:
I was at EAGL 22, but I never saw this, so it was not a universal handout. However, I remember ppl talking about it, and it was not by EA leaders or official conference ppl. It was (if I heard correctly) some kind of joke by some attendee(s). I still believe this because EAG conference staff are pretty serious about making the event professional, so even if they thought it was secretly funny I’d have a hard time believing they would do that. Worth noting I think this was also a meta-injoke about claims of EA being a cult, obviously by riffing on WWJD and giving “glorious leader” vibes. (Trying to avoid sounding harsh, but to whoever likes this type of thing, here’s a reason not to do rogue, injoke stuff like that at an official event. Now we are dealing with it as submitted evidence of potential leadership corruption a year later, and unfortunately it did make EA look more like a cult even though that was actually the punchline, not the truth. I imagine some attendees at the time felt icky about it too. Expect the injoke to make it to the outgroup, and anticipate how it will look. Save it for the unofficial afterparty, at most, please.)
Honestly that just seems normal to me, idk. I have met a couple other people who have profiles on that website. They weren’t being put on a pedestal, and 80K couldn’t be fairly seen as tying 80k (let alone EA’s) rep to theirs (because they aren’t billionaires who would assume that). But it did make sense to list Sam as a notable example for earning-to-give, which is a recommended career path. It isn’t like this was unwarranted use. Bad luck IMO. Seems like maybe what we are seeing in this instance is that if you want to use a billionaire like a normal example of something in a different talking point you are trying to make, society won’t “let” you in the long run, because the billionaire’s rep is actually bigger than yours and bigger than the rep of the point you are trying to make. So the billionaire’s rep will swallow yours up whether you meant it to or not. EAs would do well to keep this lesson in mind for future prominent people even if those people are angels-on-earth. Still bad luck I say, I mean keep in mind SBF was likely trying to actively fool people (including himself methinks) as to his competence. [Edit: And as Ubuntu adds below, they aren’t really EA leaders who had heard anything negative about SBF. Hence bad luck/big oof moment.]
This was discussed elsewhere better than I can (I will hunt for the link and edit in shortly [Edit: I didn’t want to dig for it so I just made this section longer]), but essentially anybody would do this for their friend or trusted acquaintance when they think something will be mutually beneficial. It was not in a professional EA capacity, and it was also in private text messages, not public. That it was shared publicly was tbh pretty uncool. I mean it’s obvious MacAskill misjudged Sam’s character/potential, but he actually wasn’t trying to recommend Sam publicly there, and this sounds more like “person-to-person, I think it’s very much worth you guys having a conversation! You care about the same stuff and I know that stuff is why you both are interested in Twitter! Give it a go!” than “I, an EA leader, vouch for Sam, also an EA leader” type of exchange. Will got duped, but I’m pretty unclear he was acting inappropriately here in his own personal life. I mean, when you have two fabulously wealthy acquaintances, and one wants to do a business deal with the other, you introduce them, and yes, you vouch for the first if you know them reasonably well, just so they give each other the time of day. There’s no expectation that a deal will move forward without due diligence occurring, or that your vouch will become law. Musk knows that Will is not a finance professional, so he should know that Will can only vouch for character. And no harm, no foul. But if there was harm for the EA brand here, it was by Musk making the private messages public, which, huh, I’d never have expected that if I was Will.
It looks to me like Musk did so for a bit of clout to show he is hard-to-dupe. [I was incorrectly pairing this with another Elon text but the rest stands.] I don’t think he wanted to make a dig at MacAskill for vouching, tbh. Musk’s financial professional (Michael Grimes) vouched for SBF too, with about the same level of confidence. If I were Musk I’d prefer that people I like and respect (which it sounds like Will is one) keep trying to make connections for me when they believe the connections are of potential high value, and I/Musk would still always do my own due diligence. So it feels kind of weird to me that EAs are against Will having tried to connection-make when maybe even Musk wouldn’t be so hard on him? [Edit: I now realize that maybe some people are upset that Will was cool with arranging for “possibly EA money” to be spent on Twitter. To that, it’s worth noting that EA doesn’t own its donors’ money, helping out your major donors and acquaintances in their goals is prosocial and normal, there are EAs who view social coordination and combatting misinformation as important cause areas, and it would have been a business move, not a donation.]
Sounds normal for a truly-massive donor who at least understood the movement or was thought to, and who people roughly enjoyed talking to as well. Again, keep in mind that SBF was likely actively trying to fool people here. I wonder if anyone reading this can say “No, having Sam involved in discussion would have just absolutely been a hard-no from me. No way he could convince me that he, a huge donor, deserves a seat at the table. And if anybody felt he gave them good input in the past… their opinions wouldn’t matter, I’d have said ‘No, don’t invite him.’ In fact even if he gave me good input over the years in my leadership, controlled half my funding, and he was possibly expecting an invite, I would definitely still never have invited him.” [This sounds so strong that it almost looks like I’m strawmanning the other side, I’m giving so little wiggle-room. But I do genuinely think to be able to judge leaders harshly here, you have to believe a claim as watertight as that quote would have been said by other experienced leaders, because I’m only confident a claim as watertight as that keeps competent actors and the rest from inviting Sam.] IDK I just find it quite surprising that readers are acting like disengaging with Sam, a major donor, was so black and white and easy. The majority of people got sucked in… it’s easy to say what should have happened now.
This perhaps yes. Lot of injokes and stuff in EA culture about SBF generally, too. But how much of this was grassroots in origination, because having a heckin billionaire in your extended crew is cool and interesting, vs how much of this was done by leaders publicly? Should people really not have felt excited and expressed it within their ingroup based on what they thought?
I’d be interested to see them. In general I feel like what I’m seeing listed here is pretty human and normal and I wouldn’t call it putting him on a pedestal, or tying EA’s rep to his (intentionally or even in ways ppl necessarily could have anticipated would end up very entangled), or elevating him as moral paragon publicly.
(ETA: Sorry for not engaging with everything you wrote. I’m short on time and I’ll try to elaborate on my views in a week or so.)
Just to clarify my position: I think it’s clear that we put SBF on a pedestal and promoted him as someone worth emulating, and I don’t really know what to say to someone who disagrees with this. (Perhaps you interpret the phrase “put someone on a pedestal” differently; yes, we didn’t build statues of SBF, I agree.)
But I also think that basically almost all of this has been completely understandable. I mean, guy makes 10B dollars and wants to donate it all? One needs to be deranged to not try to emulate him, to not want to learn from him and to not paint him as highly morally praiseworthy. I certainly tried emulating that and learning from SBF (with little success obviously). At the time, I didn’t think that we went too far. I even thought the sticker thing was kinda funny (if weird and inadvisable), but I didn’t really give it much thought at all at the time.
That’s fair but next time I strongly recommend you include context and thoughts so lurkers don’t latch onto what you say as proof that leaders or anyone else did anything unreasonable. Lilly’s comment is:
This comment has a negative connotation, and is arguably the opposite connotation of the comment you wrote just now. So when you jump in to provide examples of that without context, it appears you support the negative conclusion.
Additionally, maybe this is a cultural difference, but “put someone on a pedestal” is only used negatively where I am from. It’s about putting someone in a position above that where they’d naturally belong. I argue that if you think his treatment was normal, you also don’t think he was “put on a pedestal” in a colloquial interpretation.
Here’s a slightly different point I couldn’t quite word well before: From my perspective, I am not even sure that anything EA did about him was anything they didn’t do about other prominent people to the extent it made sense. Like, I don’t pay that much attention and suck at names tbh, but I still know with high confidence that essentially everything people name as proof that SBF was unjustly elevated, anything an EA leader did toward/about SBF, they also did toward/about many others they have respected over the years. I just don’t see it as unusual or bad (yet) at all. If ppl think that things like those named above count as “putting someone on a pedestal” and “tying EA’s rep to theirs” (implied=bad), then there are dozens, maybe 100+ who could be counted as such[1]. I honestly wonder if it just looks like SBF was boosted a lot because he is a billionaire (who wanted to be boosted) and the general public think that is so cool. And EAs do too; like, I think maybe EAs forget about other names who got boosted over the years but who just didn’t stick in our consciousness or public consciousness because they got little media fanfare? If the public had wanted to hear about something other than billionaires, wouldn’t other people have been mentioned much more? If people wanted to hear about the founding of GiveWell, wouldn’t Karnofsky and Hassenfeld have been boosted more than SBF? But we live in a bugged world where people want to hear about billionaires instead (and the public, or at least journalistic, consciousness may have been tying him to EA no matter what), and I think that would have shaped almost any person’s way of talking, including other electable leaders—leaders we might want to tell ourselves wouldn’t have done as Will did, but I’ll probably never feel confident of that.
So, yeah, I suppose the crux is that people think “I or [other leaders we can elect now] wouldn’t have spoken well of Sam if [we] had known what was said in 2018 or 2019 about Sam.” But CEA did at least do an internal investigation into CEA/Alameda in 2019 and yet here we are. So I find it a little weird to say that.
Especially expecting that Sam is quite manipulative and EA leaders are just a bunch of high-trust humans (who I wouldn’t change if I could press a button btw, because whatever way they think about things is literally why the movement got started). To me, idk, saying EA leaders made mistakes feels a bit like “you guys shouldn’t be the way you are as people, you should expect some people to try to grift” and I disagree with that.
Maybe we shouldn’t have such high-trust people in leadership? I’m not sold. Actually I’m not even sold we can find people who would have noticed and investigated SBF’s actual financial fraud who have anything close to a typical human psychology (Gladwell’s book “Talking to Strangers” has a great section about Bernie Madoff’s reception in this regard, but I digress). IDK. I feel so bad thinking that these people, who I know try so, so hard, are being told they made mistakes which basically amount to them being kind and excited about a person, one person among many others they have been kind and excited about. Maybe this time it wasn’t deserved, but, sigh, Will and the others are philosophers and altruists, not psychologists and business moguls who might be primed to spot a manipulative person or reputational risk. It feels to me, as of right now, like adding a new person(s) into the mix with more expertise in spotting manipulators and hazards makes more sense than removing “EA leaders” including Nick or even Will from their posts.
But I will wait to see from the investigation! I worry that submitting lists of things without context to show that “SBF was put on a pedestal” (whether or not you view that term with negative connotation) will lead to entrenched opinions, more upset, and possibly even more news pieces being written, which may not be right in the long run.
Like, there are 153 eps of the 80K podcast, the 80K website profiles have been rotated many times throughout the years, there have been so many people shouted-out by Will at various times in his talks and interviews—especially EAG talks, there are for sure dozens of prominent EAs who people bring up repeatedly in conversation, there are lots of EAs who have been featured in news profiles, lots of non-leader EAs and non-EAs who get invited to private discussions between leaders. I could go on.
ETA July:
I regret posting my comments for several reasons. I’m sorry to anyone I upset.
Specifically, I regret not putting more effort into ensuring that my first comment would not be misinterpreted, and into putting things into context, e.g., that “putting SBF on a pedestal”, if meaning something like “holding SBF up as a role model”, was—certainly for those who didn’t know him well in person!—in the vast majority of instances reasonable and understandable at the time, and I would have easily done the same! (Some things, like e.g. tying EA’s reputation to SBF to such a close extent, were perhaps not super wise, but much of this is probably hindsight bias.)
I feel also bad about mentioning the flyers, mostly because I got important information wrong which is a grave mistake in such a situation, and partly because my phrasing was too harsh/critical (if the person who created those flyers is ever reading this, I’m sorry, you had good intentions and it wasn’t a big deal at all!).
I wrote the comment because I was disconcerted by the original comment (and its initial high karma count) which seemed to seriously question whether we “elevated [SBF] as a moral paragon and someone to emulate”, or “tied EA’s reputation closely to his”, and asked for specific examples. I still feel like it’s a no-brainer that EAs, in general, obviously and understandably elevated SBF as a role model and someone worth emulating! Seriously questioning this still strikes me as defensive motivated reasoning and concerning. Many people seemed to agree with the commenter, which made me worry that EAs would refuse to learn any lessons from this whole scandal, which might risk repeating a similar monumental catastrophe. (Me being upset[1] is not meant to be an excuse but if anything a further reason why I should have written this comment differently. As a general rule, it’s simply bad to write comments when one is upset because it clouds one’s judgment and reduces compassion, and this is an obvious mistake which I should not have made.)
So why did the FTX scandal happen? One simplistic perspective is (ignoring many other, more important causal factors!): someone with dark triad/malevolent traits got more and more power and ended up doing something extremely bad. Other people did not realize that this person was malevolent, or suspected it and didn’t speak up (e.g., because of fear, miscalculation, motivated reasoning, or opportunism). I’ve seen that story before and it really shaped my outlook on life.
That’s why I wanted to make the following argument: let’s not put too much faith in the character judgment of the people who championed SBF (and have known him very well) going forward, to make sure that something like this doesn’t happen again. Let’s not be like “oh well, there is nothing we can learn from this, no need to change anything”. That does seem like a very important point to me and I stand by that.
Now, importantly, I wasn’t trying to imply that any EA leader knew about the fraud or did something illegal. I also wasn’t trying to imply that mistakes like ‘suboptimal character judgment’ are even remotely comparable to the mistakes that SBF made. Of course, it’s not even close. In some sense, it’s a minor mistake that probably more than 95% of people would have made (because lots of things would have to come together to not make such mistakes). In fact, in my experience, many amazing people don’t have great character judgment.
But on the other hand, it’s still substantial and worth keeping in mind and should be factored in when making, e.g., board decisions (as board members appoint executive directors and those should have good character) or when trusting these people’s character judgment in the future. (Also, I feel like some comments seemed to suggest that being naive and overly trusting is just cute but not worth worrying about which I don’t agree with.)
As I also wrote in the original comment, I’m not even sure that EA leaders, including Will, made any mistakes ex ante given the enormous uncertainty and complexity of the whole situation and all the important trade-offs involved. I do think that it’s plausible though that some mistakes were made, including significant ones.
I also regret having singled out Will and I’m sorry if this comment upset anyone. I worry that others may have interpreted my comment as trying to put all the blame on him, which I really didn’t want to do. I did it because, to my knowledge, Will was really the EA leader who championed SBF the most and had the closest personal connection to him (aside from people like Caroline, etc., of course). And generally, I think it’s valuable to give specific examples when possible. It’s important to note that many others were involved in this too and could have stepped in!
To be perfectly clear, I think EA leaders, including Will, have done tremendous good and worked very hard to make the world a better place. I don’t want to belittle their extraordinary contributions.
Last, I worry that my comment was interpreted as taking the side of EA critics which is not the case. I think that much criticism of EA and EA leaders in the media has been unfair and exaggerated.
There is more I could write about all of this but this issue is emotionally taxing and I already spent several days on this comment, and I’m trying to move on. (Several days for just writing this crappy comment? Yeah, most of this was just feeling guilty without being able to do anything else productive. This ties to the general issue of how much time to put into comments. FWIW, in the months before writing the comments in March, I was actively challenging myself to write comments more quickly (and often). In hindsight, this could have been a mistake since I may lack the necessary verbal intelligence to pull this off.)
--------------------------------------------------------------------
[Original comment.]
Thanks, these are good points.
I do think it’s plausible that (some!) EA leaders made substantial mistakes. Spotting questionable behavior or character is hard but not impossible, especially if you have known them for 10 years and work very closely with them and basically were in a mentee-mentor relationship (like e.g. Will, is my impression). I don’t fault other people, e.g. those who rarely or never interacted with SBF, for not having done more.
Either people ignored warning signs → clear mistake. Or they didn’t notice anything even though others had noticed signs (like e.g. Habryka) → suboptimal character judgment. I think the ability to spot such people and not let them into positions of power is extremely important.
Of course, the crucial question is what could have been done even if you knew 100% that SBF was not at all trustworthy. It’s plausible to me that not much could have been done because SBF had already accumulated so much power. So it’s plausible that no one made substantial mistakes. On the other hand, no one forced Will to write to Musk and vouch for SBF, which perhaps wasn’t wise if you have concerns about SBF. On the other hand, it’s perhaps also reasonable to gamble on SBF given the inevitable uncertainty about others’ character and the large possible upsides. Perhaps I’m just suffering from hindsight bias.
Also, just to be clear, I agree that much of the criticism against EAs and EA leaders we see in the media is unfairly exaggerated. I’m wary of contributing to what I perceive as others unjustly piling on a movement of moral activists, probably fueled by do-gooder derogation, and so on (as Geoffrey mentions in his comment).
Why have I been so upset? The usual. The ideals of EA are very close to my heart so it made me very sad to see so many people (outside of EA) hate on EA ideals and to ridicule so many important values and concepts. That’s a terrible sign for the long-term trajectory of humanity and it has reduced the global level of good-will, cooperation, and trust. It made many people more cynical about the very ideas of altruism and truth-seeking itself.
+1 to basically all of this and thanks for adding context to the stickers thing.
I also want to add—Again, Beckstead, MacAskill and Karnofsky are not 80k. So going back to the original claim that we’re discussing (and others like it I guess):
Well, “they” are not 80k, so I’m really not surprised 80k featured SBF positively on their website. “EA leaders” are not a single shadowy entity, they’re a group of individuals who get packaged together in a variety of combinations when people realise there are no adults in the room.
Sounds like an individual attendee might have done this. I don’t see this as a big deal. I don’t think that we should be so concerned about possible bad PR that we kill off any sense of fun in the community. I suspect that doing so will cost us members rather than gain us members.
[Edit: Okay it sounds like the stickers were done by attendees. That’s much less surprising.]
Woah what’s the story behind the stickers, what the hell? Is this a dank memes thing? I assume it’s meant to be funny but I don’t get it.
I still genuinely don’t know if the signed Huel thing was meant to be a joke.
Some specific examples of EA leaders putting SBF on a pedestal that I found with a bit of brief digging:
At the time FTX blew up, SBF was featured on 80k’s homepage. Also, if you clicked “start here” on that homepage (the first link aside from a subscription form) you were brought to an article that featured SBF as one of three individual profiles.
Both of these mentions linked to a more in depth profile of SBF that had been created in 2014 and regularly updated, and clearly “puts him on a pedestal” (“This approach — where he donates a significant proportion of his income to organisations aiming to make the world a better place as effectively as possible — is allowing Sam to have a pretty staggering impact.”)
When Will wrote about how the EA funding situation had changed, he praised SBF not just for his donations but also his personal virtue: “I think the fact that Sam Bankman-Fried is a vegan and drives a Corolla is awesome, and totally the right call.”
In Will’s appearance on the 80k podcast, he uses SBF as the exemplar of earning to give, and says convincing SBF to pursue ETG was “really the important impact.” He also uses SBF to illustrate “fat tails” of impact. On another 80k podcast, 80k staff talks about SBF to illustrate the impact of their 1:1 team.
In a talk at EAG London 2021, Ben Todd argues “that the recent success of Sam Bankman-Fried is an additional reason to aim high.” (Ben also mentions SBF in a variety of posts discussing EA’s funding levels, but that seems to me like an unavoidable part of discussing that issue and less like putting him on a pedestal.)
All those examples come from the period after concerns about SBF had been raised to EA leaders. Prior to that, there are plenty more examples especially for 80k (e.g. SBF was on the 80k homepage from late 2014 to late 2017.)
As far as I can tell, EA leaders started promoting SBF in 2014 or so, seeing him as a great example of altruistic career choice in general, and of ETG (a counterintuitive model of altruistic career choice that originated in EA) specifically. Then leaders kept promoting SBF despite the warnings they got from Alameda co-founders, and continued to do so until FTX blew up.
Yes, the Corolla comment looks less innocent if the speaker has significant reasons to believe Sam was ethically shady. If you know someone is ethically shady but decide to work with them anyway, you need to be extra careful not to make statements that a reasonable person could read as expressing a belief in that person’s good ethics.
I mean, that’s not how I read it. The whole paragraph is:
I can see how some people might read it that way though.
Yes, I think that him, e.g. being interviewed by 80K didn’t make much of a difference. I think that EA’s reputation would inevitably be tied to his to an extent given how much money they donated and the context in which that occurred. People often overrate how much you can influence perceptions by framing things differently.
I agree with what others have said re: pedestal, so am not going to produce more quotes or anecdotes. I stand by the claim, though.
I think people may have been inclined to put SBF on a pedestal because earning to give was the main thing people criticized about early EA. People were otherwise pretty supportive of early EA ideas; I mean, it’s hard not to support finding more cost-effective global health charities. When SBF emerged, I think this was a bit of a “see, we told you so” moment for EAs who had been around for a long time, especially because SBF had explicitly chosen to earn to give because of EA. So it wasn’t just: “look this guy is earning to give and has billions of dollars!” The subtext was also: “EA is really onto something with its thinking and advice.” He became a poster boy for the idea that we can actually intellectualize our way to making the world better (so fuck the haters).
I think a more plausible defense of senior EAs is not that this pedestal thing didn’t happen, but that (as @Stefan_Schubert suggests) it may not have made that much of a difference. EAs might well have rallied around SBF even if senior people hadn’t promoted him. And this is definitely possible, but I wonder if things would’ve played out pretty differently if senior EAs had been like “look, we’ll take your money, and we’ll recommend some great people to work for you, but we don’t want to personally serve on the board of FTX Foundation/vouch for you/have you on our podcast, etc because we have heard rumors about X, Y, and Z, and think they pose reputational risks to EA.”
Lastly: it looks like the three former Alameda employees accused SBF of having “inappropriate sexual relationships with subordinates” around the beginning of the #MeToo movement. Alameda launched in the fall of 2017 and the confrontation with Sam occurred in April of 2018. The NYT published its article about Harvey Weinstein on October 5th, 2017, and dozens of men were accused of harassment between then and February 2018. The fact that SBF’s alleged inappropriate sexual behavior occurred around the height of the #MeToo movement doesn’t make me think EA leaders had less of a reason to worry about the reputational risks of promoting him.
Hm I wouldn’t have thought of your second paragraph. I’m not sure I agree that was an intention, but interesting.
IDK, CEA did do an investigation in 2019 into CEA/Alameda relations, according to the news article, so I’m not sure (yet!) they behaved unreasonably here given the nature of the complaints made. (I’m also not sure they behaved reasonably). Somebody tried a bit to actually figure things out at least. And I prefer that than just saying “Hey, SBF, check out these rumors. Rather than try to figure out which side is right, we will do some sort of average thing where we take your money and help you out in some ways but not others, and possibly leave a lot of value on the table or keep a lot of risk, not sure which, but oh well”. That doesn’t seem like the optimal outcome for either possible situation.
I’m reminded of split and commit. If I see something that looks like a 10-ft alligator on my property, but it might also be a log, is it an optimal strategy to continue the yardwork anyway but give it a wide berth? Or am I going to investigate further to see whether it is an alligator (and if so call animal control and have it removed) or if it is a log (and if so I can mow my grass right up to the base, even take a rest on it if I want). It’s not a perfect analogy but you get the idea. [1]
Anyway, it looks possible that people thought about this to the extent it seemed reasonable at the time, given the scale of the complaints made (which the article admits never implied anything like what happened or even implied fraud for sure—perhaps lazy accounting that they’d hopefully grow out of as they professionalized, yes). They came to the wrong conclusions, but I might be okay with this tbh. (but we’ll see how reasonable the conclusion was)
Were the claims about inappropriate sexual relationships made either by the women themselves, or at least about nonconsensual relationships? Without commenting on how appropriate the relationships were otherwise, I’m not in favor of consensual relationships (consensual meaning, the women themselves would say they were consensual) being lumped in with episodes of harassment and the #metoo movement. You can’t really create a #metoo moment for someone else.
[Edit: Maybe if I try to make the analogy better: maybe 2019 CEA investigated to the extent they could (I very much doubt they were allowed to feel the actual shape of the alligator/see many of Alameda’s internal documents) and (reasonably?) decided that SBF was not either an alligator or a log, but an alligator statue (that people keep complaining is an alligator), or a dead alligator (that people were right to complain about before but it looks like things have changed), or a crippled alligator you aren’t worried about, or an otherwise-chill alligator protecting its babies which you don’t want to move. IDK. But then in 2022 we all learned that this was wrong too, and he was actually a frickin T-Rex pretending (excellently) to be an [alligator statue etc]. Because actually the fiasco that ended up happening was way out of scale with what even the complainants said, and the Time piece notes that in multiple places. Nobody expected a sneaky T-Rex!]
There is the whole vouching for SBF as prospective purchaser of Twitter:
You vouch for him?
Very much so! Very dedicated to making the long-term future of humanity go well.
How was he to know that was going to be made public? That’s not “to put this guy on a pedestal; to elevate him as a moral paragon and someone to emulate; to tie EA’s reputation so closely to his,” that’s “I think these two should talk and it seems like that comes down to how much I vouch for him. And honestly I do vouch for him in this context [given what I know at this point].”
@Michael_PJ offered a comment about “content linking Sam to EA.” That last sentence is hard to read as anything but.
One should know that conversations with someone as famous and unfiltered as Elon Musk about the year’s most-talked-about acquisition could go public. There are also other non-public boosts like the quote at the end of the article. But even if not, the private vouch still goes to @lilly’s point about why anyone would boost/vouch for SBF “knowing what it appears they knew then.”
Maybe he knew there was a chance his text would end up being publicly revealed in court (I wouldn’t, but okay), but that’s quite different from public promotion. And I wouldn’t consider this “content linking Sam to EA” either, and anyway the context of Michael_PJ using those words was the thing I quoted—that’s the relevant thing we’re discussing here. And again, the quote at the end of the article doesn’t read to me as “to put this guy on a pedestal; to elevate him as a moral paragon and someone to emulate; to tie EA’s reputation so closely to his” (although granted maybe “friend” was unnecessary for a simple “thanks for hosting”). And if I was him, I think I’d have vouched for SBF to Musk too: “You’re both cut-throat businessmen, both insanely good at making money, both very dedicated to making the long-term future go well...I think you’ll get on just fine.”
I think that’s a good example of “why would people do this given what they knew?”, I’m not sure it’s an example of pedestalising etc. I’m being a bit fussy here because I do think I’ve seen the specific claim that there was lots of public promotion of Sam and I’m just not sure if it’s true.
Fuss away. E.g.
Jack Lewars “to avoid putting single donors on a huge pedestal inside and outside the community” and again “putting [SBF] on a pedestal and making them symbolic of EA”
Gideon Futerman “making and encouraging Will (and I guess until recently to a lesser extent SBF) the face of EA”
tcheasdfjkl “while a lot of (other?) EAs are promoting him publicly”
Peter S. Park “But making SBF the face of the EA movement was a really bad decision”
Devon Fritz “EA decided to hold up and promote SBF as a paragon of EA values and one of the few prominent faces in the EA community”
Dean Abele “I don’t know if I should stay in EA. I would feel very sad if [Will] publicly praised someone who turned out to be morally bankrupt. Of course, everyone makes mistakes. But still, some trust has been lost.”
Peter Wildeford “The other clear mistake was promoting Sam so heavily as the poster child of EA.” [This comment is no longer endorsed by its author]
David_Althaus “I think it’s clear that we put SBF on a pedestal and promoted him as someone worth emulating, I don’t really know what to say to someone who disagrees with this.”
Habryka “Some part of EA leadership ended up endorsing SBF very publicly and very strongly despite having very likely heard about the concerns, and without following up on them (In my model of the world Will fucked up really hard here)” and again “[Will] was the person most responsible for entangling EA with FTX by publicly endorsing SBF multiple times”
Jonas Vollmer “while Nick took SBF’s money, he didn’t give SBF a strong platform or otherwise promote him a lot...So...Will should be removed”
Also the 80k interview was done by 80k...has anyone claimed that Rob Wiblin or anyone at 80k knew Sam was “sketchy”? Apparently Rob wasn’t even corrected about Sam’s non-frugality—he sounds kind of out of the loop.
I agree with your first half. I wonder if a startup or even a non-EA non-profit would be so self-flagellating for taking his money, even given they had heard some troubling reports. If not, I think EAs should chill out on thinking we could have been expected to do a deep investigation [Edit: apparently CEA did one on CEA/Alameda relations in 2019 but no comment yet on how it went] or hold off on taking money. I mean, everyone else had his expected net worth wrong too (billionaire lists, for example, and FTX’s own investors, who should have been much more interested in internals than donation recipients). [Some data/insider info on how much big charities like WWF or Doctors Without Borders investigate donors would be great here, but without seeing that I’m assuming it was normal to accept money from SBF]
But as for the idea he was framed as a moral paragon by leaders, idk. I never got that vibe. Was I missing it? It seems more like he was framed—by news outlets and everyman EAs, and maaaybe some EA leaders on occasion, but I actually don’t even remember this except from Sam’s own promos—as more of a cool-but-humble industrious guy than a moral leader or rep of the movement itself. I mean, did he ever speak at an EAG or something? What would it even look like if EA leaders were tying EA’s rep to his? Maybe he was mentioned in some (usually non-EA) places as a notable personality in the movement? But that doesn’t to me say “this guy’s put on a pedestal by leaders” and “emulate this guy” (in fact EAs wanted people to not emulate him and try direct work first). I honestly wonder if I missed something.
My weakly-held take is: You can’t help it if cool-seeming billionaires get fans, and it is hard to help it if those cool billionaires get associated with the things they fund and themselves talk about. Ex: Elon, who people associate with AI safety even though he has never worked on it himself. Associations in the eyes of the public, and news outlets/bloggers/Twitter bumping those associations, are gonna happen. I’m not sure EA leaders did anything to boost (or stem!) this effect (fine with being proven wrong though). I do hope/demand that next time, EA leaders are more protective of the EA brand and do try to stem this potential association effect. Like “Hey buddy, we already weren’t gonna put you on a pedestal for being associated with our brand, but actually you don’t get to do that either. Please keep involvement with us out of your public talks and self-promotion. We want to be known for the work we do, not who funds us.” Dustin does this (keeps EA out of his professional brand). And most wealthy people prefer to donate discreetly, so it was maybe a red flag that SBF leaned right into being associated with EA, and would be a red flag in future too. Idk.
[Edit: You might consider the last part negligence, that EA leaders didn’t give SBF a slap on the wrist for EA-associating. If so maybe you still aren’t happy with leadership. But I just want to flag that if that is what happened that is still much better than leaders actively boosting him (could be wrong that the latter happened though) and would likely warrant different response. I guess I view the former as “mistakes of medium-size (but small for most leaders due to diffuse responsibility if there was no one whose job it clearly was to talk to Sam about this), passively made, not-unusual-behavior” whereas I’d view active pedestalling or active tying-EA-rep-with-SBF to be “big mistakes, actively made, unusual behavior”]
How long do investigations like these typically take?
Feelings:
My patience is running out. If the response of EA leaders after the investigation is lacking, I’m not sure I would still want to be part of this community. I’m also not sure what to do or feel in the meantime while we wait for their responses.
I’m struggling to see how releasing information already provided to the investigation would obstruct it. A self-initiated investigation is not a criminal, or even a civil, legal process—I am much less inclined to accept it as an adequate justification for a significant delay, especially where potentially implicated people have not been put on full leaves of absence.
I’m guessing that the worry is that if Will said he thinks X then that might create pressure for the independent investigation to conclude X since the independent investigators are being paid by CEA and presumably want to be hired by other companies in the future.
Presumably they have interviewed Will and/or have done enough work to reliably figure out what he thinks secondhand.
Independent investigators have incentives both for and against whitewashing for whoever is paying the bills. A reputation for whitewashing causes a loss in interest by organizations who want/need outsiders to view the results as unbiased.
That doesn’t create the same pressure as a public statement which signals “this is the narrative”.
My interpretation of the refusal was that the investigation hadn’t got to hearing all that information yet.
Asking people to turn over relevant emails is close to day-one stuff. And if the investigators haven’t conducted significant enough interviews with Will and Nick yet to have figured out the gist of what Time figured out, this is going to be a very slow investigation.
Seems like we’re in for a very slow investigation. Per Will, looks like “a minimum of 2 months” before the investigation is completed.
That still implies it’s over half complete, I think? Rather unusual not to talk to the most knowledgeable people by that point.
I’m assuming part of it is that they didn’t want to bias other people’s recollections who may not have been spoken to, but that’s a moot point now with this article.
Yeah I would certainly think/hope investigators would’ve talked to the most knowledgeable people and already uncovered everything in the Time article.
Some points I think need mentioning:
(1) Everything we read here needs to be adjusted for hindsight bias. I have no idea what the base rate for messy breakups among early-stage start-ups is, but I assume it’s significant. So “had a messy start-up breakup” is not a super reliable signal of a future scammer.
(2) We have only heard one side of the story. We don’t know the reasons given by the people that didn’t quit. We don’t know how SBF explained the story at the time to Will. I think it’s likely that those details will make some of the EA leaders’ behaviour sound a bit more understandable.
That being said, some of the details reported in the article certainly sound concerning and give us significant reasons to believe in serious wrongdoing. All of this should be seriously investigated, and we need to think carefully about how we can avoid things like this in the future.
I agree with other commenters that a CEA response is long overdue. I understand that they are involved in a trial and have their reasons, but they should try to say as much as they can, since the downsides of remaining silent are significant.
What’s new here, I think, is not only the level of specificity in general but that the concerns included things that raised the specter of what was to come. I submit that refusal “to implement standard business practices,” concealment of “dangerous and egregious shortcuts,” lack of “a distinction between firm capital and trading capital,” especially when combined with evidence of general lack of ethics, would make someone manifestly and obviously unfit to serve as a de facto trustee for over a million depositors of [edit: eleven!] figures in deposits. There are also references to what sounds a lot like repeated allegations of fraud against investors—I know investors properly get less sympathy than depositors, but that would still be serious criminal conduct.
If one accepts the statements in the article and draws certain inferences from them—mainly that leaders knew about as much as was in the article—then it seems more likely to me than not that those leaders knew or should have known there was a substantial probability SBF had committed or would commit significant fraud of some sort (although probably not fraud against depositors specifically).
Adding: I think this article also raises my level of concern that no one seems to have been looking out for the grantees. I’d like to think that this information would have caused people to be much more careful and self-protective around SBF/FTX adjacent stuff at a minimum, like incorporating an organization to receive any grants rather than exposing oneself to liability. But did grantees know about these concerns so they could protect themselves?
Some miscellaneous takeaways from this article…
New (to me) information included:
The specificity of Naia’s allegations (the part about “Will basically threatened Tara” seems particularly important/bad)
The quotes from the planning document for the meeting between SBF and other Alameda execs. These give important specifics, and seem highly credible (since the four other members of the management team all agreed with them).
Holden being one of the people who was told about concerns with Sam (not shocking to me, and I think Holden was in a worse position to act on this info than e.g. Will, but still an update)
CEA doing “an internal investigation relating to CEA and Alameda” sometime in 2019. I’d love to know more about this, e.g. did the full board review the findings? Who besides Will conducted the investigation?
Thoughts after reading the article and comments:
I agree with Nathan that “The 80k interview feels even worse researched/too soft than I previously thought”
I think EA dodged a bullet in that FTX collapsed relatively soon after CEA had shored up a deficiency in PR expertise (which was done largely in anticipation of Will’s book release, as I understand it). I imagine that CEA’s response, while imperfect, would have been significantly worse if that PR capacity were not in place. I also think it’s fair to wonder whether CEA leadership should have added PR expertise sooner given what they apparently knew about SBF (even if you think getting in bed with a possibly sketchy billionaire is the right move, giving yourself more protection in case he turns out to be really sketchy seems a reasonable way to hedge risk).
I’m really glad Ben West is planning to write something in the relatively near future, as I think some sort of communication from CEA/EVF leadership is overdue.
To the contrary, this strikes me as really unspecific. What does it mean to “basically threaten” someone? What was the implied consequence of going against Will and/or Sam? What did Will say? The article raises a lot of questions.
Yeah, while I thought Naia provided a bunch of specifics the thing about Will threatening Tara is definitely an area where I’d find more specifics very informative.
Personally, this solidifies my negative update over the past 6 months on the judgment and trustworthiness of the bulk of senior EAs. I mean trustworthiness on the basis of competence, not motive.
There are some very competent leaders within EA so I don’t think we should make sweeping assumptions. I think we need to make EA a meritocracy
Sure, but my impression of the number of them and their competence has decreased. It’s still moderately high. And meritocracy cuts both ways: meritocracy would push harder on judging current leaders by their past success, i.e. harshly, and not be as beholden to contingent or social reasons for believing they’re competent.
Note Will MacAskill has written a shortform today saying among other things: “I won’t comment publicly on FTX at least until the independent investigation commissioned by EV is over. Unfortunately, I think that’s a minimum of 2 months, and I’m still sufficiently unsure on timing that I don’t want to make any promises on that front. I’m sorry about that: I’m aware that this will be very frustrating for you; it’s frustrating for me, too.”
Was this all private individual warnings, or were there also public components?
At least I know of basically nothing public until FTX became very high-profile, and then it was so high-profile that it was kind of expected you would get some public warnings anyway even if they were false. I do think some of the public warnings in 2021 and early 2022 were actually pretty real and played a role in some of my beliefs in early 2022, but I think you did actually have to dig into the details of them to find it out, and I wouldn’t have bothered to even look if I hadn’t heard rumors of things, and even then, none of the public warnings were that informative.
I very much want this not to be true, but I suspect that if the Time editorial staff has done their due diligence, the odds of that are low. Thus it needs to be said:
Anyone who was publicly proclaiming to care about long-termism but then secretly ignoring the broken step that was SBF—effectively trading ethics and morality for money and power—is not only a hypocrite but has done far more damage to EA than their lifetime contributions could ever offset.
I have argued previously that conflating the actions of a person with the values of a group is a fallacy. However, when a subset of that group—especially ones in leadership roles—conspires to bury unseemly information, it starts to look like the whole group is specious, tainted, and untrustworthy.
I look forward to the independent investigation report. But as resilient as I’ve been to the SBF implosion thus far, this is making me seriously reconsider how closely I want to associate with EA.
From my DMs:
”Publicly commenting on who knew what when means a significant risk of being subpoenaed or having to appear in court as a witness, which would be a huge hassle and time cost for whoever speaks out (and possibly also for their employer and other people they’re connected to). I believe this is one of the main reasons why senior figures aren’t commenting on this.”
In most cases, I think the being-dragged-into-legal-proceedings risk of a random person speaking out is considerably less than this quote would imply. First, you’d need a litigant who cared enough about what you had to say to issue a subpoena—the FTX debtors, presumably. Even then, they would only care if they were litigating a suit for which the information would be relevant and which didn’t settle quickly enough. Unless the person on the other end denied the facts, it’s doubtful they would want to burn one of their limited number of depositions on a third party who heard something. And they’d likely get any relevant e-mails from the other person anyway. There are restrictions on subpoenas—for instance, in the US federal context, they generally cannot command attendance more than 100 miles away from where the person lives, regularly conducts business, etc. [FRCP 45; FRBP 9016.] If you’re in a non-US country and are not an employee or agent of a party, international process is often very, very slow, to the point that it is a last resort for getting information like that.
None of that is legal advice, and people who have questions about their potential exposure for speaking out should consult with an appropriate lawyer.
“which would be a huge hassle and time cost for whoever speaks out”
Wait—so if leaders were complicit, yet admitting to that would be a hassle, then it’s better that they not mention their complicity? I’m afraid of a movement that makes such casual justifications for hiding malefactors and red flags. I’m going to keep showing outsiders what you all say to each other! :O
I think the quote is saying that speaking out would saddle the person who spoke out about what someone else knew with significant costs. Although I think the quote overstates the risk, I don’t think your reasoning holds. It’s not clear to me why anyone has a duty to voluntarily burden themselves with costs to aid the litigation interests of a third party.
If the statement is actually about a senior leader’s own knowledge, and their organization received significant funds from FTX/Alameda-linked sources, they are very likely going to be involved in litigation whether they speak or not.
Is this the time to bring up better governance again? Why do we allow CEA to be part of a foundation, controlling community assets without community oversight?
If there was functional community oversight (like e.g. EA Germany has), we would know exactly why SBF was forced out of EVF (then CEA) board.
Can you say more about how this works with organizations like EA Germany? I don’t know anything about SBF leaving the CEA board, but here’s a plausible case:
1. Some people have concerns about him.
2. They talk quietly among each other.
3. Someone respected quietly takes SBF aside and says they think he should resign.
4. He resigns, looking externally like anyone else who leaves the board, including for reasons like “I don’t have time for this now that my company is growing rapidly”.
In this case I think we probably wouldn’t learn about 1-3 or the motivation for 4 even if there was community oversight, unless the people with the concerns or the people they shared them with decided to make them public. And this doesn’t seem like a question of governance?
I do see how governance might affect what the community learned if he didn’t want to step down and was voted out, but even people who don’t want to leave projects often will do it without being officially forced to go if they can see how the official process would turn out and leaving voluntarily lets them save face or get concessions.
EA Germany board members are voted into office by members (aka the community). The board is formally responsible to ordinary community members, and they need to explain their actions. So the remaining board members would at the very least face questions. It’s not a complete fix of course, but I think it could have helped.
Would the board normally face questions if someone left? Especially if they [edit: as in “the person leaving”] clearly had other big things they were working on?
@Jeff Kaufman technically you might be right that even if board members were voted in and were responsible to members, they wouldn’t have to disclose their information or explain their actions.
But if you are voted in and therefore formally accountable to members, you are likely to both feel obligated to explain things like this, and also be motivated to explain important goings-on to keep your support and ensure you have a mandate in the community to stay on the board.
Whether we agree or disagree with boards being more democratic (a different question) assuming a board is voted in it’s hard to imagine they wouldn’t be far more likely to publicly explain their actions and face questions.
I’m also often confused by this common argument I see on the EA Forum that people might not have time, or might consider other things more important than responding to critical governance issues or decision making—I remember this argument touted on the Open Phil thread. It seems a convenient excuse for not publicly responding to issues, which seems like a key function of any management body. There may be other good legal or confidentiality reasons not to respond, but I find the “working on other things” or “not enough time” reasoning weak.
Sorry, edited my comment to clarify what I meant by being busy here. The idea isn’t that people might not have time to respond to questions, it’s that the CEO of a fast growing startup deciding they don’t have time to be on the board of a foundation isn’t likely to generate questions.
That sounds like a very sensible set-up (aside from anything else, presumably it significantly lessens the chances of a “surprise! we bought an abbey” moment).
Personal feelings: I thought Karnofsky was one of the good ones! He has opinions on AI safety, and I agree with most of them! Nooooooooooo!
Object-level: My mental model of the rationality community (and, thus, some of EA) is “lots of us are mentally weird people, which helps us do unusually good things like increasing our rationality, comprehending big problems, etc., but which also have predictable downsides.”
Given this, I’m pessimistic that, in our current setup, we’re able to attract the absolute “best and brightest and also most ethical and also most epistemically rigorous people” that exist on Earth.
Ignoring for a moment that it’s just hard to find people with all of those qualities combined… what about finding people with actual-top-percentile any of those things?
The most “ethical” (like professional ethics and personal integrity, not “actually creates the most good consequences”) people are probably doing some cached thing like “non-corrupt official” or “religious leader” or “activist”.
The most “bright” (like raw intelligence/cleverness/working-memory) people are probably doing some typical thing like “quantum physicist” or “galaxy-brained mathematician”.
The most “epistemically rigorous” people are writing blog posts, which may or may not even make enough money for them to do that full-time. If they’re not already part of the broader “community” (including forecasters and I guess some real-money traders), they might be an analyst tucked away in government or academia.
A broader-problem might be something like: promote EA --> some people join it --> the other competent people think “ah, EA has all those weird problems handled, so I can keep doing my normal job” --> EA doesn’t get the best and brightest.
I think a common maladaptive pattern is to assume that the rationality community and/or EA is unusually good at “increasing our rationality, comprehending big problems”, and I really, really, really doubt that “the most “epistemically rigorous” people are writing blog posts”.
Although my feelings are broadly more optimistic about EA’s ability to improve its institutions and move forward than many stances I’ve seen, the entire trajectory of the FTX blowup has made me prefer the idea of a career that takes me farther away from insular EA bubbles, rather than a career that immerses me in EA bubbles. It’s not a huge change—I just used to think that I wouldn’t particularly care if I stayed in an EA bubble professionally, and now that seems unwise, making some career choices even more favorable than previously. This article specifically didn’t cause the change, but it’s something I’ve been mulling over for a while and I think this simply crystallized the thought for me.
Edited to just be the markets.
Prediction markets are pretty well calibrated and help us see a median view on things.
Before this article was published the below market was at 15%. Though it should be noted that I am the second largest holder of “No” and Naia, featured in the article, is the third largest holder of “Yes”.
We can see how much this shifts the market.
Also:
Finally: will SBF plead guilty? If he doesn’t, then there is going to be a public trial, and that’s going to be pretty punishing for us.
@Nathan Young This is interesting, but I’m struggling to understand how it is helpful or would change things. Can you help me understand?
I’m curious if this:
is another update for you in the direction of not being able to communicate freely given that:
(E.g. Lawrence Newport says, “I will now have to update against how open I am in discussions with other EAs—which is a shame as the intellectual freedom, generosity, honesty and subtlety are what I love about this community—but it seems I will have to consider “what may a journalist think of this if this person leaked it?” as a serious concern.” Although I guess this case is more relevant to senior EA leaders being able to communicate freely with one another rather than the rest of us?)
I don’t know what the emails contained. My general stance:
Only share private messages if there was serious wrongdoing
I would support those who warned sharing their emails
I would not support general email sharing
I sense Lawrence and I disagree on this a bit.
I think youre right to think about the specific email contents here. For example, disclosing an email that shows Person A was aware of certain facts generally poses fewer concerns about interference with deliberative processes and reasonable expectations of trust than does disclosure of deliberations, evidence relating to an individual’s thinking or internal mental processes, etc.
Agree. I thought the article implied that more was shared with TIME than just the emails from the people concerned to senior EA leaders, given their use of “among” rather than “to”, and their quoting a reply from a leader. (I wouldn’t have mentioned it if I’d thought otherwise, although it’s encouraging that you and Nathan Young seem to be implying that you didn’t interpret that part like I did—I was genuinely curious to get Nathan’s take.)
Still, it’s one thing to share private emails from others with a journalist and another for the journalist to quote those emails extensively—the latter would have been a much larger breach of trust.
Edit: I now see that the person quoted is the person raising concerns, not the leader—the person described the leader’s response as ‘dismissing it as a rumor.’