Personal feelings (which I don’t imply are true or actionable)
I am annoyed and sad.
I want to feel like I can trust that the leaders of this community are playing by a set of agreed rules. E.g. I want to hear from them. And half of me trusts them and half feels I should take an outside view that leaders often seek to protect their own power. The disagreement between these parts causes hurt and frustration.
I also variously feel hurt, sad, afraid, compromised, betrayed.
I feel ugly that I talk so much about my feelings too. It feels kind of obscene.
I feel sad about saying negative things, especially about Will. I sense he’s worked really hard. I feel ungrateful and snide. Yuck.
Object level
I don’t think this article moves me much. Edit: this article moves me a bit on a number of important things:
We have some more colour around the specific warnings that were given
It becomes much more likely that MacAskill backed Bankman-Fried in the aftermath of the early Alameda disagreements, which was ex ante dubious and ex post disastrous. The comment about threatening Mac Auley is very concerning.
I update a bit that Sam used this support as cover
I sense that people ought to take the accusations of inappropriate sexual relationships more seriously to be consistent, though personally I am uncertain because we don’t have much information
Edit: mainly after talking to Naia in the comments, I update towards being uncertain about whether we knew SBF was unusually badly behaved (from being confident he wasn’t). I.e. maybe we did have the information required to be pretty sure he wasn’t worth funding, or that we should have kept him at arm’s length. As I say, I am uncertain, but previously I dismissed this
The 80k interview feels even more poorly researched and soft than I previously thought
I still sense that core EAs take this seriously
I still think they don’t think they can talk
I still don’t understand why they can’t give a clear promise of when they will talk, and the lack of this makes me trust them less
I think we had lots of info that Sam was a bit dodgy before the FTX crash, but that this was not above the normal levels of many CEOs of rapidly growing businesses (I have read the most about Google’s early days, and it was very shifty)
Perhaps EA should have higher standards, but I sense not.
I still think that we should have been much more careful linking ourselves reputationally to FTX
I think the big thing to note here is that even those who saw Sam at his worst did not expect the FTX crash, so I guess the question is “should Sam have been lauded, given his early behaviour at Alameda?”. I think no, but given what we knew I am uncertain whether he should have been condemned either
I think not talking while there is an investigation is reasonable
I have made both criticisms and defences of MacAskill and Beckstead and stand by them
I still think they are both very talented, perhaps more so as a result of the growth and wisdom this will engender in them (I have often thought it was dumb that people remove leaders who make mistakes). Edit: though this article does add additional concerns
I would still like an argument that they shouldn’t be removed from boards, when almost any other org would. I would like the argument made and seen to be made.
I have noticed how hard it is to talk publicly about these things. Recently I’ve updated more towards thinking that there just are emotional, social and some career costs (and benefits) to trying to have accurate semi-public discussions about these things. People DM me. I hear people are annoyed, I build both social credit and debt. I think less than some say, but more than none.
I cannot deny that I am tempted to moderate my comments so that people will like me, and probably do so a bit
Missing context
I have a reasonable amount of time for the notion that EA leaders should have an independent investigation and I don’t think the article gives that enough credit
Many business leaders are disagreeable people who do grey things. Uber’s activities were deliberately illegal in many countries and I probably on balance support that. Edit: I am less in agreement with my tone here. The article mentions this, but in my opinion it should be written in big letters at the top that:
None of the early Alameda employees who witnessed Bankman-Fried’s behavior years earlier say they anticipated this level of alleged criminal fraud. There was no “smoking gun,” as one put it, that revealed specific examples of lawbreaking. Even if they knew Bankman-Fried was dishonest and unethical, they say, none of them could have foreseen a fraud of this scope.
If even they didn’t think this, I don’t think we should be surprised that core EAs didn’t either.
Relevant things people may or may not have known:
In the early days of Alameda, SBF reneged on deals with other EAs and had very poor financial management. Many core EAs knew this
Other:
My gut says that Naia Bouscal is telling the truth, since before I knew her in relation to this, I thought she was a pretty straight-shooting Twitter account.
Edited to combine two comments (one personal, one more general) into one and add points as I think of them.
this was not above normal levels for the CEO of a rapidly growing business
It was, and we explicitly said that it was at the time. Many of those of us who left have a ton of experience in startups, and the persistent idea that this was a typical “founder squabble” is wrong, and to be honest, getting really tiresome to hear. This was not a normal startup, and these were not normal startup problems.
(Appreciate the words of support for my honesty, thank you!)
You may indeed believe that and have said that, but the question for us is: Was it reasonable for EA leaders to think this degree of bad behaviour was particularly out of the ordinary for the early days of a startup?
To take Nathan Young’s four examples, looking at some of what major news outlets said prior to 2018 about these companies’ early days...it doesn’t seem that unusual? (Assuming we now know all the key accusations that were made—there may of course have been more.)
Facebook
“The company and its employees have also been subject to litigation cases over the years...with its most prominent case concerning allegations that CEO Mark Zuckerberg broke an oral contract with Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra to build the then-named “HarvardConnection” social network in 2004, instead allegedly opting to steal the idea and code to launch Facebook months before HarvardConnection began… The original lawsuit was eventually settled in 2009, with Facebook paying approximately $20 million in cash and 1.25 million shares.” (Wikipedia, referencing articles from 2007 to 2011)
“Facebook co-founder, Eduardo Saverin, no longer works at Facebook. He hasn’t since 2005, when CEO Mark Zuckerberg diluted Saverin’s stake in Facebook and then booted him from the company.” (Business Insider, 2012)
“we also uncovered two additional anecdotes about Mark’s behavior in Facebook’s early days that are more troubling...— an apparent hacking into the email accounts of Harvard Crimson editors using data obtained from Facebook logins, as well as a later hacking into ConnectU” (Business Insider, 2010)
Google
“Asked about his approach to running the company, Page once told a Googler his method for solving complex problems was by reducing them to binaries, and then simply choosing the best option,” Carlson writes. “Whatever the downside he viewed as collateral damage he could live with.” That collateral damage sometimes consisted of people. In 2001, frustrated with the layer of managers overseeing engineers, Page decided to fire all of them, and publicly explained that he didn’t think the managers were useful or doing any good.” (Quartz, 2014)
“Page encouraged his senior executives to fight the way he and Brin went at it. In meetings with new hires, one of the two co-founders would often provoke an argument over a business or product decision. Then they would both sit back, watching quietly as their lieutenants verbally cut each other down.” (Business Insider, 2014)
Gates
“Allen portrays the Microsoft mogul as a sarcastic bully who tried to force his founding partner out of the firm and to cut his share in the company as he was recovering from cancer.” (Guardian, 2011)
“...he recalls the harsher side of Gates’s character, anecdotes from the early days...Allen stopped playing chess with Gates after only a few games because Gates was such a bad loser he would sweep the pieces to the floor in anger; or how Gates would prowl the company car park at weekends to check on who had come in to work; or the way he would browbeat Allen and other senior colleagues, launching tirades at them and putting them down with the classic denigrating comment: ‘That’s the stupidest fucking thing I’ve ever heard!’” (Guardian, 2011)
“They met in 1987, four months into her job at Microsoft...meeting her in the Microsoft car park, he asked her out” (Independent, 2008)
Bezos
Obviously his treatment of workers is no secret (and it seems natural for people to think he’s probably always been this way)
It’s not surprising to me if EA leaders thought most startups were like this—we just only hear stories about the ones that make it big.
I’ve only worked for one startup myself and I wasn’t privy to what went on between executives, but: one of them said to a (Black, incidentally) colleague upon firing him “You’ll never work again,” another was an older married man who was grinding up against young female colleagues at an office party (I actually suggested he go home and he said, “No—I’m having fun” and laughed and went back to it), and another made a verbal agreement with some of us to pay us overtime if we worked 12-hour days for several weeks and then simply denied it and never did. [edit: I should clarify this was not an EA org]
Thanks for giving honest quotes on a serious crime. On balance I’m in favour of your giving quotes here and that can’t have been easy (though I feel the article is inaccurate in tone).
I’m sad to hear that this is tiring, though I still am gonna say things I think. Feel free to DM me if you think I’m wrong but don’t want to engage publicly.
My error. By “normal” I don’t mean good, I mean “not unusual”.
I sense there was this level of concern externally about Facebook, that Google did some pretty shifty shit, and that Gates and Bezos were similarly cutthroat.
Yes, I disagree. My understanding of what happened at each of those four companies in the early days is qualitatively, categorically different from what happened at Alameda.
It really must feel awful to report serious misconduct and have it not be taken seriously. I’ve had a similar experience and it crushed me mentally.
I’ve been thinking about this situation a lot. I don’t know many details, but I’m trying to sort through what I think EA leadership should have done differently.
My main thought is, maybe in light of these concerns, they should have kept taking his money, but not tied themselves to him as much. But I don’t know many details about how they tied themselves to him. It’s just that handling misconduct cases gets complicated when the person accused is one of the 100 richest people in the world. And while it’s clear Sam treated people poorly, broke many important rules and lied frequently, it was not clear he was stealing money from customers. And so it just leaves me confused. But thank God I am not in charge of handling these sorts of things.
I know it’s also not your responsibility to know what to do in situations like this, but I’d be curious to hear what you wished EA leadership/infrastructure had done differently. I think that might help give shape to my thoughts around this situation.
I don’t know if I’m communicating super clearly here, so I want to clarify: this is not meant as a critical comment at all! I hope it doesn’t read as downplaying your experience, because I do feel super alarmed about everything and get the sense EA fucked up big here. I feel fully in support of you here, but I’m worried my confusion makes that harder to read.
Retracting because on reflection I’m like, no one knew he was stealing funds, but I think leadership knew enough of the ingredients to not be surprised by this. It’s not just Sam treating employees poorly; leadership heard that he would lie to people (including investors), mix funds, and play fast and loose with the rules. They may not have known the disastrous ways these would combine. Even so, it seems super bad, and while I’m still confused as to what the ideal way to handle it would have been, it does seem clear to me it was egregiously mishandled.
One important thing to note is that when we first warned everyone, he was not yet one of the richest 100 people in the world. If they had taken our warnings seriously at the start, he might never have become that rich in the first place.
Agree. And also worth noting it seems like he may have never actually been that rich, but just, you know, lied and did fraud.
The general thing I’m hearing is, with a lot of people who do misconduct, you/CEA will hear about this misconduct relatively early on, and they should take action before things get too large to correct. That, early & decisive action is important. Leadership should be taking a lot more initiative in responding to misconduct.
This tracks with my experience too. I’ve reported professional misconduct, seen it not be taken seriously, and watched that person continue to gain power. The whole experience was maddening. So, yeah, +1 to early intervention following credible misconduct reports.
Lol not you. I deleted most of the detail I included in that comment because I feel like it’s distracting from the SBF discussion (like, this convo should not be used as a soapbox for me), and the case has recently been reopened (which means it’s probably best if I don’t talk about it, and also there might be a good outcome). And I also just worry about pissing people off.
It’s also like, what are people supposed to do with an anonymous comment with a very vague allegation.
Well I’d still like to know. My general stance is that information about misdeeds should more often be public. I wish that I’d known what many knew about SBF.
Hm maybe, I’m not sure. I like to have a professional atmosphere, and public sharing of misdeeds can lead to a culture of like gossip. But, I think it is appropriate to speak publicly about it if the situation was mishandled (in my case, unclear as it’s been reopened) or if the person should be blacklisted (I do not think this is the case here).
I think the main problem being faced again and again is that internal reporting lacks teeth.
I think public reporting is an inadequate alternative. It’s a big demand to ask people to become public whistleblowers, especially since most things worth reporting aren’t always black and white. It’s hard to publicly speak out about things if you’re not certain about them (eg because of self-doubt, wondering if it’s even worth bothering, creating a reputation for yourself, etc).
Additionally, the subsequent discourse seems to put additional burden on those speaking out. If I spoke up about something just to see a bunch of people doubt what I’ve said is true (or, like in previous cases, have to engage with the wrongdoer and proofread their account of events) I’d probably regret my choice.
I think that the wiki could solve this: having public records that someone hard-nosed (like me) could write on others’ behalf.
I know that my messing with prediction markets around this hasn’t always gone well (sorry) but I think there is something good in that space too. I think Sam’s “chance of fraud” would have been higher than anyone else’s.
I don’t think gossip ought to be that public or legible.
Firstly, I don’t think it would work for achieving your goals; I would still hesitate about having my opinions uploaded without feeling very confident in them (rumours are powerful weapons and I wouldn’t want to start one if I was uncertain).
Secondly, I don’t think it’s worth the costs of destroying trust. A whole bunch more people will distance themselves from EA if they know their public reputation is on the line with every interaction. (I also agree with Lawrence on the Slack leaks, FWIW).
I see why you might want public info (akin to scandal markets) when people are more high-profile, but I don’t think Sam Bankman-Fried would have passed that bar in 2018.
I would still like an argument that they shouldn’t be removed from boards, when almost any other org would. I would like the argument made and seen to be made.
Here’s my tentative take:
It’s really hard to find competent board members that meet the relevant criteria.
Nick (together with Owen) did a pretty good job turning CEA from a highly dysfunctional organization into a functional one during CEA’s leadership change in 2018/2019.
Similarly, while Nick took SBF’s money, he didn’t give SBF a strong platform or otherwise promote him a lot, and instead tried to independently do a good (not perfect, but good enough!) job running a philanthropic organization. While SBF may have wanted to use the philanthropy to promote the FTX/SBF brand, Nick didn’t do this. [Edit: This should not be read as me implying that Will did those things. While I think Will made some mistakes, I don’t think this describes them.]
Continuity is useful. Nick has seen lots of crises and presumably learnt from them.
So, while Will should be removed, Nick has demonstrated competence and should stay on.
(Meta note: I feel frustrated about the lack of distinction between Nick and Will on this question. People are a bit like “Will did a poor job, therefore Nick and Will should be removed from the board.” Please, discuss the two people separately.)
Thanks for making the case. I’m not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I’m highly confident is false.
Namely—it isn’t hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I’ve recruited ~20 very good Board members in my career and have never run an open process that didn’t find at least some qualified, diligent people, who did a good job.
EA makes it hard because it’s weirdly resistant to looking outside a very small group of people, usually high status core EAs. This seems to me like one of those unfortunate examples of EA exceptionalism, where EA thinks its process for finding Board members needs to be sui generis. EA makes Board recruitment hard for itself by prioritising ‘alignment’ (which usually means high status core EAs) over competence, sometimes with very bad results (e.g. ending up with a Board that has a lot of philosophers and no lawyers/accountants/governance experts).
It also sometimes sounds like EA orgs think their Boards have higher entry requirements than the Boards of other well-run charities. Ironically, this typically produces very low quality EA Boards, mainly made up of inexperienced people without relevant professional skills, but who are thought of as ‘smart’ and ‘aligned’.
Of course, it will be hard to find new Board members right now, because CEA’s reputation is in tatters and few people will want to join an organisation that is under serious legal threat. But it seems at best a toss up whether it’s worth keeping tainted Board member(s) because they might be tricky to replace, especially when they have recused themselves from literally the single biggest issue facing the charity.
And even if one really values “alignment,” I suspect that a board’s alignment is mostly that of its median member. That may have been less true at EVF where there were no CEOs, but boards are supposed to exercise their power collectively.
On the other hand, a board’s level of legal, accounting, etc. knowledge is not based on the mean or median; it is mainly a function of the most knowledgeable one or two members.
So if one really values alignment on say a 9-member board, select six members with an alignment emphasis and three with a business skills emphasis. (The +1 over a bare majority is to keep an alignment majority if someone has to leave.)
You seem to imply that it’s fine if some board members are not value-aligned as long as the median board member is. I strongly disagree: This seems a brittle setup because the median board member could easily become non-value-aligned if some of the more aligned board members become busy and step down, or have to recuse due to a COI (which happens frequently), or similar.
I’m very surprised that you think a 3 person Board is less brittle than a bigger Board with varying levels of value alignment. How do 3 person Boards deal with all the things you list that can affect Board make up? They can’t, because the Board becomes instantly non-quorate.
I expect a 3-person board with a deep understanding of and commitment to the mission to do a better job selecting new board members than a 9-person board with people less committed to the mission. I also expect the 9-person board members to be less engaged on average.
(I avoid the term “value-alignment” because different people interpret it very differently.)
On my 6⁄3 model, you’d need four recusals among the heavily aligned six and zero among the other three for the median member to be other; three for the median to be between heavily aligned and other (a small illustrative sketch of this arithmetic follows below). If you’re having four of six need to recuse on COI grounds, there are likely other problems with board composition at play.
Also, suggesting that alignment is not the “emphasis” for each and every board seat doesn’t mean that you should put misaligned or truly random people in any seat. One still should expect a degree of alignment, especially in seat seven of the nine-seat model. Just like one should expect a certain level of general board-member competence in the six seats with alignment emphasis.
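To make the recusal arithmetic above concrete, here is a minimal illustrative sketch in Python (my own toy model of the hypothetical 6/3 board, not anything from the thread):

```python
# Toy model of the hypothetical 9-member board discussed above:
# six seats chosen with an alignment emphasis, three with a skills emphasis.
def board_median(aligned_recused: int, other_recused: int,
                 aligned_seats: int = 6, other_seats: int = 3) -> str:
    """Describe the median voting member once recusals are accounted for."""
    a = aligned_seats - aligned_recused  # aligned members still voting
    o = other_seats - other_recused      # other members still voting
    if a > o:
        return "aligned"
    if a == o:
        return "split evenly (median sits between the two groups)"
    return "other"

# Reproduces the figures in the comment above:
print(board_median(aligned_recused=4, other_recused=0))  # "other"
print(board_median(aligned_recused=3, other_recused=0))  # "split evenly ..."
print(board_median(aligned_recused=0, other_recused=0))  # "aligned"
```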
I think 9-member boards are often a bad idea because they tend to have lots of people who are shallowly engaged, rather than a smaller number of people who are deeply engaged, tend to have more diffusion of responsibility, and tend to have much less productive meetings than smaller groups of people. While this can be mitigated somewhat with subcommittees and specialization, I think the optimal number of board members for most EA orgs is 3–6.
Non-profit boards have 100% legal control of the organisation – they can do anything they want with it.
If you give people who aren’t very dedicated to EA values legal control over EA organisations, they won’t be EA organisations for very long.
There are under 5,000 EA community members in the world – most of them have no management experience.
Sure, you could give up 1⁄3 of the control to people outside of the community, but this doesn’t solve the problem (it only reduces the need for board members by 1⁄3).
The assumption that this 1⁄3 would come from outside the community seems to rely on an assumption that there are no lawyers/accountants/governance experts/etc. in the community. It would be more accurate, I think, to say that the 1⁄3 would come from outside what Jack called “high status core EAs.”
Sorry that’s what I meant. I was saying there are 5,000 community members. If you want the board to be controlled by people who are actually into EA, then you need 2⁄3 to come from something like that pool. Another 1⁄3 could come from outside (though not without risk). I wasn’t talking about what fraction of the board should have specific expertise.
Another clarification, what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing. I was using the number of community members as a rough indication for how many people exist who actually apply the principles – I don’t mind if they actively participate in the community or not, and think it’s good if some people don’t.
Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not with fully aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give, having followed 80k’s advice 10 years ago
Ultimately, if you think there is enough value within EA arguments about how to do good, you should be able to find smart people from other walks of life who have: 1) enough overlap with EA thinking (because EA isn’t 100% original, after all) to have a reasonable starting point, along with 2) more relevant leadership experience and demonstrably good judgement, and, linked to the previous two, 3) enough maturity in their opinions and/or achievements to be less susceptible to herding.
If you think that EA orgs won’t remain EA orgs if you don’t appoint “value aligned” people, it implies our arguments aren’t strong enough for people who we think should be convinced by them. If that’s the case, it’s a really good indicator that your argument might not be that good and that you should reconsider.
To be concrete, I expect a board of 50% card-carrying EAs and 50% experienced, high-achieving non-EAs with a good understanding of similar topics (e.g. x-risk, evidence-based interventions) to appraise arguments about what higher-/lower-risk options to fund much better than a board of 100% EAs with the same epistemic and discourse background and limited prior career/board experience.
I agree that there’s a broader circle of people who get the ideas but aren’t “card carrying” community members, and having some of those on the board is good. A board definitely doesn’t need to be 100% self-identified EAs.
Another clarification is that what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing.
This said, I think there are surprisingly few people out there like that. And due to the huge scope differences in the impact of different actions, there can be huge differences between what someone who is e.g. 60% into applying EA principles would do compared to someone who is 90% into it (using a made up scale).
I think a thing that wouldn’t make sense is for, say, Extinction Rebellion to appoint people to their board who “aren’t so sold on climate change being the world’s biggest problem”. Due to the point above, you can end up in something that feels like this more quickly than it first seems or is intuitive.
Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Also I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as EAs, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and by me as well often). But within social-justice circles, emotive language can be seen as a signal of value alignment. Basically, there’s a lot more to unpack with “value alignment” and what it means in reality vs. what we say it ostensibly means.
Also, to tackle your response, and maybe I’m reading between the lines too hard / being too harsh on you here, but I feel there’s goalpost shifting between your original post about EA value alignment and your now stating that people who understand broader principles are also “value aligned”.
Another reflection: the more we speak about “value alignment” being important, the more it incentivises people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give permission and incentivise people to bring those.
Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Yes—but the issue plays itself out one level up.
For instance, most people aren’t very scope sensitive – firstly in their intuitions, and especially when it comes to acting on them.
I think scope sensitivity is a key part of effective altruism, so appointing people who are less scope sensitive to boards of EA orgs is similar to XR appointing people who are less concerned about climate change.
Also I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as EAs, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and by me as well often).
I agree and think this is bad. Another common problem is interpreting agreement on what causes & interventions to prioritise as ‘value alignment’, whereas what actually matters are the underlying principles.
It’s tricky because I think these things do at least correlate with the real thing. I don’t feel like I know what to do about it. Besides trying to encourage people to think more deeply, perhaps trying one or two steps harder to work with people one or two layers out from the current community is a good way to correct for this bias.
Also, to tackle your response, and maybe I’m reading between the lines too hard / being too harsh on you here, but I feel there’s goalpost shifting between your original post about EA value alignment and your now stating that people who understand broader principles are also “value aligned”.
That’s not my intention. I think a strong degree of wanting to act on the values is important for the majority of the board. That’s not the same as self-identifying as an EA, but merely understanding the broad principles is also not sufficient.
(Though I’m happy if a minority of the board are less dedicated to acting on the values.)
(Another clarification from earlier is that it also depends on the org. If you’re doing an evidence-based global health charity, then it’s fine to fill your board with people who are really into global health. I also think it’s good to have advisors from clearly outside of the community – they just don’t have to be board members.)
Another reflection: the more we speak about “value alignment” being important, the more it incentivises people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give permission and incentivise people to bring those.
I agree and this is unfortunate.
To be clear I think we should try to value other perspectives about the question of how to do the most good, and we should aim to cooperate with those who have different values to our own. We should also try much harder to draw on operational skills from outside the community. But the question of board choice is firstly a question of who should be given legal control of EA organisations.
Now having read your reply, I think we’re likely closer together than apart on views. But...
But the question of board choice is firstly a question of who should be given legal control of EA organisations.
I don’t think this is how I see the question of board choice in practice. In theory, yes, for the specific legal, hard mechanisms you mention. But in practice, in my experience, boards significantly check and challenge the direction of the organisation, so the collective ability of board members to do this should be factored into appointment decisions, which may trade off against legal control being put in the ‘safest pair of hands’.
That said, I feel back and forth responses on the EA forum may be exhausting their value here; I feel I’d have more to say in a brainstorm about potential trade-offs between legal control and ability to check and challenge, and open to discussing further if helpful to some concrete issue at hand :)
Yes, legal control is the first consideration, but governance requires skill, not just value alignment
I think in 2023 the skills you want largely exist within the community; it’s just that (a) people can’t find them easily (hence I founded the EA Good Governance Project) and (b) people need to be willing to appoint outside their clique
Alignment is super-important for EA organisations, I would put it as priority number 1, because if you’re aligned to EA values then you’re at least trying to do the most good for the world, whereas if you’re not, you may not be even trying to do that.
Hi Robin—thanks for this and I see your point. I think Jason put it perfectly above—alignment is often about the median Board member, whereas expertise is about the best Board member in a given context. So you can have both.
I have also seen a lot of trustees learn about the mission of the charity as part of the recruitment process and we shouldn’t assume the only aligned people are people who already identify as EAs.
The downsides of prioritising alignment almost to the exclusion of all else are pretty clear, I think, and harder to mitigate than the downsides of lacking technical expertise, which takes years to develop.
The nature of most EA funding also provides a check on misalignment. An EA organization that became significantly misaligned from its major funders would quickly find itself unfunded. As opposed to Wikimedia, which had/has a different funding structure as I understand it.
TL;DR: You’re incorrectly assuming I’m into Nick mainly because of value alignment, and while that’s a relevant factor, the main factor is that he has an unusually deep understanding of EA/x-risk work that competent EA-adjacent professionals lack.
I might write a longer response. For now, I’ll say the following:
I think a lot of EA work is pretty high-context, and most people don’t understand it very well. E.g., when I ran EA Funds work tests for potential grantmakers (which I think is somewhat similar to being a board member), I observed that highly skilled professionals consistently failed to identify many important considerations for deciding on a grant. But, after engaging with EA content at an unusual level of depth for 1-2 years, they can improve a lot (i.e., there were some examples of people improving their grantmaking skills a lot). Most such people never end up attaining this level of engagement, so they never reach the level of competence I think would be required.
I agree with you that too much of a focus on high status core EAs seems problematic.
I think value-alignment in a broader sense (not tracking status, but actual altruistic commitment) matters a great deal. E.g., given the choice between personal prestige and impact, would the person reliably choose the latter? I think some high-status core EAs who were on EA boards were not value-aligned in this sense, and this seems bad.
EDIT: Relevant quote—I think this is where Nick shines as a board member:
For example, if a nonprofit’s mission is “Help animals everywhere,” does this mean “Help as many animals as possible” (which might indicate a move toward focusing on farm animals) or “Help animals in the same way the nonprofit traditionally has” or something else? How does it imply the nonprofit should make tradeoffs between helping e.g. dogs, cats, elephants, chickens, fish or even insects? How a board member answers questions like this seems central to how their presence on the board is going to affect the nonprofit.
@Jack Lewars is spot on. If you don’t believe him, take a look at the list of ~70 individuals on the EA Good Governance Project’s trustee directory. In order to effectively govern you need competence and no collective blindspots, not just value alignment.
I have a fair amount of accounting / legal / governance knowledge and as part of my board commitments think it’s a lot less relevant than deeply understanding the mission and strategy of the relevant organization (along with other more relevant generalist skills like management, HR, etc.). Edit: Though I do think if you’re tied up in the decade’s biggest bankruptcy, legal knowledge is actually really useful, but this seems more like a one-off weird situation.
It seems intuitive that your chances of ending up in a one off weird situation are reduced if you have people who understand the risks properly in advance. I think a lot of what people with technical expertise do on Boards is reduce blind spots.
I think that’s false; I think the FTX bankruptcy was hard to anticipate or prevent (despite warning flags), and accepting FTX money was the right judgment call ex ante.
I think Jack’s point was that having some technical expertise reduces the odds of a Bad Situation happening at a general level, not that it would have prevented exposure to the FTX bankruptcy specifically.
If one really does not want technical expertise on the board, a possible alternative is hiring someone with the right background to serve as in-house counsel, corporate secretary, or a similar role—and then listening to that person. Of course, that costs money.
It’s clear to me that the pre-FTX collapse EVF board, at least, needed more “lawyers/accountants/governance” expertise. If someone had been there to insist on good governance norms, I don’t believe that statutory inquiry would likely have been opened—at a minimum it would have been narrower. Given the very low base rate of SIs, I conclude that the external evidence suggests the EVF UK board was very weak in legal/accounting/governance etc. capabilities.
Overall, I think Nick did the right thing ex ante when he chose to run the Future Fund and accept SBF’s money (unless he knew specifics about potential fraud).
If he should be removed from the board, I think we either need an argument of the form “we have specific evidence to doubt that he’s trustworthy” or “being a board member requires not just absence of evidence of untrustworthiness, but proactively distancing yourself from any untrustworthy actors, even if collaborating with them would be beneficial”. I don’t buy either of these.
“[K]new specifics about potential fraud” seems too high a standard. Surely there is some percentage X at which “I assess the likelihood that these funds have been fraudulently obtained as X%” makes it unacceptable to serve as distributor of said funds, even without any knowledge of specifics of the potential fraud.
I think your second paragraph hinges on the assumption that Nick had sufficient reason to see SBF merely as an “untrustworthy actor[]” rather than something more serious. To me, there are several gradations between “untrustworthy actor[]” and “known fraudster.”
(I don’t have any real basis for an opinion about what Nick in particular knew, by the way . . . I just think we need to be very clear about what levels of non-specific concern about a potential bad actor are or are not acceptable.)
I agree with you: When I wrote “knew specifics about potential fraud”, I meant it roughly in the sense you described.
To my current knowledge, Nick did not have access to evidence that the funds were likely fraudulently obtained. (Though it’s not clear that I would know if it were the case.)
I think I’d bet at like 6% that evidence will come out in the next 10 years that Nick knew funds were likely fraudulently obtained. I think by normal definitions of those words it seems very unlikely to me.
I would be willing to take the other side of this bet, if the definition of “fraud” is restricted to “potentially stealing customer funds” and excludes things like lying to investors.
That was an example; I’d want it to exclude any type of fraud except for the large-scale theft from retail customers that is the primary concern with FTX.
Although at that point—at least in my view—the bet is only about a subset of knowledge that could have rendered it ethically unacceptable to be involved with FTXFF. Handing out money which you believed more likely than not to have been obtained by defrauding investors or bondholders would also be unacceptable, albeit not as heinous as handing out money you believed more likely than not to have been stolen from depositors. (I also think the ethically acceptable risk is less than “more likely than not” but kept that in to stay consistent with Nathan’s proposed bet which used “likely.”)
What if the investor decided to invest knowing there was an X% chance of being defrauded, and thought it was a good deal because there’s still an at least (100-X)% chance of it being a legitimate and profitable business? For what number X do you think it’s acceptable for EAs to accept money?
Fraud base rates are 1-2%; some companies end up highly profitable for their investors despite having committed fraud. Should EA accept money from YC startups? Should EA accept money from YC startups if they e.g. lied to their investors?
I think large-scale defrauding unsuspecting customers (who don’t share the upside from any risky gambles) is vastly worse than defrauding professional investors who are generally well-aware of the risks (and can profit from FTX’s risky gambles).
(I’m genuinely confused about this question; the main thing I’m confident in is that it’s not a very black-and-white kind of thing, and so I don’t want to make my bet about that.)
I don’t know the acceptable risk level either. I think it is clearly below 49%, and includes at least fraud against bondholders and investors that could reasonably be expected to cause them to lose money from what they paid in.
It’s not so much the status of the company as a fraud-commiter that is relevant, but the risk that you are taking and distributing money under circumstances that are too close to conversion (e.g., that the monies were procured by fraud and that the investors ultimately suffer a loss). I can think of two possible safe harbors under which other actors’ acceptance of a certain level of risk makes it OK for a charity to move forward:
In many cases, you could imply a maximum risk of fraud that the bondholders or other lenders were willing to accept from the interest rate minus inflation minus other risk of loss—that will usually reveal that bondholders at least were not factoring in more than a few percent fraud risk (a rough illustrative sketch of this follows at the end of this comment). The risk accepted by equity holders may be greater, but usually bondholders take a haircut in these types of situations—and the marginal dollars you’re spending would counterfactually have gone to them in preference to the equity holders. However, my understanding is that FTX didn’t have traditional bondholders.
If the investors were sophisticated, I think the percentage of fraud risk they accepted at the time of their investment is generally a safe harbor. For FTX, I don’t have any reason to believe this was higher than the single digits; as you said, the base rate is pretty low and I’d expect the public discourse pre-collapse to have been different if it were believed to be significantly higher.
However, those safe harbors don’t work if the charity has access to inside information (that bondholders and equity holders wouldn’t have) and that inside information updates the risk of fraud over the base rates adjusted for information known to the bond/equity holders. In that instance, I don’t think you can ride off of the investor/bondholder acceptance of the risk as low enough.
There is a final wrinkle here—for an entity as unregulated as FTX was, I don’t believe it is plausible to have a relatively high risk of investor fraud and a sufficiently low risk of depositor fraud. I don’t think people at high risk of cheating their investors can be deemed safe enough to take care of depositors. So in this case there is a risk of investor fraud that is per se unacceptable, and a risk of investor fraud that implies an unacceptable risk of depositor fraud. The acceptable risk of investor fraud is the lower of the two.
Exception: If you can buy insurance to ensure that no one is worse off because of your activity, there may be no maximum acceptable risk. Maybe that was the appropriate response under these circumstances—EA buys insurance against the risk of fraud in the amount of the donations, and returns that to the injured parties if there was fraud at the time of any donation which is discovered within a six-year period (the maximum statute of limitations for fraudulent conveyance in any U.S. state to my knowledge). If you can’t find someone to insure you against those losses at an acceptable rate . . . you may have just found your answer as to whether the risk is acceptable.
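A rough, purely illustrative back-of-the-envelope for the bondholder point above, using made-up numbers rather than anything known about FTX specifically:

```python
# Crude sketch of inferring the fraud risk a lender could have been pricing in:
# nominal yield minus inflation minus ordinary (non-fraud) expected losses
# leaves the "spare" return available to compensate for fraud risk.
def implied_max_fraud_risk(nominal_yield: float,
                           inflation: float,
                           other_loss_risk: float,
                           recovery_if_fraud: float = 0.0) -> float:
    """Upper bound on the fraud probability consistent with a lender still
    expecting a non-negative real return (a very simplified model)."""
    spare_return = nominal_yield - inflation - other_loss_risk
    loss_given_fraud = 1.0 - recovery_if_fraud
    return max(spare_return, 0.0) / loss_given_fraud

# Hypothetical numbers: an 8% coupon, 3% inflation, 2% ordinary default risk,
# and a near-total loss if there is fraud.
print(f"{implied_max_fraud_risk(0.08, 0.03, 0.02):.1%}")  # about 3.0%
```

On made-up numbers like these, a rational lender can only have been pricing in a few percent of fraud risk, which is where the “few percent” figure above comes from.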
Edit: mainly after talking to Naia in the comments, I update towards being uncertain about whether we knew SBF was unusually badly behaved (from being confident he wasn’t). I.e. maybe we did have the information required to be pretty sure he wasn’t worth funding, or that we should have kept him at arm’s length. As I say, I am uncertain, but previously I dismissed this
I am quite confident we knew he was unusually badly behaved by EA standards. I think a bunch of people thought he was not that far of an outlier by Silicon Valley founder standards, though I think they were wrong.
I still sense that core EAs take this seriously
I do indeed think a bunch of people are taking this quite seriously. I do think that in general the FTX explosion is hugely changing a lot of people’s outlook on EA and how to relate to the world.
I still think they don’t think they can talk
To be clear, I am quite confident the legal consequences for talking would be quite minor and have talked to some people with extensive legal experience about this. At this point there is no good legal reason to not talk.
I still don’t understand why they can’t give a clear promise of when they will talk, and the lack of this makes me trust them less
I think people aren’t talking because it seems stressful and because they are worried being more associated with FTX would be bad for PR. I also think a lot of people kind of picked up on a vibe that core people aren’t supposed to talk about FTX stuff, because it makes us look bad and because there used to be a bunch of organizational policies in place that did prevent talking, but I think those are mostly gone by now.
None of the early Alameda employees who witnessed Bankman-Fried’s behavior years earlier say they anticipated this level of alleged criminal fraud. There was no “smoking gun,” as one put it, that revealed specific examples of lawbreaking. Even if they knew Bankman-Fried was dishonest and unethical, they say, none of them could have foreseen a fraud of this scope.
If even they didn’t think this, I don’t think we should be surprised that core EAs didn’t either.
While I agree with a strict reading of this comment, I want to point out that there was another red flag around FTX/Alameda that several people in the EA leadership likely knew about since at least late 2021, which in my opinion was more severe than the matters discussed in the Time article and which convinced me back in 2021 that FTX/Alameda were putting a lot of effort into consistently lying to the public.
In particular, in October 2021, I witnessed (sentence spread across bullet points to give emphasis to each part of it):
A high-status, important EA (though not one of the very top people)
who had worked at Alameda after FTX was founded, and left before October 2021
publicly offhandedly implying that “FTX” was the new name for Alameda (seemingly unaware that they were supposed to be distinct companies)
in a place where another very high-status EA (this time probably one of the very top people, or close to it) clearly took note of it
I won’t give more details publicly, in order to protect the person’s privacy, but this happened in a very public place.
It wasn’t just me who was aware of this. Nate Soares reported having heard the rumor that “Alameda Research had changed its name to FTX” as well, though I think he left out one important aspect of it: that this rumor was being shared by former insiders, not by e.g. random clueless crypto people.
In case you don’t understand why the rumor was a big deal, I explained it in my comment in Nate Soares’s post. Quoting from it:
Everywhere on the public internet, Alameda Research and FTX had painted themselves as clearly different companies. Since October 2021, they’ve ostensibly had disjoint sets of CEOs. By late 2021 I had watched several interviews with SBF and followed his output closely on Twitter, and saw people talking about Alameda and FTX in several crypto Discord servers. Nowhere did anyone say that Alameda had changed its name to FTX or otherwise act as if they were the same company (though everyone knew they were close).
[...]
How could [the former Alameda employee] know less about FTX and Alameda than me, who had never worked at either company and was just watching everything by the sidelines? If it was possible for this person to think that FTX was merely the new name for Alameda, that almost certainly implied that the FTX/Alameda leadership was putting a lot of effort into consistently lying to the public.
I suspect you will not be very impressed by this, and ask me why I didn’t share my concerns widely at the time. But I was just a low-status person with no public platform and only one or two friends. I shared my concerns with my partner (in fact, more than once, because I was so puzzled by that comment) but not with people I’m not close to. [ETA: in retrospect, I think a more correct explanation would be to say that I probably stayed silent because I guessed I’d lose status if I’d spoken up. 🙁]
I’m not sure why this wasn’t taken seriously by the EA leadership. This seems to be a pretty clear example of the FTX/Alameda leadership blatantly lying to the public about their internal workings and prominent EAs knowing about that.
I of course knew that FTX and Alameda were very closely related, that Sam and Caroline were dating and that the two organizations had very porous boundaries. But I did not know that the rest of the world did not know this, and I had no idea that it mattered much legally. They were both located in the Bahamas, and I definitely did not know enough finance law to know that these two organizations were supposed to operate at arm’s length.
Maybe there was someone else who did successfully put the pieces together and then stayed quiet, but I would definitely not treat this as a “smoking gun” since I saw it myself, as did many people I know, and neither me nor other people realized that this was actually pretty shady.
And legally they actually were distinct companies, at least in the same way GiveWell and OpenPhil were different companies back in 2018 when they still worked in the same office and shared some leadership, and that itself doesn’t raise any flags for me. FTX and Alameda definitely actually were different companies, with some offices being just for Alameda (like the one in Berkeley), though they definitely did not operate “at arm’s length” as I think the legal situation required.
I’m similar. In general I’m noticing a mismatch between how the article leaves me feeling versus what it leaves me thinking. E.g. concerns about the other half of the FTX/Alameda split are labelled as just “internal politics,” but when EA leaders treat the concerns about SBF as “typical startup squabbles” that’s labelled “downplaying,” “rationalizing,” or “dismissing.” (Obviously with the benefit of hindsight we think that’s fair, but we don’t know how different the two sides actually looked to outsiders at the time.)
By the way, I really like your approach of separating out feelings and thoughts.
“[Name], who had perhaps raised the loudest concerns about Bankman-Fried, was distrusted by some EA leaders because of internal politics during her time at the Centre for Effective Altruism”
I tried to address this argument with the point about every other long-time EA leaving Alameda for the same reasons. I’ve avoided naming those other EAs out of respect for their privacy, but they include multiple very core and well-respected EAs. The parallel you’re trying to draw here just really doesn’t hold up.
Personal feelings (which I don’t imply are true or actionable)
I am annoyed and sad.
I want to feel like I can trust the leaders of this community are playing by a set of agreed rules. Eg I want to hear from them. And half of me trusts them and half feels I should take an outside view that leaders often seek to protect their own power. The disagreement between these parts causes hurt and frustration.
I also variously feel hurt, sad, afraid, compromised, betrayed.
I feel ugly that I talk so much about my feelings too. It feels kind of obscene.
I feel sad that saying negative things, especially about Will. I sense he’s worked really hard. I feel ungrateful and snide. Yuck.
Object level
I don’t think this article moves me muchThis article moves me a bit on a number of important things:We have some more colour around the specific warnings that were given
It becomes much more likely that MacAskill backed Bankman-Fried in the aftermath of the the early Alameda disagreements which was ex-ante, dubious and ex-post disasterous. The comment about threatening Mac Auley is very concerning.
I update a bit that Sam used this support as cover
I sense that people ought to take the accusations of inappropriate sexual relationships more seriously to be consistent, though I personally I am uncertain cos we don’t have much information
edit mainly after talking to Naia in the comments, I update towards being uncertain about whether we knew SBF was unusually badly behaved (from being confident he wasn’t). ie maybe we did have the information required to be pretty sure he wasn’t worth funding or to keep him at arms length. As I say I am uncertain but previously I dismissed this
The 80k interview feels even worse researched/too soft than I previously thought
I still sense that core EAs take this seriously
I still think they don’t think they can talk
I still don’t understand why they can’t give a clear promise of when they will talk and that the lack of this makes me trust them less
I think we had lots of info that sam was a bit dodgy before the FTX crash, but that this was not above normal the levels of many CEOs of rapidly growing business (I have read most about Google’s early days and it was very shifty)Perhaps EA should have higher standards, but I sense not.I still think that we should have been much more careful linking ourselves reputationally to FTXI think the big thing here to note is that even those who saw sam at his worst did not expect the FTX crash, so I guess the question is “should sam have been lauded, given his early behaviour at alameda”. I think no, but given what we knew I am uncertain whether he should have been condemned
not condemned eitherI think not talking while there is in investigation is reasonable
I have made both criticisms and defences of MacAskill and Beckstead and stand by them
I still think they are both very talented, perhaps more so as a result of the growth and wisdom this will engender in them (I have often thought it was dumb that people remove leaders who make mistakes) edit Though this article does add additional concerns
I would still like an argument that they shouldn’t be removed from boards, when almost any other org would. I would like the argument made and seen to be made.
I have noticed how hard it is to talk publicly about these things. Recently I’ve updated more in favour that there just are emotional, social and some career costs (and benefits) to trying to have accurate semi-public discussions about these things. People DM me. I hear people are annoyed, I build both social credit and debt. I think less than some say, but more than none.
I cannot deny that I am tempted to mediate my comments so that people will like me and probably do a bit
Missing context
I have a reasonable amount of time for the notion that EA leaders should have an independent investigation and I don’t think the article gives that enough credit
Many business leaders are disagreeable people who do grey things. Uber’s activities were deliberately illegal in many countries and I probably on balance support that. edit I am less in agreement with my tone here. The article mentions this, but in my opinion it should be written in big letters in the top that:
If even they didn’t think this, I don’t think we should be surprised that core EAs didn’t either.
Relevant things people may or may not have known:
In the early days of Alameda, SBF reneged on deals with other EAs and had very poor financial management. Many core EAs knew this
Other:
My gut says that Naia Bouscal is telling the truth; before I knew her in relation to this, I thought hers was a pretty straight-shooting Twitter account.
Edited to combine two comments (one personal one more general) into one and add points as I think of them.
How do you feel?
What do you think?
It was, and we explicitly said that it was at the time. Many of those of us who left have a ton of experience in startups, and the persistent idea that this was a typical “founder squabble” is wrong, and to be honest, getting really tiresome to hear. This was not a normal startup, and these were not normal startup problems.
(Appreciate the words of support for my honesty, thank you!)
You may indeed believe that and have said that, but the question for us is: Was it reasonable for EA leaders to think this degree of bad behaviour was particularly out of the ordinary for the early days of a startup?
To take Nathan Young’s four examples, looking at some of what major news outlets said prior to 2018 about these companies’ early days...it doesn’t seem that unusual? (Assuming we now know all the key accusations that were made—there may of course have been more.)
Facebook
“The company and its employees have also been subject to litigation cases over the years...with its most prominent case concerning allegations that CEO Mark Zuckerberg broke an oral contract with Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra to build the then-named “HarvardConnection” social network in 2004, instead allegedly opting to steal the idea and code to launch Facebook months before HarvardConnection began… The original lawsuit was eventually settled in 2009, with Facebook paying approximately $20 million in cash and 1.25 million shares.” (Wikipedia, referencing articles from 2007 to 2011)
“Facebook co-founder, Eduardo Saverin, no longer works at Facebook. He hasn’t since 2005, when CEO Mark Zuckerberg diluted Saverin’s stake in Facebook and then booted him from the company.” (Business Insider, 2012)
“we also uncovered two additional anecdotes about Mark’s behavior in Facebook’s early days that are more troubling...— an apparent hacking into the email accounts of Harvard Crimson editors using data obtained from Facebook logins, as well as a later hacking into ConnectU” (Business Insider, 2010)
Google
“Asked about his approach to running the company, Page once told a Googler his method for solving complex problems was by reducing them to binaries, and then simply choosing the best option,” Carlson writes. “Whatever the downside he viewed as collateral damage he could live with.” That collateral damage sometimes consisted of people. In 2001, frustrated with the layer of managers overseeing engineers, Page decided to fire all of them, and publicly explained that he didn’t think the managers were useful or doing any good.” (Quartz, 2014)
“Page encouraged his senior executives to fight the way he and Brin went at it. In meetings with new hires, one of the two co-founders would often provoke an argument over a business or product decision. Then they would both sit back, watching quietly as their lieutenants verbally cut each other down.” (Business Insider, 2014)
Gates
“Allen portrays the Microsoft mogul as a sarcastic bully who tried to force his founding partner out of the firm and to cut his share in the company as he was recovering from cancer.” (Guardian, 2011)
“...he recalls the harsher side of Gates’s character, anecdotes from the early days...Allen stopped playing chess with Gates after only a few games because Gates was such a bad loser he would sweep the pieces to the floor in anger; or how Gates would prowl the company car park at weekends to check on who had come in to work; or the way he would browbeat Allen and other senior colleagues, launching tirades at them and putting them down with the classic denigrating comment: ‘That’s the stupidest fucking thing I’ve ever heard!’” (Guardian, 2011)
“They met in 1987, four months into her job at Microsoft...meeting her in the Microsoft car park, he asked her out” (Independent, 2008)
Bezos
Obviously his treatment of workers is no secret (and it seems natural for people to think he’s probably always been this way)
It’s not surprising to me if EA leaders thought most startups were like this—we just only hear stories about the ones that make it big.
I’ve only worked for one startup myself and I wasn’t privy to what went on between executives, but: one of them said to a (Black, incidentally) colleague upon firing him “You’ll never work again,” another was an older married man who was grinding up against young female colleagues at an office party (I actually suggested he go home and he said, “No—I’m having fun” and laughed and went back to it), and another made a verbal agreement with some of us to pay us overtime if we worked 12-hour days for several weeks and then simply denied it and never did. [edit: I should clarify this was not an EA org]
Thanks for giving honest quotes on a serious crime. On balance I’m in favour of your giving quotes here and that can’t have been easy (though I feel the article is inaccurate in tone).
I’m sad to hear that this is tiring, though I still am gonna say things I think. Feel free to DM me if you think I’m wrong but don’t want to engage publicly.
My error. By “normal” I don’t mean good, I mean “not unusual”.
I sense there was this level of concern externally about Facebook, that Google did some pretty shifty shit, and that Gates and Bezos were similarly cutthroat.
Do you disagree?
Yes, I disagree. My understanding of what happened at each of those four companies in the early days is qualitatively, categorically different from what happened at Alameda.
It really must feel awful to report serious misconduct and have it not be taken seriously. I’ve had a similar experience and it crushed me mentally.
I’ve been thinking about this situation a lot. I don’t know many details, but I’m trying to sort through what I think EA leadership should have done differently.
My main thought is, maybe in light of these concerns, they should have kept taking his money, but not tied themselves to him as much. But I don’t know many details about how they tied themselves to him. It’s just that handling misconduct cases gets complicated when the misconduct is by one of the 100 richest people in the world. And while it’s clear Sam treated people poorly, broke many important rules and lied frequently, it was not clear he was stealing money from customers. And so it just leaves me confused. But thank God I am not in charge of handling these sorts of things.
I know it’s also not your responsibility to know what to do in situations like this, but I’d be curious to hear what you wished EA leadership/infrastructure had done differently. I think that might help give shape to my thoughts around this situation.
I don’t know if I’m communicating super clearly here, so I want to clarify. This is not meant as a critical comment at all! I hope it doesn’t read as downplaying your experience, because I do feel super alarmed about everything and get the sense EA fucked up big here. I feel fully in support of you here, but I’m worried my confusion makes that harder to read.
Retracting because on reflection I’m like, no one knew he was stealing funds, but I think leadership knew enough of the ingredients to not be surprised by this. It’s not just Sam treating employees poorly, but leadership heard that he would lie to people (including investors), mix funds, and play fast and loose with the rules. They may not have known the disastrous ways these would combine. Even so, it seems super bad, and while I’m still confused as to what the ideal way to handle it would have been, it does seem clear to me it was egregiously mishandled.
One important thing to note is that when we first warned everyone, he was not yet among the 100 richest people in the world. If they had taken our warnings seriously at the start, he may never have become that rich in the first place.
Agree. And also worth noting it seems like he may have never actually been that rich, but just, you know, lied and did fraud.
The general thing I’m hearing is, with a lot of people who do misconduct, you/CEA will hear about this misconduct relatively early on, and they should take action before things get too large to correct. That is, early and decisive action is important. Leadership should be taking a lot more initiative in responding to misconduct.
This tracks with my experience too. I’ve reported professional misconduct, had it not be taken seriously, and watched that person continue to gain power. The whole experience was maddening. So, yeah, +1 to early intervention following credible misconduct reports.
Feel free to DM me, I’ve complained before, I’ll complain again.
Unless somehow it is me, in which case get someone to make a prediction market and bet it up using a newly created google account.
Lol not you. I deleted most detail I included in that comment because I feel like it’s distracting from the SBF discussion (like, this convo should not be used as a soapbox for me), and the case has recently been reopened (which means it’s probably best if I don’t talk about it, and also there might be a good outcome). And I also just worry about pissing people off.
It’s also like, what are people supposed to do with an anonymous comment with a very vague allegation.
Well I’d still like to know. My general stance is that information about misdeeds should more often be public. I wish that I’d known what many knew about SBF.
Hm maybe, I’m not sure. I like to have a professional atmosphere, and public sharing of misdeeds can lead to a culture of like gossip. But, I think it is appropriate to speak publicly about it if the situation was mishandled (in my case, unclear as it’s been reopened) or if the person should be blacklisted (I do not think this is the case here).
Again, I think I’d like more public sharing on professional misdeeds, on the margin. Many EA orgs have mistakes pages for this reason and that’s good.
I think the main problem being faced again and again is that internal reporting lacks teeth.
I think public reporting is an inadequate alternative. It’s a big demand to ask people to become public whistleblowers, especially since most things worth reporting aren’t always black and white. It’s hard to publicly speak out about things if you’re not certain about them (eg because of self-doubt, wondering if it’s even worth bothering, creating a reputation for yourself, etc).
Additionally, the subsequent discourse seems to put additional burden on those speaking out. If I spoke up about something just to see a bunch of people doubt what I’ve said is true (or, like in previous cases, have to engage with the wrongdoer and proofread their account of events) I’d probably regret my choice.
I think that the wiki could solve this. Having public records that someone hard-nosed (like me) could write on others’ behalf.
I know that my messing with prediction markets around this hasn’t always gone well (sorry) but I think there is something good in that space too. I think Sam’s “chance of fraud” would have been higher than anyone else’s.
I don’t think gossip ought to be that public or legible.
Firstly, I don’t think it would work for achieving your goals; I would still hesitate about having my opinions uploaded without feeling very confident in them (rumours are powerful weapons and I wouldn’t want to start one if I was uncertain).
Secondly, I don’t think it’s worth the costs of destroying trust. A whole bunch more people will distance themselves from EA if they know their public reputation is on the line with every interaction. (I also agree with Lawrence on the Slack leaks, FWIW).
I see why you might want public info (akin to scandal markets) when people are more high-profile, but I don’t think Sam Bankman-Fried would have passed that bar in 2018.
I disagree. I upload 60%-confidence opinions all the time. I would do the same about gossip if I thought I could control it.
I think we could build systems to handle this. I think there is something whistleblower-markety here.
I think he would have passed that bar as FTX got going. He also might have in 2018.
Fair enough! It could be useful, so I’d be happy to be wrong here.
fwiw I will probably post something in the next ~week (though I’m not sure if I’m one of the people you are waiting to hear from).
I feel glad.
Personally it doesn’t need to be soon, but I do appreciate having something I can hold you to; that makes me not worry that this is an attempt to minimise.
Here’s my tentative take:
It’s really hard to find competent board members that meet the relevant criteria.
Nick (together with Owen) did a pretty good job turning CEA from a highly dysfunctional organization into a functional one during CEA’s leadership change in 2018/2019.
Similarly, while Nick took SBF’s money, he didn’t give SBF a strong platform or otherwise promote him a lot, and instead tried to independently do a good (not perfect, but good enough!) job running a philanthropic organization. While SBF may have wanted to use the philanthropy to promote the FTX/SBF brand, Nick didn’t do this. [Edit: This should not be read as me implying that Will did those things. While I think Will made some mistakes, I don’t think this describes them.]
Continuity is useful. Nick has seen lots of crises and presumably learnt from them.
So, while Will should be removed, Nick has demonstrated competence and should stay on.
(Meta note: I feel frustrated about the lack of distinction between Nick and Will on this question. People are a bit like “Will did a poor job, therefore Nick and Will should be removed from the board.” Please, discuss the two people separately.)
Thanks for making the case. I’m not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I’m highly confident is false.
Namely—it isn’t hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I’ve recruited ~20 very good Board members in my career and have never run an open process that didn’t find at least some qualified, diligent people, who did a good job.
EA makes it hard because it’s weirdly resistant to looking outside a very small group of people, usually high status core EAs. This seems to me like one of those unfortunate examples of EA exceptionalism, where EA thinks its process for finding Board members needs to be sui generis. EA makes Board recruitment hard for itself by prioritising ‘alignment’ (which usually means high status core EAs) over competence, sometimes with very bad results (e.g. ending up with a Board that has a lot of philosophers and no lawyers/accountants/governance experts).
It also sometimes sounds like EA orgs think their Boards have higher entry requirements than the Boards of other well-run charities. Ironically, this typically produces very low quality EA Boards, mainly made up of inexperienced people without relevant professional skills, but who are thought of as ‘smart’ and ‘aligned’.
Of course, it will be hard to find new Board members right now, because CEA’s reputation is in tatters and few people will want to join an organisation that is under serious legal threat. But it seems at best a toss up whether it’s worth keeping tainted Board member(s) because they might be tricky to replace, especially when they have recused themselves from literally the single biggest issue facing the charity.
And even if one really values “alignment,” I suspect that a board’s alignment is mostly that of its median member. That may have been less true at EVF where there were no CEOs, but boards are supposed to exercise their power collectively.
On the other hand, a board’s level of legal, accounting, etc. knowledge is not based on the mean or median; it is mainly a function of the most knowledgeable one or two members.
So if one really values alignment on say a 9-member board, select six members with an alignment emphasis and three with a business skills emphasis. (The +1 over a bare majority is to keep an alignment majority if someone has to leave.)
You seem to imply that it’s fine if some board members are not value-aligned as long as the median board member is. I strongly disagree: This seems a brittle setup because the median board member could easily become non-value-aligned if some of the more aligned board members become busy and step down, or have to recuse due to a COI (which happens frequently), or similar.
I’m very surprised that you think a 3 person Board is less brittle than a bigger Board with varying levels of value alignment. How do 3 person Boards deal with all the things you list that can affect Board make up? They can’t, because the Board becomes instantly non-quorate.
I expect a 3-person board with a deep understanding of and commitment to the mission to do a better job selecting new board members than a 9-person board with people less committed to the mission. I also expect the 9-person board members to be less engaged on average.
(I avoid the term “value-alignment” because different people interpret it very differently.)
I don’t agree with that characterization.
On my 6/3 model, you’d need four recusals among the heavily aligned six and zero among the other three for the median member to be other; three for the median to be between heavily aligned and other (a toy calculation below illustrates this). If you’re having four of six need to recuse on COI grounds, there are likely other problems with board composition at play.
Also, suggesting that alignment is not the “emphasis” for each and every board seat doesn’t mean that you should put misaligned or truly random people in any seat. One still should expect a degree of alignment, especially in seat seven of the nine-seat model. Just like one should expect a certain level of general board-member competence in the six seats with alignment emphasis.
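For anyone who wants that arithmetic spelled out, here is a minimal sketch, assuming we score the six alignment-emphasis seats as 1 and the three skills-emphasis seats as 0 (the scoring and the function name are my own illustration, not anything stated in the comments above):

```python
from statistics import median

def median_alignment(aligned_recusals: int, aligned: int = 6, other: int = 3):
    # Board members remaining once some alignment-emphasis members recuse;
    # 1 = alignment-emphasis seat, 0 = skills-emphasis seat.
    remaining = [1] * (aligned - aligned_recusals) + [0] * other
    return median(remaining)

for recusals in range(5):
    print(recusals, median_alignment(recusals))
# 0-2 recusals -> median 1 (aligned); 3 -> 0.5 (between); 4 -> 0 (other)
```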
I think 9-member boards are often a bad idea because they tend to have lots of people who are shallowly engaged, rather than a smaller number of people who are deeply engaged, tend to have more diffusion of responsibility, and tend to have much less productive meetings than smaller groups of people. While this can be mitigated somewhat with subcommittees and specialization, I think the optimal number of board members for most EA orgs is 3–6.
This is a really good comment!
Non-profit boards have 100% legal control of the organisation – they can do anything they want with it.
If you give people who aren’t very dedicated to EA values legal control over EA organisations, they won’t be EA organisations for very long.
There are under 5,000 EA community members in the world – most of them have no management experience.
Sure, you could give up 1/3 of the control to people outside of the community, but this doesn’t solve the problem (it only reduces the need for board members by 1/3).
The assumption that this 1/3 would come from outside the community seems to rely on an assumption that there are no lawyers/accountants/governance experts/etc. in the community. It would be more accurate, I think, to say that the 1/3 would come from outside what Jack called “high status core EAs.”
Sorry that’s what I meant. I was saying there are 5,000 community members. If you want the board to be controlled by people who are actually into EA, then you need 2/3 to come from something like that pool. Another 1/3 could come from outside (though not without risk). I wasn’t talking about what fraction of the board should have specific expertise.
Another clarification, what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing. I was using the number of community members as a rough indication for how many people exist who actually apply the principles – I don’t mind if they actively participate in the community or not, and think it’s good if some people don’t.
Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not by fully-aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give, having followed 80k’s advice 10 years ago.
Ultimately, if you think there is enough value within EA arguments about how to do good, you should be able to find smart people from other walks of life who: 1) have enough overlap with EA thinking (because EA isn’t 100% original, after all) to have a reasonable starting point; 2) have more relevant leadership experience and demonstrably good judgement; and, linked to the previous two, 3) are mature enough in their opinions and/or achievements to be less susceptible to herding.
If you think that EA orgs won’t remain EA orgs if you don’t appoint “value aligned” people, it implies our arguments aren’t strong enough for people who we think should be convinced by them. If that’s the case, it’s a really good indicator that your argument might not be that good and that you should reconsider.
To be concrete, I expect a board of 50% card-carrying EAs and 50% experienced, high-achieving non-EAs with a good understanding of similar topics (e.g. x-risk, evidence-based interventions) to appraise arguments about which higher-/lower-risk options to fund much better than a board of 100% EAs with the same epistemic and discourse background and limited prior career/board experience.
Edit- clarity and typos
I agree that there’s a broader circle of people who get the ideas but aren’t “card carrying” community members, and having some of those on the board is good. A board definitely doesn’t need to be 100% self-identified EAs.
Another clarification is that what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing.
This said, I think there are surprisingly few people out there like that. And due to the huge scope differences in the impact of different actions, there can be huge differences between what someone who is e.g. 60% into applying EA principles would do compared to someone who is 90% into it (using a made up scale).
I think a thing that wouldn’t make sense is for, say, Extinction Rebellion to appoint people to their board who “aren’t so sold on climate change being the world’s biggest problem”. Due to the point above, you can end up in something that feels like this more quickly than it first seems or is intuitive.
Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Also I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as EAs, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and by me as well often). But within social-justice circles, emotive language can be seen as a signal of value alignment. Basically, there’s a lot more to unpack with “value alignment” and what it means in reality vs. what we say it ostensibly means.
Also, to tackle your response, and maybe I’m reading between the lines too hard / being too harsh on you here, but I feel there’s goalpost shifting between your original post about EA value alignment and your now stating that people who understand broader principles are also “value aligned”.
Another reflection: the more we speak about “value alignment” being important, the more it incentivises people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give permission and incentivise people to bring those.
Yes—but the issue plays itself out one level up.
For instance, most people aren’t very scope sensitive – firstly in their intuitions, and especially when it comes to acting on them.
I think scope sensitivity is a key part of effective altruism, so appointing people who are less scope sensitive to boards of EA orgs is similar to XR appointing people who are less concerned about climate change.
I agree and think this is bad. Another common problem is interpreting agreement on what causes & interventions to prioritise as ‘value alignment’, whereas what actually matters are the underlying principles.
It’s tricky because I think these things do at least correlate with the real thing. I don’t feel like I know what to do about it. Besides trying to encourage people to think more deeply, perhaps trying one or two steps harder to work with people one or two layers out from the current community is a good way to correct for this bias.
That’s not my intention. I think a strong degree of wanting to act on the values is important for the majority of the board. That’s not the same as self-identifying as an EA, but merely understanding the broad principles is also not sufficient.
(Though I’m happy if a minority of the board are less dedicated to acting on the values.)
(Another clarification from earlier is that it also depends on the org. If you’re doing an evidence-based global health charity, then it’s fine to fill your board with people who are really into global health. I also think it’s good to have advisors from clearly outside of the community – they just don’t have to be board members.)
I agree and this is unfortunate.
To be clear I think we should try to value other perspectives about the question of how to do the most good, and we should aim to cooperate with those who have different values to our own. We should also try much harder to draw on operational skills from outside the community. But the question of board choice is firstly a question of who should be given legal control of EA organisations.
Now having read your reply, I think we’re likely closer together than apart on views. But...
I don’t think this is how I see the question of board choice in practice. In theory yes, for the specific legal, hard mechanisms you mention. But in practice, in my experience, boards significantly check and challenge the direction of the organisation, so the collective ability of board members to do this should be factored into appointment decisions, which may trade off against legal control being put in the ‘safest pair of hands’.
That said, I feel back and forth responses on the EA forum may be exhausting their value here; I feel I’d have more to say in a brainstorm about potential trade-offs between legal control and ability to check and challenge, and open to discussing further if helpful to some concrete issue at hand :)
Two quick points:
Yes, legal control is the first consideration, but governance requires skill not just value-alignment
I think in 2023 the skills you want largely exist within the community; it’s just that (a) people can’t find them easily (hence I founded the EA Good Governance Project) and (b) people need to be willing to appoint outside their clique
Alignment is super-important for EA organisations, I would put it as priority number 1, because if you’re aligned to EA values then you’re at least trying to do the most good for the world, whereas if you’re not, you may not be even trying to do that.
For an example of a not-for-profit non-EA organisation that has suffered from a lack of alignment in recent times, I would point to the Wikimedia Foundation, which has regranted excess funds to extremely dubious organisations: https://twitter.com/echetus/status/1579776106034757633 (see also: https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2022-10-31/News_and_notes ). This is quite apart from the encyclopedia project itself arguably deviating from its stated goals of maintaining a neutral point of view, which is a whole other level of misalignment, but I won’t get into that here.
Hi Robin—thanks for this and I see your point. I think Jason put it perfectly above—alignment is often about the median Board member, where expertise is about the best Board member in a given context. So you can have both.
I have also seen a lot of trustees learn about the mission of the charity as part of the recruitment process and we shouldn’t assume the only aligned people are people who already identify as EAs.
The downsides of prioritising alignment almost to the exclusion of all else are pretty clear, I think, and harder to mitigate than the downsides of lacking technical expertise, which takes years to develop.
The nature of most EA funding also provides a check on misalignment. An EA organization that became significantly misaligned from its major funders would quickly find itself unfunded. As opposed to Wikimedia, which had/has a different funding structure as I understand it.
TL;DR: You’re incorrectly assuming I’m into Nick mainly because of value alignment, and while that’s a relevant factor, the main factor is that he has an unusually deep understanding of EA/x-risk work that competent EA-adjacent professionals lack.
I might write a longer response. For now, I’ll say the following:
I think a lot of EA work is pretty high-context, and most people don’t understand it very well. E.g., when I ran EA Funds work tests for potential grantmakers (which I think is somewhat similar to being a board member), I observed that highly skilled professionals consistently failed to identify many important considerations for deciding on a grant. But, after engaging with EA content at an unusual level of depth for 1-2 years, they can improve a lot (i.e., there were some examples of people improving their grantmaking skills a lot). Most such people never end up attaining this level of engagement, so they never reach the level of competence I think would be required.
I agree with you that too much of a focus on high status core EAs seems problematic.
I think value-alignment in a broader sense (not tracking status, but actual altruistic commitment) matters a great deal. E.g., given the choice between personal prestige and impact, would the person reliably choose the latter? I think some high-status core EAs who were on EA boards were not value-aligned in this sense, and this seems bad.
EDIT: Relevant quote—I think this is where Nick shines as a board member:
@Jack Lewars is spot on. If you don’t believe him, take a look at the list of ~70 individuals on the EA Good Governance Project’s trustee directory. In order to effectively govern you need competence and no collective blindspots, not just value alignment.
I’m definitely not saying value alignment is the only thing to consider.
I have a fair amount of accounting / legal / governance knowledge and as part of my board commitments think it’s a lot less relevant than deeply understanding the mission and strategy of the relevant organization (along with other more relevant generalist skills like management, HR, etc.). Edit: Though I do think if you’re tied up in the decade’s biggest bankruptcy, legal knowledge is actually really useful, but this seems more like a one-off weird situation.
It seems intuitive that your chances of ending up in a one off weird situation are reduced if you have people who understand the risks properly in advance. I think a lot of what people with technical expertise do on Boards is reduce blind spots.
I think that’s false; I think the FTX bankruptcy was hard to anticipate or prevent (despite warning flags), and accepting FTX money was the right judgment call ex ante.
I think Jack’s point was that having some technical expertise reduces the odds of a Bad Situation happening at a general level, not that it would have prevented exposure to the FTX bankruptcy specifically.
If one really does not want technical expertise on the board, a possible alternative is hiring someone with the right background to serve as in-house counsel, corporate secretary, or a similar role—and then listening to that person. Of course, that costs money.
I read his comment differently, but I’ll stop engaging now as I don’t really have time for this many follow-ups, sorry!
It’s clear to me that the pre-FTX collapse EVF board, at least, needed more “lawyers/accountants/governance” expertise. If someone had been there to insist on good governance norms, I don’t believe that statutory inquiry would likely have been opened—at a minimum it would have been narrower. Given the very low base rate of SIs, I conclude that the external evidence suggests the EVF UK board was very weak in legal/accounting/governance etc. capabilities.
Overall, I think Nick did the right thing ex ante when he chose to run the Future Fund and accept SBF’s money (unless he knew specifics about potential fraud).
If he should be removed from the board, I think we either need an argument of the form “we have specific evidence to doubt that he’s trustworthy” or “being a board member requires not just absence of evidence of untrustworthiness, but proactively distancing yourself from any untrustworthy actors, even if collaborating with them would be beneficial”. I don’t buy either of these.
“[K]new specifics about potential fraud” seems too high a standard. Surely there is some percentage X at which “I assess the likelihood that these funds have been fraudulently obtained as X%” makes it unacceptable to serve as distributor of said funds, even without any knowledge of specifics of the potential fraud.
I think your second paragraph hinges on the assumption that Nick had sufficient reason to see SBF as merely an “untrustworthy actor[]” rather than something more serious. To me, there are several gradations between “untrustworthy actor[]” and “known fraudster.”
(I don’t have any real basis for an opinion about what Nick in particular knew, by the way . . . I just think we need to be very clear about what levels of non-specific concern about a potential bad actor are or are not acceptable.)
I agree with you: When I wrote “knew specifics about potential fraud”, I meant it roughly in the sense you described.
To my current knowledge, Nick did not have access to evidence that the funds were likely fraudulently obtained. (Though it’s not clear that I would know if it were the case.)
I think I’d bet at like 6% that evidence will come out in the next 10 years that Nick knew funds were likely fraudulently obtained. I think by normal definitions of those words it seems very unlikely to me.
I would be willing to take the other side of this bet, if the definition of “fraud” is restricted to “potentially stealing customer funds” and excludes things like lying to investors.
So: excludes securities fraud?
That was an example; I’d want it to exclude any type of fraud except for the large-scale theft from retail customers that is the primary concern with FTX.
Although at that point—at least in my view—the bet is only about a subset of knowledge that could have rendered it ethically unacceptable to be involved with FTXFF. Handing out money which you believed more likely than not to have been obtained by defrauding investors or bondholders would also be unacceptable, albeit not as heinous as handing out money you believed more likely than not to have been stolen from depositors. (I also think the ethically acceptable risk is less than “more likely than not” but kept that in to stay consistent with Nathan’s proposed bet which used “likely.”)
What if the investor decided to invest knowing there was an X% chance of being defrauded, and thought it was a good deal because there’s still an at least (100-X)% chance of it being a legitimate and profitable business? For what number X do you think it’s acceptable for EAs to accept money?
Fraud base rates are 1-2%; some companies end up highly profitable for their investors despite having committed fraud. Should EA accept money from YC startups? Should EA accept money from YC startups if they e.g. lied to their investors?
I think large-scale defrauding unsuspecting customers (who don’t share the upside from any risky gambles) is vastly worse than defrauding professional investors who are generally well-aware of the risks (and can profit from FTX’s risky gambles).
(I’m genuinely confused about this question; the main thing I’m confident in is that it’s not a very black-and-white kind of thing, and so I don’t want to make my bet about that.)
I don’t know the acceptable risk level either. I think it is clearly below 49%, and includes at least fraud against bondholders and investors that could reasonably be expected to cause them to lose money from what they paid in.
It’s not so much the status of the company as a fraud-commiter that is relevant, but the risk that you are taking and distributing money under circumstances that are too close to conversion (e.g., that the monies were procured by fraud and that the investors ultimately suffer a loss). I can think of two possible safe harbors under which other actors’ acceptance of a certain level of risk makes it OK for a charity to move forward:
In many cases, you could infer the maximum risk of fraud that the bondholders or other lenders were willing to accept from the interest rate minus inflation minus other risk of loss—that will usually reveal that bondholders at least were not factoring in more than a few percent fraud risk (a rough illustrative calculation is sketched at the end of this comment). The risk accepted by equity holders may be greater, but usually bondholders take a haircut in these types of situations—and the marginal dollars you’re spending would counterfactually have gone to them in preference to the equity holders. However, my understanding is that FTX didn’t have traditional bondholders.
If the investors were sophisticated, I think the percentage of fraud risk they accepted at the time of their investment is generally a safe harbor. For FTX, I don’t have any reason to believe this was higher than the single digits; as you said, the base rate is pretty low and I’d expect the public discourse pre-collapse to have been different if it were believed to be significantly higher.
However, those safe harbors don’t work if the charity has access to inside information (that bondholders and equity holders wouldn’t have) and that inside information updates the risk of fraud over the base rates adjusted for information known to the bond/equity holders. In that instance, I don’t think you can ride off of the investor/bondholder acceptance of the risk as low enough.
There is a final wrinkle here—for an entity as unregulated as FTX was, I don’t believe it is plausible to have a relatively high risk of investor fraud and a sufficiently low risk of depositor fraud. I don’t think people at high risk of cheating their investors can be deemed safe enough to take care of depositors. So in this case there is a risk of investor fraud that is per se unacceptable, and a risk of investor fraud that implies an unacceptable risk of depositor fraud. The acceptable risk of investor fraud is the lower of the two.
Exception: If you can buy insurance to ensure that no one is worse off because of your activity, there may be no maximum acceptable risk. Maybe that was the appropriate response under these circumstances—EA buys insurance against the risk of fraud in the amount of the donations, and returns that to the injured parties if there was fraud at the time of any donation which is discovered within a six-year period (the maximum statute of limitations for fraudulent conveyance in any U.S. state to my knowledge). If you can’t find someone to insure you against those losses at an acceptable rate . . . you may have just found your answer as to whether the risk is acceptable.
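To make that bondholder inference concrete, here is a minimal sketch with entirely made-up numbers (none of these figures are FTX’s actual terms, and the variable names are my own illustration of the reasoning above):

```python
# All figures below are hypothetical, for illustration only.
interest_rate = 0.08     # yield the lenders demanded
inflation = 0.03         # expected inflation over the period
other_loss_risk = 0.03   # non-fraud credit risk priced in
loss_given_fraud = 1.0   # assume lenders expect to recover nothing if fraud occurs

# Whatever premium is left over after inflation and ordinary credit risk
# bounds the fraud risk the lenders implicitly priced in.
implied_fraud_premium = interest_rate - inflation - other_loss_risk
implied_fraud_probability = implied_fraud_premium / loss_given_fraud
print(f"Implied annual fraud risk priced in by lenders: {implied_fraud_probability:.1%}")  # ~2%
```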
I am quite confident we knew he was unusually badly behaved by EA standards. I think a bunch of people thought he was not that far of an outlier by Silicon Valley founder standards, though I think they were wrong.
I do indeed think a bunch of people are taking this quite seriously. I do think that in general the FTX explosion is hugely changing a lot of people’s outlook on EA and how to relate to the world.
To be clear, I am quite confident the legal consequences for talking would be quite minor and have talked to some people with extensive legal experience about this. At this point there is no good legal reason to not talk.
I think people aren’t talking because it seems stressful and because they are worried that being more associated with FTX would be bad for PR. I also think a lot of people kind of picked up on a vibe that core people aren’t supposed to talk about FTX stuff because it makes us look bad, and because there used to be a bunch of organizational policies in place that did prevent talking, but I think those are mostly gone by now.
While I agree with a strict reading of this comment, I want to point out that there was another red flag around FTX/Alameda that several people in the EA leadership likely knew about since at least late 2021, which in my opinion was more severe than the matters discussed in the Time article and which convinced me back in 2021 that FTX/Alameda were putting a lot of effort into consistently lying to the public.
In particular, in October 2021, I witnessed (sentence spread across bullet points to give emphasis to each part of it):
A high-status, important EA (though not one of the very top people)
who had worked at Alameda after FTX was founded, and left before October 2021
publicly offhandedly implying that “FTX” was the new name for Alameda (seemingly unaware that they were supposed to be distinct companies)
in a place where another very high-status EA (this time probably one of the very top people, or close to it) clearly took note of it
I won’t give more details publicly, in order to protect the person’s privacy, but this happened in a very public place.
It wasn’t just me who was aware of this. Nate Soares reported having heard the rumor that “Alameda Research had changed its name to FTX” as well, though I think he left out one important aspect of it: that this rumor was being shared by former insiders, not by e.g. random clueless crypto people.
In case you don’t understand why the rumor was a big deal, I explained it in my comment in Nate Soares’s post. Quoting from it:
I suspect you will not be very impressed by this, and ask me why I didn’t share my concerns widely at the time. But I was just a low-status person with no public platform and only one or two friends. I shared my concerns with my partner (in fact, more than once, because I was so puzzled by that comment) but not with people I’m not close to. [ETA: in retrospect, I think a more correct explanation would be to say that I probably stayed silent because I guessed I’d lose status if I’d spoken up. 🙁]
I’m not sure why this wasn’t taken seriously by the EA leadership. This seems to be a pretty clear example of the FTX/Alameda leadership blatantly lying to the public about their internal workings and prominent EAs knowing about that.
I of course knew that FTX and Alameda were very closely related, that Sam and Caroline were dating and that the two organizations had very porous boundaries. But I did not know that the rest of the world did not know this, and I had no idea that it mattered much legally. They were both located in the Bahamas, and I definitely did not know enough finance law to know that these two organizations were supposed to operate at arms length.
Maybe there was someone else who did successfully put the pieces together and then stayed quiet, but I would definitely not treat this as a “smoking gun” since I saw it myself, as did many people I know, and neither me nor other people realized that this was actually pretty shady.
And legally they actually were distinct companies, at least in the same way GiveWell and OpenPhil were different companies back in 2018 when they still worked in the same office and shared some leadership, and that itself doesn’t raise any flags for me. FTX and Alameda definitely were different companies, with some offices being just for Alameda (like the one in Berkeley), though they definitely did not operate “at arm’s length” as I think the legal situation required.
I’m similar. In general I’m noticing a mismatch between how the article leaves me feeling versus what it leaves me thinking. E.g. concerns about the other half of the FTX/Alameda split are labelled as just “internal politics,” but when EA leaders treat the concerns about SBF as “typical startup squabbles” that’s labelled “downplaying,” “rationalizing,” or “dismissing.” (Obviously with the benefit of hindsight we think that’s fair, but we don’t know how different the two sides actually looked to outsiders at the time.)
By the way, I really like your approach of separating out feelings and thoughts.
What do you mean by “concerns about the other half of the FTX/Alameda split”?
“[Name], who had perhaps raised the loudest concerns about Bankman-Fried, was distrusted by some EA leaders because of internal politics during her time at the Centre for Effective Altruism”
I tried to address this argument with the point about every other long-time EA leaving Alameda for the same reasons. I’ve avoided naming those other EAs out of respect for their privacy, but they include multiple very core and well-respected EAs. The parallel you’re trying to draw here just really doesn’t hold up.
I don’t think you should.