The scale of the harm from the fraud committed by Sam Bankman-Fried and the others at FTX and Alameda is difficult to comprehend. Over a million people lost money; dozens of projects’ plans were thrown into disarray because they could not use funding they had received or were promised; the reputational damage to EA has made the good that thousands of honest, morally motivated people are trying to do that much harder. On any reasonable understanding of what happened, what they did was deplorable. I’m horrified by the fact that I was Sam’s entry point into EA.
In these comments, I offer my thoughts, but I don’t claim to be the expert on the lessons we should take from this disaster. Sam and the others harmed me and people and projects I love, more than anyone else has done in my life. I was lied to, extensively, by people I thought were my friends and allies, in a way I’ve found hard to come to terms with. Even though a year and a half has passed, it’s still emotionally raw for me: I’m trying to be objective and dispassionate, but I’m aware that this might hinder me.
There are four categories of lessons and updates:
Undoing updates made because of FTX
Appreciating the new world we’re in
Assessing what changes we could make in EA to make catastrophes like this less likely to happen again
Assessing what changes we could make such that EA could handle crises better in the future
On the first two points, the post from Ben Todd is good, though I don’t agree with all of what he says. In my view, the most important lessons when it comes to the first two points, which also have bearing on the third and fourth, are:
Against “EA exceptionalism”: without evidence to the contrary, we should assume that people in EA are about average (given their demographics) on traits that don’t relate to EA. Sadly, that includes things like likelihood to commit crimes. We should be especially cautious to avoid a halo effect — assuming that because someone is good in some ways, like being dedicated to helping others, then they are good in other ways, too, like having integrity.
Looking back, there was a crazy halo effect around Sam, and I’m sure that will have influenced how I saw him. Before advising Future Fund, I remember asking a successful crypto investor — not connected to EA — what they thought of him. Their reply was: “He is a god.”
In my own case, I think I’ve been too trusting of people, and in general too unwilling to countenance the idea that someone might be a bad actor, or be deceiving me. Given what we know now, it was obviously a mistake to trust Sam and the others, but I think I’ve been too trusting in other instances in my life, too. I think in particular that I’ve been too quick to assume that, because someone indicates they’re part of the EA team, they are thereby trustworthy and honest. I think that fully improving on this trait will take a long time for me, and I’m going to bear this in mind when deciding which roles I take on in the future.
Presenting EA in the context of the whole of morality.
EA is compatible with very many different moral worldviews, and this ecumenicism was a core reason why EA was defined as it was. But people have often conflated EA with naive utilitarianism: the view that promoting wellbeing is the *only* thing that matters.
Even on pure utilitarian grounds, you should take seriously the wisdom enshrined in common-sense moral norms, and be extremely sceptical if your reasoning leads you to depart wildly from them. There are very strong consequentialist reasons for acting with integrity and for being cooperative with people with other moral views.
But, what’s more, utilitarianism is just one plausible moral view among many, and we shouldn’t be at all confident in it. Taking moral uncertainty into account means taking seriously the consequences of your actions, but it also means respecting common-sense moral prohibitions.[1]
I could have done better in how I’ve communicated on this score. In the past, I’ve emphasised the distinctive aspects of EA, treated the conflation with naive utilitarianism as a confusion on others’ part, and treated the response to it as an afterthought rather than as something built into the core of how I talk about the ideas. I plan to change that, going forward — emphasising more the whole of morality, rather than just the most distinctive contributions that EA makes (namely, that we should be a lot more benevolent and a lot more intensely truth-seeking than common-sense morality suggests).
Going even further on legibly acting in accordance with common-sense virtues than one would otherwise, because onlookers will be more sceptical of people associated with EA than they were before.
Here’s an analogy I’ve found helpful. Suppose it’s a 30mph zone, where almost everyone in fact drives at 35mph. If you’re an EA, how fast should you drive? Maybe before it was ok to go at 35, in line with prevailing norms. Now I think we should go at 30.
Being willing to fight for EA qua EA.
FTX has given people an enormous stick to hit EA with, and it has meant that a lot of people have wanted to disassociate themselves from EA. This will result in less work going towards the most important problems in the world today — yet another of the harms that Sam and the others caused.
But it means we’ll need, more than ever, people who believe that the ideas are true and important to be willing to stick up for them, even in the face of criticism that’s often unfair and uncharitable, and sometimes downright mean.
On the third point — how to reduce the chance of future catastrophes — the key thing, in my view, is to pay attention to people’s local incentives when trying to predict their behaviour, in particular looking at the governance regime they are in. Some of my concrete lessons, here, are:
You can’t trust VCs or the financial media to detect fraud.[2] (Indeed, you shouldn’t even expect VCs to be particularly good at detecting fraud, as it’s often not in their self-interest to do so; I found Jeff Kaufman’s post on this very helpful.)
The base rates of fraud are surprisingly high (here and here).
We should expect the base rate to be higher in poorly-regulated industries.
The idea that a company is run by “good people” isn’t sufficient to counterbalance that.
In general, people who commit white collar crimes often have good reputations before the crime; this is one of the main lessons from Eugene Soltes’s book Why They Do It (see the rough calculation below for why a good reputation should do little to reassure us).
In the case of FTX: the fraud was committed by Caroline, Gary and Nishad, as well as Sam. Though some people had misgivings about Sam, I haven’t heard the same about the others. In Nishad’s case in particular, the comments I’ve heard about his character have universally been that he seemed kind, thoughtful and honest. Yet that wasn’t enough.
(This is all particularly on my mind when thinking about the future behaviour of AI companies, though recent events also show how hard it is to get governance right so that it’s genuinely a check on power.)
In the case of FTX, if there had been better aggregation of people’s opinions on Sam, that might have helped a bit, though, as I note in another comment, there was a widespread error in thinking that the 2018 misgivings were wrong or that he’d matured. But what would have helped a lot more, in my view, was knowing how poorly governed the company was — there wasn’t a functional board, or a risk department, or a CFO.
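To make the point about reputations concrete, here is a rough Bayesian sketch. The specific numbers are purely illustrative assumptions on my part, not estimates drawn from the sources cited above:

$$P(\text{fraud} \mid \text{good rep}) = \frac{P(\text{good rep} \mid \text{fraud})\,P(\text{fraud})}{P(\text{good rep} \mid \text{fraud})\,P(\text{fraud}) + P(\text{good rep} \mid \text{no fraud})\,P(\text{no fraud})}$$

Suppose the base rate of serious fraud in a poorly regulated industry is 2%, that 80% of eventual fraudsters have good reputations beforehand (Soltes’s point), and that 90% of honest operators do. Then the posterior probability of fraud, given a good reputation, is $0.8 \times 0.02 \,/\, (0.8 \times 0.02 + 0.9 \times 0.98) \approx 1.8\%$: barely below the 2% base rate. On these assumptions, a good reputation is very weak evidence against fraud, precisely because fraudsters usually have good reputations too.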
On how to respond better to crises in the future: I think there’s a lot. I currently have no formal responsibilities over any community organisations, and do only limited informal advising,[3] so I’ll primarily let Zach (once he’s back from vacation) or others comment in more depth on lessons learned from this, as well as changes that are being made, and planned to be made, across the EA community as a whole.
But one of the biggest lessons, for me, is decentralisation, and ensuring that people and organisations have clearer separation in their roles and activities than they have had in the past. I wrote about this more here. (Since writing that post, though, I now lean more towards thinking that someone should “own” managing the movement, and that that should be the Centre for Effective Altruism. This is because there are gains from “public goods” in the movement that won’t be provided by default, and because I think Zach is going to be a strong CEO who can plausibly pull it off.)
In my own case, at the time of the FTX collapse, I was:
On the board of EV
An advisor to Future Fund
The most well-known advocate of EA
But once FTX collapsed, these roles interfered with each other. In particular, being on the board of EV and an advisor to Future Fund majorly impacted my ability to defend EA in the aftermath of the collapse and to help the movement try to make sense of what had happened. In retrospect, I wish I’d started building up a larger board for EV (then CEA), and transitioned out of that role, as early as 2017 or 2018; this would have made the movement as a whole more robust.
Looking forward, I’m going to stay off boards for a while, and focus on research, writing and advocacy.
I give my high-level take on what generally follows from taking moral uncertainty seriously, here: “In general, and very roughly speaking, I believe that maximizing expected choice-worthiness under moral uncertainty entails something similar to a value-pluralist consequentialism-plus-side-constraints view, with heavy emphasis on consequences that impact the long-run future of the human race.”
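For concreteness, here is a rough sketch of the standard formalisation of maximising expected choice-worthiness (this assumes choice-worthiness can be compared across moral theories, which is itself a contested assumption): choose whichever option $A$ maximises

$$EC(A) = \sum_i C(T_i) \cdot CW_i(A),$$

where $C(T_i)$ is one’s credence in moral theory $T_i$ and $CW_i(A)$ is how choice-worthy $A$ is according to $T_i$. On this picture, a common-sense prohibition functions as a theory (or cluster of theories) that assigns very low choice-worthiness to violating it, which can dominate the sum even if one’s credence in it is modest.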
There’s a knock against prediction markets, here, too. A Metaculus forecast, in March of 2022 (the end of the period when one could make forecasts on this question), gave a 1.3% chance of FTX making any default on customer funds over the year. The probability that the Metaculus forecasters would have put on the claim that FTX would default on very large numbers of customer funds, as a result of misconduct, would presumably have been lower.
More generally, I’m trying to emphasise that I am not the “leader” of the EA movement, and, indeed, that I don’t think that the EA movement is the sort of thing that should have a leader. I’m still in favour of EA having advocates (and, hopefully, very many advocates, including people who become far better-known than I am), and I plan to continue to advocate for EA, but I see that as a very different role.
Going even further on legibly acting in accordance with common-sense virtues than one would otherwise, because onlookers will be more sceptical of people associated with EA than they were before.
Here’s an analogy I’ve found helpful. Suppose it’s a 30mph zone, where almost everyone in fact drives at 35mph. If you’re an EA, how fast should you drive? Maybe before it was ok to go at 35, in line with prevailing norms. Now I think we should go at 30.
Wanting to push back against this a little bit:
The big issue here is that SBF was recklessly racing ahead at 60mph, and EAs who saw that didn’t prevent him from doing so. So I think the main lesson here is that EAs should learn to become strict enforcers of 35mph speed limits among their collaborators, which requires courage and skill in speaking out, rather than simply being strictly law-abiding themselves.
The vast majority of EAs were/are reasonably law-abiding and careful (going at 35mph) and it seems perfectly fine for them to continue the same way. Extra trustworthiness signalling is helpful insofar as the world distrusts EAs due to what happened at FTX, but this effect is probably not huge.
EAs will get less done, be worse collaborators, and lose out on entrepreneurial talent if they become overly cautious. A non-zero level of naughtiness is often desirable, though this is highly domain-dependent.
I hear Will not as saying that going 35mph is in itself wrong in this analogy (necessarily), but that EA is now more-than-average vulnerable to attack and mistrust, so we need to signal our trustworthiness more clearly than others do.
Since writing that post, though, I now lean more towards thinking that someone should “own” managing the movement, and that that should be the Centre for Effective Altruism.
I agree with this. Failing that, I feel strongly that CEA should change its name. There are costs to having a leader / manager / “coordinator-in-chief”, and costs to not having such an entity; but the worst of both worlds is to have ambiguity about whether a person or org is filling that role. Then you end up with situations like “a bunch of EAs sit on their hands because they expect someone else to respond, but no one actually takes the wheel”, or “an org gets the power of perceived leadership, but has limited accountability because it’s left itself a lot of plausible deniability about exactly how much of a leader it is”.
we should be a lot more benevolent and a lot more intensely truth-seeking than common-sense morality suggests
It concerns me a bit that when legal risk appears suddenly everyone gets very pragmatic in a way that I am not sure feels the same as integrity or truth-seeking. It feels a bit similar to how pragmatic we all were around FTX during the boom. Feels like in crises we get a bit worse at truth-seeking and integrity, though I guess many communities do. (Sometimes it feels like in a crisis you get to pick just one thing, and I am not convinced the thing the EA community picks is integrity or truth-seeking.)
Also I don’t really trust my own judgement here, but while EA may feel more decentralised, a lot of the orgs feel even more centralised around OpenPhil, which feels a bit harder to contact and is doing more work internally. This is their prerogative I guess, but still.
I am sure being a figurehead of EA has had a lot of benefits (not all of which I guess you wanted), but I strongly sense it has also had a lot of really large costs. Thank you for your work. You’re a really talented communicator and networker, and at this point probably a skilled board member, so I hope that doesn’t get lost in all this.
There’s a knock against prediction markets, here, too. A Metaculus forecast, in March of 2022 (the end of the period when one could make forecasts on this question), gave a 1.3% chance of FTX making any default on customer funds over the year. The probability that the Metaculus forecasters would have put on the claim that FTX would default on very large numbers of customer funds, as a result of misconduct, would presumably have been lower.
Metaculus isn’t a prediction market; it’s just an opinion poll of people who use the Metaculus website.
Agree with “not a prediction market”, but I think “just an opinion poll” undersells it; people are evaluated and rewarded on their accuracy.
Fair! That’s at least a super nonstandard example of an “opinion poll”.