After listening, here are my thoughts on the podcast (times refer roughly to YouTube timestamps):[1]
Recap[2]
Neither Sam nor Will thinks less of EA principles because of the FTX collapse. Sam says so in the intro (4:40), and towards the end mentions how unhappy he was to see people "dancing on the grave of EA" (59:17); he likens the simple rejection of EA because of SBF to rejecting vegetarianism because of Hitler's diet (57:10). Will also agrees that FTX hasn't made him reconsider EA principles (56:24), and says that the FTX debacle happened in spite of, not because of, EA (54:52).[3]
On whether the beliefs of EA are responsible, Will says that longtermism is not the reason, since one could justify maximalist and extreme behaviours for more near-termist concerns as well (1:07:32). He also thinks that EA communicators consistently said that the ends do not justify the means (54:58).
Will is concerned that, in attempting to get into the mind of SBF and explain his behaviour, there's a danger of looking too exculpatory of him, and wants to make it clear that the collapse of FTX caused huge damage (14:15). It's clear in the following section that he's read a lot of the evidence submitted to the case, including Twitter messages from people who lost dearly-needed savings in FTX.
However, some of the podcast does come across that way. He recounts that his impression of SBF since 2012 was of "a very thoughtful, incredibly morally motivated person" (37:29), and his reflections make him seem quite credulous of the explanations he was given (40:23).[4] He himself notices, about halfway through the podcast, that his reflections are sounding defensive of SBF (42:01).
The Sam Harris subreddit has had an overwhelmingly negative reaction to the podcast. Many people there are basically accusing Sam and Will of either continuing to be duped or of running PR for SBF. Of course this is a self-selected, overly-online community, but I think it underscores the anger and incredulity over what happened here, and how careful and tactful the messaging around this needs to be.
Sam offers two major theories as to why SBF did what he did (11:52):
It was a conscious fraud and long-con from the beginning, where SBF used EA as a cover to gain influence/power/goodwill
Due to SBF's beliefs about ethics and his approach to risk/probability, he was placing a series of bets that were likely to fail eventually, and that eventually blew up in his face.
Will actually rejects both of the above (18:17). He doesn't think SBF's behaviour can be framed as a calculated decision, because as a calculation it simply doesn't make sense (27:31), and the "long-con" explanation can't account for why FTX courted regulation and press attention, or even why it was associated with EA at all (24:10). Instead, he believes the main culprit was SBF's hubris (28:11). He also notes SBF's unusual risk tolerance (23:04) and suggests the cause was not a risk calculation gone wrong but an overall attitude to risk (29:47).
Will seemingly derives a lot of his explanations from the work of Eugene Soltes, a Harvard Business School professor, especially his book Why They Do It, which he references throughout the podcast.
While Will says he rejects both of Sam's original propositions (theory 1 and 2), sometimes he shades into backing theory 2, such as when he says that SBF really did believe in the ideas of EA (49:46).
While Sam is more agnostic about what exactly was driving SBF, both mention throughout how seriously SBF took ideas. Will notes that SBF found the idea of earning to give compelling well before FTX existed (8:16); Sam cites, as a red flag, SBF's interview with Tyler Cowen in which he said he'd accept St Petersburg paradox-style bets forever (21:22). Sam raises SBF's neuro-atypical background as another potential red flag (35:31), but Will didn't seem to buy it and the conversation moves on pretty quickly.
The conversation ends with a discussion of the effects of the whole saga on EA and what changes there have been. Will says that there has been huge harm to EA and that many now think ill of it (54:23).
Will thinks that EA should have been, and should be, more decentralised (58:24). He also had legal guidance (or instruction?) telling him not to share information or his perspective during the internal investigation (58:20). The combination of these meant that he wasn't able to provide the guidance to the EA community that he wanted to during the crisis (58:47).
Will often refers to GWWC and the 9000+ pledgers as normal people living normal lives (1:03:23), and says that EAs are "pretty normal people" who are willing to put their money and time where their mouth is (1:05:31).
Will mentions that the FTX collapse was incredibly hard on him, that it was the hardest year and a half of his life, and that he nearly lost motivation for the EA project altogether (1:16:44).
He thinks that the strongest critique of longtermism he's come across since WWOTF was published isn't the Pascal's-mugging-type stuff, but that AI risks might be much more of a near-term threat than a long-term one (1:10:44); that AI x-risk concerns have been increasingly vindicated in recent years (1:11:38); and that intelligence-explosion arguments of the I. J. Good variety are holding up and deserve consideration (1:14:57).
Personal Thoughts
So there's not an actual deep dive into what happened with SBF and FTX, or into how much Will or other figures in EA actually knew. Perhaps the podcast was trying to cover too much ground in 80 minutes, or perhaps Sam didn't want to come off as too hostile a host? I feel like both are talking about the whole thing at an oddly abstract level, without referencing the evidence that's come out in court.
While I also agree with both that EA principles are still good, and that most EAs are doing good in the world, there's clearly a connection between EA (or at least a bastardised, naïvely maximalist view of it) and SBF. The prosecution and the judge seemed to take the view that SBF was at high risk of doing the same or a similar thing again in the future, and that he has not shown remorse. This makes sense if SBF was acting the way he did because he thought he was doing the right thing, and the fact that it was an attitude rather than a "rational calculation" doesn't make it any less driven by ideas.
So I think that's where I've ended up on this. (I'm not an expert on the financials, on what precise laws FTX broke, on how their attempted scheme operated, or on how often they were brazenly lying; it feels like those with an outside view are pretty damn negative on SBF.) I think trying to pin the label "EA" or "not EA" on what SBF and the FTX team believed is pretty unhelpful. I think Ellison and SBF had a very naïve, maximalist view of the world and their actions. They believed they had special ability and knowledge to act in the world, and to break existing rules and norms in order to make the world better as they saw it, even if this incurred high risks, so long as their expectation was that it would work out in EV terms. An additional error here, and perhaps where the "hubris" theory does come into play, is that there was no mechanism to correct these beliefs. Even after the whole collapse, and a 25-year sentence, it still seems to me that SBF thinks he made the "right" call and got unlucky.
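To make the "attitude to risk" point concrete, here is a toy simulation of my own (the numbers are invented for illustration; nothing below is from the podcast or the trial). An agent repeatedly goes all-in on a double-or-nothing bet with positive expected value, roughly the St Petersburg-ish wager SBF told Cowen he'd keep taking. Expected wealth grows every round, yet the probability of eventual ruin tends to 1, because the expectation is carried by ever-rarer jackpot paths:

```python
import random

# Toy illustration with invented parameters: repeatedly going all-in
# on a positive-EV, double-or-nothing bet.
WIN_PROB = 0.51       # hypothetical 51% chance to double, 49% to lose it all
ROUNDS = 10           # number of consecutive all-in bets
TRIALS = 100_000      # Monte Carlo sample size

ruined = 0
total_wealth = 0.0
for _ in range(TRIALS):
    wealth = 1.0
    for _ in range(ROUNDS):
        if random.random() < WIN_PROB:
            wealth *= 2.0     # win: wealth doubles
        else:
            wealth = 0.0      # lose: total ruin
            break
    ruined += wealth == 0.0
    total_wealth += wealth

# Each round multiplies expected wealth by 2 * 0.51 = 1.02, so the mean
# grows (1.02**10 is about 1.22), but P(ruin) = 1 - 0.51**10 is about
# 0.9988, and it tends to 1 as the number of rounds grows.
print(f"P(ruin)     = {ruined / TRIALS:.4f}")
print(f"mean wealth = {total_wealth / TRIALS:.3f}  (theory: {1.02**ROUNDS:.3f})")
```

An expected-value maximiser keeps betting; anyone with ordinary risk aversion (or even a Kelly-style rule that never stakes the whole bankroll) stops well short of all-in. That seems to be the distinction Will is drawing between a "calculation gone wrong" and an overall attitude to risk.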
My takeaway is that this cluster of beliefs[5] is dangerous and the EA community should develop an immune system to reject these ideas. Ryan Carey refers to this as "risky beneficentrism", and I think part of "Third-Wave" EA should be about rejecting this cluster of ideas, making this publicly known, and disassociating EA from the individuals, leaders, or organisations who still hold on to it in the aftermath of this entire debacle.
For clarity: Sam refers to Sam Harris, and SBF refers to Sam Bankman-Fried.
Not necessarily in order; I've tried to group similar points together.
I think this makes some sense if you view EA as a set of ideas/principles, less so if you view EA as a set of people and organisations.
During this section especially, I kinda wanted to shout at my podcast: when Will asked rhetorically "was he lying to me that whole time?", the answer is yes, Will, it seems like he was. The code snippets from Nishad Singh and Gary Wang that the prosecution shared are pretty damning, for example.
See the link in the text to Ryan Carey's post. But the main dangerous ideas, to my mind, are:
1) Naïve consequentialism
2) The ability and desire to rapidly change the world
3) A rejection of existing norms and common-sense morality
4) No uncertainty about either the values above or the empirical consequences of their actions
5) Most importantly, no feedback or error correction mechanism for any of the above.
Thanks for this summary. I listened to this yesterday & browsed through the SH subreddit discussion, and I'm surprised that this hasn't received much discussion on here. Perhaps the EA community is talked out about this subject, which, fair enough. But as far as I can tell, it's one of Will's first extended public remarks on SBF, so it feels discussion-worthy to me.
I agree that the discussion was oddly vague given all the actual evidence we have. I don't feel like going into much detail, but a few things I noticed:
It seems that Will is still somewhat in denial that SBF was a fraud. I guess this is a perfectly valid opinion (Will knows SBF), but I can't help but feel that this is naive (or measured, if we're to be charitable). We can quibble over his reasoning, but fraud is fraud, and SBF committed a lot of it, repeatedly and at scale. He doesn't have to be good at fraud to be one.
They barely touch on the dangers of a "maximalist" EA. If Will doesn't believe SBF was fraudulent for regular greedy reasons, then EA may have played a part, and that's worth considering. No need to be overly dramatic here; the average EA is well-intentioned and not going to do anything like this… but as long as EA is centralized and reliant on large donors, it's something we need to think through further.
The podcast is a reminder of how difficult it is to understand motivations, and how difficult it is for "good" people to understand what motivates "bad" actions (the scare quotes are to acknowledge the vast gray areas here). Given that there are a lot of neuro-atypical EAs, the community seems desensitized to potential red flags like claiming not to feel love. This is my hobbyhorse: like any community, EA has smart, capable people that I wouldn't ever want to have power. It sounds like there were people who felt this way about SBF, including a cofounder. It's a bit shocking to me that EA leadership didn't see SBF as much of a liability.
Thanks for the summary. One nitpick:
To be fair to Will, he does acknowledge that Nishad's insurance-fund code "seems like really quite clear fraud", if comparatively minor.
As for the other code snippets in your link (the "backdoor"): Nishad and Gary said the intention was to support Alameda's role as a backstop liquidity provider (which FTX was heavily dependent on in its early days to function):
"[Wang] testified about several changes Bankman-Fried asked him to make to FTX's software code to allow Alameda to withdraw unlimited funds from the exchange. ... Wang agreed that the changes were necessary for Alameda to provide liquidity on the exchange" (Reuters)
"Singh also acknowledged that he originally thought some of the special treatment Bankman-Fried's trading firm Alameda Research received on FTX was meant to protect customers by allowing it to more effectively 'backstop' some trades. 'My view at the time [was that] it would be helpful for customers,' Singh said" (Financial Times)
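One way to see what's at issue mechanically: the disputed change amounts to an exemption from the exchange's normal balance checks for one account. Below is a purely hypothetical sketch of what such a flag could look like. This is not the actual FTX code; every name here is invented for illustration:

```python
from dataclasses import dataclass

# Purely hypothetical sketch (NOT the actual FTX code): an invented
# "exempt account" flag in an exchange's withdrawal checks.

@dataclass
class Account:
    name: str
    balance: float
    can_go_negative: bool = False  # the kind of special-treatment flag in dispute

def withdraw(account: Account, amount: float) -> None:
    """Debit an account, enforcing the negative-balance check for everyone
    except accounts flagged as exempt (e.g. a designated liquidity backstop)."""
    if not account.can_go_negative and amount > account.balance:
        raise ValueError(f"{account.name}: insufficient balance")
    account.balance -= amount

retail = Account("retail_user", balance=100.0)
backstop = Account("liquidity_provider", balance=100.0, can_go_negative=True)

withdraw(backstop, 1_000.0)    # allowed: balance goes to -900.0
print(backstop.balance)

try:
    withdraw(retail, 1_000.0)  # blocked for a normal account
except ValueError as err:
    print(err)
```

On that framing, the question the testimony above is fighting over isn't whether such an exemption existed (designated market-makers often do get special treatment) but whose money covers the hole when the flagged account draws its balance negative.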