In summarising Why They Do It, Will says that, usually, most fraudsters aren’t just “bad apples” or doing “cost-benefit analysis” on their risk of being punished. Rather, they fail to “conceptualise what they’re doing as fraud”. And that may well be true on average, but we know quite a lot about the details of this case, which I believe point us in a different direction.
In this case, the other defendants have said they knew what they were doing was wrong: they were misappropriating customers’ assets and investing them. That weighs somewhat against the misconceptualisation hypothesis, albeit without ruling it out as a contributing factor.
On the other hand, we have some support for the bad apples idea. SBF has said:
In a lot of ways I don’t really have a soul. This is a lot more obvious in some contexts than others. But in the end there’s a pretty decent argument that my empathy is fake, my feelings are fake, my facial reactions are fake.
So I agree with Spencer that SBF was at least deficient in affective experience, whether or not he was psychopathic.
Regarding cost-benefit analysis, I would tend to agree with Will that it’s unlikely that SBF and company made a detailed calculation of the costs and benefits of their actions (and clearly they calculated incorrectly if they did), although the perceived costs and benefits could also be a contributing factor.
So based on the specific knowledge of the case, I think that the bad apples hypothesis makes more sense than the cost-benefit and misconceptualisation hypotheses.
There is also a fourth category worth considering—whether SBF’s views on side constraints were a likely factor—and I think overwhelmingly yes. Sure, as Will points out, SBF may have commented approvingly on a recent article about side constraints. But more recently, he referred to ethics as “this dumb game we woke Westerners play where we say all the right shibboleths and so everyone likes us.” Furthermore, if we’re doing Facebook archaeology, we should also consider his earlier writing. In May 2012, SBF wrote about the idea of stealing to give:
I’m not sure I understand what the paradox is here. Fundamentally if you are going to donate the money to [The Humane League] and he’s going to buy lots of cigarettes with it it’s clearly in an act utilitarian’s interest to keep the money as long as this doesn’t have consequences down the road, so you won’t actually give it to him if he drives you. He might predict this and thus not give you the ride, but then your mistake was letting Paul know that you’re an act utilitarian, not in being one. Perhaps this was because you’ve done this before, but then not giving him money the previous time was possibly not the correct decision according to act utilitarianism, because although you can do better things with the money than he can, you might run in to problems later if you keep in. Similarly, I could go around stealing money from people because I can spend the money in a more utilitarian way than they can, but that wouldn’t be the utilitarian thing to do because I was leaving out of my calculation the fact that I may end up in jail if I do so.
…
As others have said, I completely agree that in practice following rules can be a good idea. Even though stealing might sometimes be justified in the abstract, in practice it basically never is because it breaks a rule that society cares a lot about and so comes with lots of consequences like jail. That being said, I think that you should, in the end, be an act utilitarian, even if you often think like a rule utilitarian; here what you’re doing is basically saying that society puts up disincentives for braking rules and those should be included in the act utilitarian calculation, but sometimes they’re big enough that a rule utilitarian calculation approximates it pretty well in a much simpler fashion.
I’m sure people will interpret this passage in different ways. But it’s clear that, at least at this point in time, he was a pretty extreme act utilitarian.
Taking this and other information on balance, it seems clear in retrospect that a major factor was that SBF didn’t take side constraints that seriously.
Of course, most of this information wasn’t available or wasn’t salient in 2022, so I’m not claiming that we should have necessarily worried based on it. Nor am I implying that improved governance is not a part of the solution. Those are further questions.
Will says that, usually, most fraudsters aren’t just “bad apples” or doing “cost-benefit analysis” on their risk of being punished. Rather, they fail to “conceptualise what they’re doing as fraud”.
Great comment. I agree with your analysis, but I think Will also sets up a false dichotomy. One’s inability to conceptualize or realize that one’s actions are wrong is itself a sign of being a bad apple. To simplify a bit: at one end of the “high integrity to really bad” continuum, you have morally scrupulous people who constantly wonder whether their actions are wrong. At the other end, you have pathological narcissists whose self-image/internal monologue is so out of whack with reality that they cannot even conceive of themselves doing anything wrong. That doesn’t make them great people. If anything, it makes them scarier.
Generally, the internal monologue of the most dangerous types of terrible people (think Hitler, Stalin, Mao, etc.) doesn’t go like “I’m so evil and just love to hurt everyone, hahahaha”. My best guess is that, in most cases, it goes more like “I’m the messiah, I’m so great and I’m the only one who can save the world. Everyone who disagrees with me is stupid and/or evil and I have every right to get rid of them.” [1]
Of course, there are people whose internal monologues are more straightforwardly evil/selfish (though even here lots of self-delusion is probably going on) but they usually end up being serial killers or the like, not running countries.
Also, later, when Will talks about bad apples, he mentions that “typical cases of fraud [come] from people who are very successful, actually very well admired”, which again suggests that “bad apples” are not very successful or well admired. Well, again, many terrible people were extremely successful and admired. Like, you know, Hitler, Stalin, Mao, etc.
Nor am I implying that improved governance is not a part of the solution.
Yep, I agree. In fact, the whole character vs. governance thing seems like another false dichotomy to me. You want to have good governance structures but the people in relevant positions of influence should also know a little bit about how to evaluate character.
In general, bad character is compatible with genuine moral convictions. Hitler, for example, was vegetarian for moral reasons and “used vivid and gruesome descriptions of animal suffering and slaughter at the dinner table to try to dissuade his colleagues from eating meat”. (Fraudster/bad apple vs. person with genuine convictions is another false dichotomy that people keep setting up.)
(This comment is basically just voicing agreement with points raised in Ryan’s and David’s comments above.)
One of the things that stood out to me about the episode was the argument[1] that working on good governance and working on reducing the influence of dangerous actors are mutually exclusive strategies, and that the former is much more tractable and important than the latter.
Most “good governance” research to date also seems to focus on system-level interventions,[2] while interventions aimed at reducing the impacts of individuals are very neglected, at least according to this review of nonprofit scandals:
It is notable that all the preventive tactics that have been studied and championed—audits, governance practices, internal controls—are aimed at the organizational level. It makes sense to focus on this level, as it is the level that managers have most control over. Prevention can also be implemented at the individual and sectoral levels. Training of staff, job-level checks and balances, and staff evaluations could all help prevent violations with individual-level causes. Sector-level regulation and oversight is becoming common in many countries. We, therefore, encourage future research on preventive measures to take a multilevel perspective, or at least consider the neglected sectoral and individual levels.
Six years before that review, this article called for psychopathy screening for public leadership positions (one potential approach to intervention at the “individual level,” to adopt the review’s terminology).[3]
This leads me to wonder: what are the most compelling reasons for the lack of research (so far) on interventions to reduce the impact of dangerous actors, and which (if any) of these reasons provide strong arguments against doing at least some research in this neglected area? I think there are lots of possible answers here,[4] but none of them seem strong enough to justify how little research this area has received, relative to the scale of the problem.
Here’s a quote from the episode (courtesy of Wei Dai’s transcript) demonstrating this claim:
[Will MacAskill:] There’s really two ways of looking at things: you might ask…is this a bad person—are we focusing on the character? Or you might ask…what oversight, what feedback mechanisms, what incentives does this person face? And yeah, one thing I’ve really taken away from this is to place even more weight than I did before on just the importance of governance, where that means the, you know, importance of people acting with oversight, with the feedback mechanisms and you know, with incentives to incentivize kind of good rather than bad behavior…
I agree that all these aspects of governance are important, but disagree that working on these things would entirely protect an organization from the negative impacts of malevolent actors.
To be clear, I am glad people are working on system-level solutions to low integrity and otherwise harmful behaviors, but I think it would be helpful if it wasn’t the *only* class of interventions that had substantial amounts of resources directed towards them.
Interestingly, one of the real-life cases Boddy refers to in support of his argument is the Enron scandal, a case which was also covered in the book Will MacAskill was talking about, Why They Do It.
Here are some of the reasons I’ve already thought about (listed roughly in order from most to least convincing to me as a reason to be pessimistic about this approach to risk reduction): potential lack of tractability; lower levels of social and political acceptability/feasibility; lack of existing evidence as to what methods work, to what extent, and in which contexts; and perhaps a perception that the problem (of dangerous actors) is small in scale. I’d be interested to know which (if any) of these reasons are the most important, and if there are other considerations I’m overlooking. Overall, despite these reasons against working on it, I still think this area is worth investigating to a greater extent than it has been to date.
Quote: (and clearly they calculated incorrectly if they did)
I am less confident that, if an amoral person applied cost-benefit analysis properly here, it would lead to “no fraud” as opposed to “safer amounts of fraud.” The risk of getting caught would seem considerably lower for less extreme or less risky fraud.
Hypothetically, say SBF misused customer funds to buy stocks and bonds, and limited the amount he misused to 40 percent of customer assets. He’d need a catastrophic stock/bond market crash, plus almost all depositors wanting out, to be unable to honor withdrawals. I guess there is still the risk of a leak.
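To make the arithmetic behind this hypothetical explicit, here’s a minimal sketch in Python. All numbers are the illustrative assumptions from this thread (40% of deposits misused, a 25–50% fire-sale loss), not claims about any real balance sheet:

```python
# Toy solvency arithmetic for the hypothetical above.
# Assumption (from the scenario, not real data): 40% of customer
# deposits are secretly invested; the remaining 60% is held liquid.

def honorable_fraction(misused: float, asset_loss: float) -> float:
    """Fraction of total deposits the exchange could return on demand,
    given the fraction misused and the loss taken when liquidating."""
    liquid = 1.0 - misused                     # held 1:1 against deposits
    recovered = misused * (1.0 - asset_loss)   # fire-sale value of investments
    return liquid + recovered

for loss in (0.25, 0.50):
    frac = honorable_fraction(0.40, loss)
    print(f"asset loss {loss:.0%}: can honor {frac:.0%} of deposits")

# Output:
#   asset loss 25%: can honor 90% of deposits
#   asset loss 50%: can honor 80% of deposits
# i.e. only a run demanding back more than 80-90% of deposits, combined
# with a severe market crash, would leave the exchange unable to pay.
```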
I don’t think we disagree much if any here—I think pointing out that cost-benefit analysis doesn’t necessarily lead to the “no fraud” result underscores the critical importance of side constraints!
He’d need a catastrophic stock/bond market crash, plus almost all depositors wanting out, to be unable to honor withdrawals.
I think this significantly underestimates the likelihood of “bank run”-type scenarios. It is not uncommon for financial institutions with backing for a substantial fraction of their deposits to still get run out due to a simple loss of confidence snowballing.
Could you say more about that? I suggest that “substantial fraction” may mean something quite different in the context of a bank than here. In the scenario I described, the hypothetical exchange would need to see 80-90% of deposits demanded back in a world where the stocks/bonds had to be sold at a 25-50% loss. It could be higher if the exchange had come up with an opt-in lending program that provided adequate cover for not returning (say) 10-15% of the customers’ funds on demand.
I’d also suggest that the “simple loss of confidence snowballing” in modern bank runs is often justified based on publicly known (or discernible) information. I don’t think it was a secret that SVB had bought a bunch of long-term Treasuries that sank in value as interest rates increased, and thus that it did not have the asset value to honor 100% of withdrawals. It wasn’t a secret in ~2008 that banks’ ability to honor 100% of withdrawals rested on highly overstated values for mortgage-backed securities.
In contrast, as long as the secret stock/bond purchases remained unknown to outsiders, a massive demand for deposits back would have to occur in the absence of that kind of information. And unlike in the traditional banking sector, the other places to hold crypto carry risks as well—even self-custody, which poses risks from hacking, hardware failure, forgetting information, etc. So people aren’t going to withdraw unless, at a minimum, they’re convinced they have a safer place to hold their assets.
Finally, in conducting the cost/benefit analysis, the hypothetical SBF would consider that the potential failure mode only existed in scenarios where 80-90%+ of deposits had been demanded back. Conditional on that having happened, the exchange’s value would likely be largely lost anyway. So the difference in those scenarios would be between ~0 and the negative effects of a smaller-scale fraud. If the hypothetical SBF thought the 80-90%+ scenario was pretty unlikely . . . .
(Again, all of this does not include the risk of the fraud leaking out or being discovered.)
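For concreteness, here’s a toy expected-value version of that argument. Every probability and payoff below is a made-up placeholder; the point is only the structure of the comparison, not an estimate of anything real:

```python
# Toy expected-value comparison for the amoral actor in this hypothetical.
# All probabilities and payoffs are made-up placeholders.

p_run = 0.05            # assumed chance of an 80-90%+ withdrawal wave
exchange_value = 10.0   # value of the exchange if no run (arbitrary units)
fraud_gain = 2.0        # assumed extra gain from investing misused deposits
fraud_penalty = 3.0     # assumed extra downside if fraud is exposed by a run

# No fraud: a massive run destroys most of the exchange's value anyway.
ev_honest = (1 - p_run) * exchange_value + p_run * 0.0

# Fraud: extra gain in the good case; in the run case the exchange is
# worth ~0 either way, so the only difference is the added penalty.
ev_fraud = (1 - p_run) * (exchange_value + fraud_gain) + p_run * (-fraud_penalty)

print(f"EV(no fraud) = {ev_honest:.2f}")   # 9.50
print(f"EV(fraud)    = {ev_fraud:.2f}")    # 11.25
# With these placeholder numbers the calculation favors fraud, which is
# exactly the thread's point: cost-benefit analysis alone needn't yield
# "no fraud", hence the importance of side constraints.
```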
Okay yes, I agree that a driver of bank runs is the knowledge that the bank usually can’t cover all deposits, by design. So as long as you keep that fact secret you’re much less likely to face a run.
I am now unsure how to reason about the likelihood of a run-like scenario in this case.
Interesting discussion. In the interview, MacAskill mentioned Madoff as an example of the idea that it’s not about “bad apples.” [1] Giving Madoff as an example in this context doesn’t make sense to me. But maybe MacAskill meant to say that it’s not about “bad apples who are identified as such before/at the time of their fraud”? That’s the only interpretation that makes sense to me, because Madoff sounds like he really was a “bad apple” based on the info in Why They Do It.
Here’s what Soltes says about Madoff in Why They Do It (quoted from the audiobook, with emphasis added):
[Madoff] cavalierly remarked, “The reality of it is my son couldn’t stand up amongst the pressure anyhow, so he took his own life.” In the many hours of conversations I had with Madoff, this statement stood out for its callousness. A father [who] couldn’t understand the impact that his actions had on his own son...
...Madoff remains dispassionate even about the circumstances that are of the greatest significance. In September 2014, a colleague emailed me news that Madoff’s second son, Andrew, had just died of cancer. As I was beginning to read the article, my office phone rang. I picked it up and was surprised to hear Madoff on the line.
He had heard the news of his second son’s death on the radio, and asked if I could read the obituary to him. Shaken by the fact that a father had called me to convey news of his son’s death, I turned my attention to describing the news in the most compassionate way I could. I wasn’t a professor or a researcher at that moment, just one person speaking to another.
I read him a writer’s article about his son’s death. When I reached the end, I was at a loss to know what to say. Instinctively, as we often do when hearing of a death, I asked him how he was doing.
Madoff responded, “I’m fine, I’m fine.” After a brief pause, he said that he had a question for me. I thought he might want me to send a copy of the obituary to him or deliver a message on his behalf to someone. It wasn’t that. Instead, he asked me whether I’d had a chance to look at the LIBOR rates we discussed in our prior conversation.
This particular phone call with Madoff stuck with me more than any other. Shortly after finding out his son had died, Madoff wanted to discuss interest rates. He didn’t lose a beat in the ensuing conversation, continuing to carry on an entirely fluid discussion on the arcane determinants of yields. It didn’t seem as though he wanted to switch topics because he was struggling to compose himself, and it didn’t seem as though he was avoiding expressing emotion because the news was so overwhelming. In some way, it almost seemed as though I was more personally moved by the death of Andrew in those moments than his father.
To a psychiatrist, Madoff displays many symptoms associated with psychopathy...while labels themselves are of little use, viewing Madoff through this lens helps place his prior actions and current rationalization of that behavior into context.
Madoff interprets and responds to emotion differently from most people. Regardless of how close he got to his investors, his personal limitations enabled him to continue his fraud without remorse or guilt…Madoff has an inability to empathize with his investors…he never experienced the gut feeling that he needed to stop…he managed to create extraordinary suffering for his investors, his friends, even his family, while experiencing little emotional turmoil himself.
Here’s a quote from MacAskill (emphasis added):

So the lesson that Eugene Soltes takes in his study of white collar crime that actually like the normal case of fraud like typical cases of fraud comes from people who are very successful, actually very well admired, really not the sort of people where it’s like, Oh yeah, they were, everyone was talking all along about how this person’s, you know, bad apple or not up to no good. Instead, you know Bernie Madoff even was the chair of NASDAQ...And so you know what he really emphasizes instead is importance of kind of good feedback mechanisms, again, because, you know, people are not often making this these decisions in this careful calculated way. Instead, is this like mindless, incredibly irrational decision...