Note that Will does say a bit in the interview about why he doesn’t view SBF’s utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).
I disagree with Will a bit here, and think that SBF’s utilitarian beliefs probably did contribute significantly to what happened, but perhaps somewhat indirectly, by 1) giving him large-scale ambitions, 2) providing a background justification for being less risk-averse than most, and 3) convincing others to trust him more than they otherwise would. Without those beliefs, he may well not have gotten to a position where he started committing large-scale fraud through negligence and self-deception.
I basically agree with the lessons Will suggests in the interview about the importance of better “governance” and institutional guard-rails to disincentivize bad behavior.
I’m pretty confused about the nature of morality, but it seems that one historical function of morality is to serve as a substitute for governance (which is generally difficult and costly; see the many societies with poor governance despite a near-universal desire for better governance). Some credit the success of Western civilization in part to Christian morality, for example. (Again, I’m pretty confused and don’t know how relevant this is, but it seems worth pointing out.)
I think it would be a big mistake to conflate that sort of “overconfidence in general” with specifically moral confidence (e.g. in the idea that we should fundamentally always prefer better outcomes over worse ones). It’s just very obvious that you can have the latter without the former, and it’s the former that’s the real problem here.
My view is that the two kinds of overconfidence seem to have interacted multiplicatively in causing the disaster that happened. I guess I can see why you might disagree, given your own moral views (conditional on utilitarianism being true/right, it would be surprising if high confidence in it is problematic/dangerous/blameworthy), but my original comment was written more with someone who has relatively low credence in utilitarianism in mind, e.g., Will.
BTW it would be interesting to hear/read a debate between you and Will about utilitarianism. (My views are similar to his in putting a lot of credence on anti-realism and “something nobody has thought of yet”, but I feel like his credence for “something like utilitarianism” is too low. I’m curious to understand both why your credence for it is so high, and why his is so low.)
We just wrote a textbook on the topic together (the print edition of utilitarianism.net)! In the preface, we briefly relate our different attitudes here: basically, I’m much more confident in the consequentialism part, but sympathetic to various departures from utilitarian (and esp. hedonistic) value theory, whereas Will gives more weight to non-consequentialist alternatives (more for reasons of peer disagreement than any intrinsic credibility, it seems), but is more confident that classical hedonistic utilitarianism is the best form of consequentialism.
I agree it’d be fun for us to explore the disagreement further sometime!