fwiw, I wouldn't generally expect "high confidence in utilitarianism" per se to be any cause for concern. (I have high confidence in something close to utilitarianism, and in particular near-zero credence in deontology, but I can't imagine that anyone who really knows how I think about ethics would find this the least bit practically concerning.)

Note that Will does say a bit in the interview about why he doesn't view SBF's utilitarian beliefs as a major explanatory factor here (the fraud was so obviously negative EV, and the big lesson he took from the Soltes book on white-collar crime was that such crime tends to be more the result of negligence and self-deception than deliberate, explicit planning to that end).

I basically agree with the lessons Will suggests in the interview, about the importance of better "governance" and institutional guard-rails to disincentivize bad behavior, along with warning against both "EA exceptionalism" and SBF-style empirical overconfidence (in his ability to navigate risk, secure lasting business success without professional accounting support or governance, etc.).

I think it would be a big mistake to conflate that sort of "overconfidence in general" with specifically moral confidence (e.g. in the idea that we should fundamentally always prefer better outcomes over worse ones). It's just very obvious that you can have the latter without the former, and it's the former that's the real problem here.

[See also: "The Abusability Objection" at utilitarianism.net]
I disagree with Will a bit here: I think SBF's utilitarian beliefs probably did contribute significantly to what happened, though perhaps somewhat indirectly, by 1) giving him large-scale ambitions, 2) providing a background justification for being less risk-averse than most, and 3) convincing others to trust him more than they otherwise would. Without those beliefs, he may well not have gotten to a position where he started committing large-scale fraud through negligence and self-deception.
I'm pretty confused about the nature of morality, but it seems that one historical function of morality is to be a substitute for governance (which is generally difficult and costly; see the many societies with poor governance despite a near-universal desire for better governance). Some credit the success of Western civilization in part to Christian morality, for example. (Again, I'm pretty confused and don't know how relevant this is, but it seems worth pointing out.)

My view is that the two kinds of overconfidence seem to have interacted multiplicatively in causing the disaster that happened. I guess I can see why you might disagree, given your own moral views (conditional on utilitarianism being true/right, it would be surprising if high confidence in it were problematic/dangerous/blameworthy), but my original comment was written more with someone who has relatively low credence in utilitarianism in mind, e.g. Will.

BTW it would be interesting to hear/read a debate between you and Will about utilitarianism. (My views are similar to his in putting a lot of credence on anti-realism and "something nobody has thought of yet", but I feel like his credence for "something like utilitarianism" is too low. I'm curious to understand both why your credence for it is so high, and why his is so low.)

We just wrote a textbook on the topic together (the print edition of utilitarianism.net)! In the preface, we briefly relate our different attitudes here: basically, I'm much more confident in the consequentialism part, but sympathetic to various departures from utilitarian (and esp. hedonistic) value theory, whereas Will gives more weight to non-consequentialist alternatives (more for reasons of peer disagreement than any intrinsic credibility, it seems), but is more confident that classical hedonistic utilitarianism is the best form of consequentialism.

I agree it'd be fun for us to explore the disagreement further sometime!
I don't necessarily disagree with most of that, but I think it is ultimately still plausible that people who endorse a theory which obviously says that, in principle, the ends can justify bad means are somewhat (plausibly not very much) more likely to actually do bad things with an ends-justifies-the-means vibe. Note that this is an empirical claim about what sort of behaviour is actually more likely to co-occur with endorsing utilitarianism or consequentialism in actual human beings. So it's not refuted by "the correct understanding of consequentialism mostly bars things with an ends-justifies-the-means vibe in practice" or "actually, any sane view allows that sometimes it's permissible to do very harmful things to prevent a many-orders-of-magnitude greater harm". And by "somewhat plausible" I mean just that: I wouldn't be THAT shocked to discover this was false; my credence is like 95%, maybe? (1-in-20 things happen all the time.) And the claim is correlational, not causal (maybe endorsement of utilitarianism and ends-justifies-the-means type behaviour are both caused partly by a prior intuitive endorsement of ends-justifies-the-means reasoning, and adopting utilitarianism doesn't actually make any difference, although I doubt that is entirely true).
I don't necessarily disagree with any of that, but the fact that you asserted it implicates that you think it has some kind of practical relevance, which is where I might want to disagree.
I think it's fundamentally dishonest (a kind of naive instrumentalism in its own right) to try to discourage people from having true beliefs because of faint fears that these beliefs might correlate with bad behavior.

I also think it's bad for people to engage in "moral profiling" (cf. racial profiling), spreading suspicion about utilitarians in general based on very speculative fears of this sort.

I just think it's very obvious that if you're worried about naive instrumentalism, the (morally and intellectually) correct response is to warn against naive instrumentalism, not other (intrinsically innocuous) views that you believe to be correlated with the mistake.

[See also: The Dangers of a Little Knowledge, esp. the "Should we lie?" section.]
Actually, I have a lot of sympathy with what you are saying here. I am ultimately somewhat inclined to endorse "in principle, the ends justify the means, just not in practice" over at least a fairly wide range of cases. I (probably) think in theory you should usually kill one innocent person to save five, even though in practice anything that looks like doing that is almost certainly a bad idea, outside artificial philosophical thought experiments and maybe some weird but not too implausible scenarios involving war or natural disaster. But at the same time, I do worry a bit about bad effects from utilitarianism, because I worry about bad effects from anything. I don't worry too much, but that's because I think those effects are small, and anyway there will be good effects of utilitarianism too. But I don't think utilitarians should be able to react with outrage when people say plausible things about the consequences of utilitarianism. And I think people who worry about this more than I do on this forum are generally acting in good faith. And yeah, I agree utilitarians shouldn't (in any normal context) lie about their opinions.