Deontology is not the solution
This is a lightly-edited extract from a longer post I have been writing about the problems Effective Altruism has with power. That post will likely be uploaded soon, but I wanted to upload this extract first since I think it’s especially relevant to the kind of reflection that is currently happening in this community, and because I think it’s more polished than the rest of my work-in-progress. Thank you to Julian Hazell and Keir Bradwell for reading and commenting on an earlier draft.
In the wake of revelations about FTX and Sam Bankman-Fried’s behaviour, Effective Altruists have begun reflecting on how they might respond to this situation, and if the movement needs to reform itself before ‘next time’. And I have begun to notice a pattern emerging: people saying that this fuck-up is evidence of too little ‘deontology’ in Effective Altruism. As this diagnosis goes, Bankman-Fried’s behaviour was partly (though not entirely) the result of attitudes that are unfortunately general among Effective Altruists, such as a too-easy willingness to violate side-constraints, too little concern with honesty and transparency, and sometimes a lack of integrity. This thread by Dustin Moskovitz and this post by Julian Hazell both exemplify the conclusion that EA needs to be a bit more ‘deontological’.
I’m sympathetic here: I’m an ethics guy by background, and I think it’s an important and insightful field. I understand that EA and longtermism emerged out of moral philosophy, that some of the movement’s most prominent leaders are analytic ethicists in their day jobs, and that the language of the movement is (in large part) the language of analytic ethics. So it makes sense that EAs reach for ethical distinctions and ideas when trying to think through a question such as ‘what went wrong with FTX?’. But I think that it is completely the wrong way to think about cases where people abuse their power, as Bankman-Fried abused his.
The problem with the abuse of power is not simply that having power lets you do things that fuck over other people (in potentially self-defeating ways). You will always have opportunities to fuck people over for influence and leverage, and it is always possible, at least in principle, that you will get too carried away by your own vision and take these opportunities (even if they are self-defeating). This applies no matter if you are the President of the United States or if you’re just asking your friend for £20; it applies even if you are purely altruistically motivated.
However, morally thoughtful people tend to have good ‘intuitions’ about everyday cases: it is these that common-sense morality was designed to handle. We know that it’s wrong to take someone else’s money and not pay it back; we know that it’s typically wrong to lie solely for our own benefit; we understand that it’s good to be trustworthy and honest. Indeed, in everyday contexts certain options are just entirely unthinkable. For example, a surgeon won’t typically even ask themselves ‘should I cut up this patient and redistribute their organs to maximise utility?’—the idea to do such a thing would never even enter their mind—and you would probably be a bit uneasy with a surgeon who had indeed asked themselves this question, even if they had concluded that they shouldn’t cut you up.
This kind of everyday moral reasoning is exactly what is captured by the kinds of deontological ‘side constraints’ most often discussed in the Effective Altruism community. As this post makes wonderfully clear, the reason why even consequentialists should be concerned with side-constraints is that you can predict ahead of time that you will face certain kinds of situations, and you know that it would be better if you acted according to these maxims. The same kind of reasoning applies to ideas like ‘integrity’ and ‘reputation’; the entire point is that you can predict ahead of time the kind of situations you are likely to face, so you can draw on the resources of moral philosophy to come up with strategies for facing them responsibly. (Yes, for those who speak fluent LessWrong, this is essentially just self-modification.) The steady, predictable nature of these kinds of cases is precisely why everyday moral thinking is so well-suited to them.
The problem is more specific to circumstances of power, because everyday moral thinking is not all that well-suited to the high-stakes choices that you have to make when you have power. Very often, these choices force you to make trade-offs and moral compromises that would be unacceptable by everyday standards. Further, these situations often involve pervasive uncertainty, with no ‘given’ probabilities or regular laws that can be exploited to structure your decision-making, only your own judgment and best guesses. With no single obvious reference class and a huge degree of unknown unknowns, it’s not clear ahead of time what situations you will even face—and, as a result, not clear at all what the relevant side-constraints should be. You just don’t know which types of moral reasoning will be helpful or necessary, and which would stop you from making the necessary trade-offs.
The possibility of motivated reasoning makes this problem even more serious. Many people who do care about side-constraints nonetheless find ways to justify obvious-seeming violations, through rhetorical redescription and motivated reasoning, if they are put in contexts that provide them with the opportunity to do so. This means that a proposed deontological rule must be specific enough to avoid this problem, while also general enough to actually be workable. The maxim ‘never commit murder’ is nice, but wouldn’t have prevented a fiasco like Bankman-Fried’s; the maxim ‘do not misuse customer funds’ is better, but far too open to motivated reasoning about the meaning of ‘misuse’; and while a maxim of ‘do not use customer funds to prop up your insolvent hedge fund while just hoping to make the money back later’ is perfect, you can only formulate it after the fact.
To be sure, it seems as though Bankman-Fried didn’t take side-constraints particularly seriously. But this doesn’t necessarily mean that he would have done any better had he taken them seriously. Intending to obey deontological constraints is no more a guarantee that you will obey them than intending to maximise utility is a guarantee that you will maximise utility. What’s needed is a more systematic analysis of how to act in situations of power, potentially including proposals for reform at the institutional level—not simplistic rules drawn from intuition.
I think this idea of the role of power in the question of deontological vs consequentialist reasoning is interesting. I don’t have a lot of background in formal ethics, so I’m not sure how I would classify my own ethical camp, but generally I’ve always thought that deontological values can be taken into consideration in a consequentialist framework—when we are asking whether it is acceptable to violate a general rule for the “greater good,” we should consider the consequences of eroding the sense of integrity, honesty, human rights, etc. in our society. In cases where bending a rule can do a lot of good (i.e. lying to get money for life-saving medical care), this seems perfectly fine to me. But when someone is too quick to justify breaking intuitive moral rules, they are probably undervaluing the harm they are doing by eroding the values that underlie those rules. I’m sure I’m not the first person to have this thought (feel free to let me know if there’s a name for this position, as I’d be curious to know).
The question this post raises for me is whether there are circumstances where the deontological rules have more or less weight. And power seems like a relevant criterion. If your actions are extremely influential, then it may be more likely that you’ll face decisions where the immediate consequences are simply of far more importance than the deontological rules you may be bending or breaking. I certainly wouldn’t want a world leader unwilling to fall short of absolute honesty when dealing with terrorist threats, for instance. Or the choices available might simply be so influential and complex that there is no pure choice—for instance, setting healthcare policies where there aren’t enough resources to save everyone, and so any decision will involve a choice to let people die.
But ultimately, I don’t think power fundamentally changes the paradigm. For one thing, the more influential a person is, the more consequential their violations of deontological rules will be, which cautions against being too quick to think that a high-stakes decision shouldn’t be constrained by the sorts of moral rules we apply to more mundane decisions. In this case, it is obvious that the choices made by FTX have been extremely harmful and eroded a public sense of trust and integrity.
In addition, there’s a difference between people who are entrusted to make difficult moral tradeoffs, and people who are not. Where someone is elected to determine policy, they may have to make high-stakes decisions and moral tradeoffs that can’t totally align with intuitive deontological rules. In other words, they’re given a license to make those calls. But it’s a different case where, as here, nobody entrusted FTX to make the decision whether to use customer funds the way it did. Assuming for the sake of this discussion that FTX made that decision because it believed the ends justified the means, FTX wasn’t just making that judgment call—it was also deciding that it should be the decision-maker. To me, that is the real problem. Because FTX wasn’t just contributing to a world where people are lied to for the greater good, or where people’s wealth is gambled without their consent for the greater good. It was also contributing to a world where everyone makes that decision for themselves rather than deferring to the rules society has decided to impose. That world would be utter chaos, with everyone in a position of power, however they got it, deciding to substitute their judgment for the judgment of society.
I’m not saying nobody should ever make a decision they weren’t entrusted with. If I had the chance to assassinate a president about to hit the red button and start a nuclear apocalypse, I wouldn’t worry too much about the arrogance inherent in making that decision myself. And I hope others in that situation would do the same. But that’s in part because I’m really, really, really confident that it’s the right call. And that confidence is strengthened by the fact that I don’t think the applicable laws (don’t kill the president) properly address the situation where it’s the only way to save the world from certain doom. And I think that if I could ask society for permission, I would get it, but I just don’t have time. But in FTX’s situation, it wasn’t dealing with some unforeseeable circumstance, and there is no reason to think that society would have approved of its choice if asked. And nobody deliberately entrusted FTX with the authority to make these moral tradeoffs. So the only justification is that FTX simply knew better than everyone else, and when that is the only justification, I would guess that in the vast majority of real-world cases it is an example of arrogance and motivated reasoning, not of a super-intelligent entity saving society from its own misguided sense of morality.
In short, I guess what I’m saying is that I agree that the precise intuitions that guide our mundane daily decisions are less applicable to people in positions of power. But there are still intuitive rules that do apply—Were you entrusted to make this sort of decision? Would fully informed people be likely to agree that the benefits of your choice outweigh the harm done? Are you breaking a rule that clearly wasn’t established with the situation you’re facing in mind? Or are you just deciding that you know better than everyone else and that the consequences are so important that it justifies not only your arrogance, but the real harm done to society by promoting the idea that individuals should override social judgment calls with their own?
Ultimately, I agree that the answer isn’t simply more deontology. Rather, it’s a greater respect for the moral values of the society we live in and the harm we cause if we violate them, as well as more humility with regard to our ability to determine when we know better and are therefore justified in breaking the rules. I won’t pretend that there’s any side constraint or rule that I would never break, no matter the positive consequences of doing so. For instance, if the entire world voted in favor of nuclear Armageddon, I’d still try to stop it. But I can still say with confidence that, unless I’m entrusted with a position of power where I’m not expected or able to strictly adhere to those side constraints, I think it’s extremely unlikely that a real-world situation would arise where I would feel justified in doing so.