Deontology is not the solution
This is a lightly edited extract from a longer post I have been writing about the problems Effective Altruism has with power. That post will likely be uploaded soon, but I wanted to upload this extract first, both because I think it’s especially relevant to the kind of reflection currently happening in this community and because it’s more polished than the rest of my work in progress. Thank you to Julian Hazell and Keir Bradwell for reading and commenting on an earlier draft.
In the wake of revelations about FTX and Sam Bankman-Fried’s behaviour, Effective Altruists have begun reflecting on how they might respond to this situation, and on whether the movement needs to reform itself before ‘next time’. I have begun to notice a pattern emerging: people saying that this fuck-up is evidence of too little ‘deontology’ in Effective Altruism. On this diagnosis, Bankman-Fried’s behaviour was partly (though not entirely) the result of attitudes that are unfortunately common among Effective Altruists: a too-easy willingness to violate side-constraints, too little concern for honesty and transparency, and sometimes a lack of integrity. This thread by Dustin Moskovitz and this post by Julian Hazell both exemplify the conclusion that EA needs to be a bit more ‘deontological’.
I’m sympathetic here: I’m an ethics guy by background, and I think ethics is an important and insightful field. I understand that EA and longtermism emerged out of moral philosophy, that some of the movement’s most prominent leaders are analytic ethicists in their day jobs, and that the language of the movement is (in large part) the language of analytic ethics. So it makes sense that EAs reach for ethical distinctions and ideas when trying to think through a question like ‘what went wrong with FTX?’. But I think this is completely the wrong way to think about cases where people abuse their power, as Bankman-Fried abused his.
The problem with the abuse of power is not simply that having power lets you do things that fuck other people over (in potentially self-defeating ways). You will always have opportunities to fuck people over for influence and leverage, and it is always possible, at least in principle, that you will get carried away by your own vision and take those opportunities (even when they are self-defeating). This applies whether you are the President of the United States or just asking your friend for £20; it applies even if you are purely altruistically motivated.
However, morally thoughtful people tend to have good ‘intuitions’ about everyday cases: it is these cases that common-sense morality was designed to handle. We know that it’s wrong to take someone else’s money and not pay it back; we know that it’s typically wrong to lie solely for our own benefit; we understand that it’s good to be trustworthy and honest. Indeed, in everyday contexts certain options are simply unthinkable. For example, a surgeon won’t typically even ask themselves ‘should I cut up this patient and redistribute their organs to maximise utility?’—the idea would never even enter their mind—and you would probably be a bit uneasy with a surgeon who had asked themselves this question, even if they had concluded that they shouldn’t cut you up.
This kind of everyday moral reasoning is exactly what is captured by the deontological ‘side constraints’ most often discussed in the Effective Altruism community. As this post makes wonderfully clear, the reason even consequentialists should care about side-constraints is that you can predict ahead of time that you will face certain kinds of situations, and you know it would be better if you acted according to fixed maxims when you do. The same reasoning applies to ideas like ‘integrity’ and ‘reputation’: the entire point is that you can predict ahead of time the kinds of situations you are likely to face, and so can draw on the resources of moral philosophy to come up with strategies for facing them responsibly. (Yes, for those who speak fluent LessWrong, this is essentially just self-modification.) The steady, predictable nature of these cases is precisely why everyday moral thinking is so well-suited to them.
The problem is more specific to circumstances of power, because everyday moral thinking is not well-suited to the high-stakes choices that holding power forces on you. Very often, these choices involve trade-offs and moral compromises that would be unacceptable by everyday standards. Further, these situations often involve pervasive uncertainty, with no ‘given’ probabilities or regular laws that can be exploited to structure your decision-making, only your own judgment and best guesses. With no single obvious reference class and a huge number of unknown unknowns, it’s not clear ahead of time what situations you will even face—and, as a result, not clear at all what the relevant side-constraints should be. You just don’t know which types of moral reasoning will be helpful or necessary, and which would stop you from making the necessary trade-offs.
The possibility of motivated reasoning makes this problem even more serious. Many people who do care about side-constraints nonetheless find ways to justify seemingly obvious violations, through rhetorical redescription and self-serving rationalisation, if they are put in contexts that give them the opportunity to do so. A proposed deontological rule must therefore be specific enough to avoid this problem, while remaining general enough to actually be workable. The maxim ‘never commit murder’ is nice, but wouldn’t have prevented a fiasco like Bankman-Fried’s; the maxim ‘do not misuse customer funds’ is better, but far too open to motivated reasoning about the meaning of ‘misuse’; and while a maxim of ‘do not use customer funds to prop up your insolvent hedge fund while just hoping to make the money back later’ is perfect, you can only formulate it after the fact.
To be sure, it seems as though Bankman-Fried didn’t take side-constraints particularly seriously. But this doesn’t necessarily mean that he would have done any better had he taken them seriously. Intending to obey deontological constraints is no more a guarantee that you will obey them than intending to maximise utility is a guarantee that you will maximise utility. What’s needed is a more systematic analysis of how to act in situations of power, potentially including proposals for reform at the institutional level—not simplistic rules drawn from intuition.