Consequentialism does not endorse frauding-to-give
Mainstream media coverage of the FTX crash frequently suggests that a utilitarian ethic is partially to blame for the irresponsible behavior of top executives. However, consequentialist reasoning—even in its most extreme “ends justify the means” form—does not endorse committing crimes with the goal of making money to donate to charity.
Disclaimers
This post is not about FTX. I want to abstract away from that specific circumstance and make a broader point about consequentialism and applied ethics. These arguments are relevant whether or not fraud was committed by FTX leadership.
This post is nothing revolutionary; I just think these arguments need to be reiterated succinctly.
I do not consider myself a hardcore consequentialist. In general, I find it strange to believe that a single ethical theory could/should possibly guide all aspects of one’s life.
I am not a trained philosopher; please use the comments if my understanding of consequentialism is flawed.
Main claim
In my opinion, the heart (and most interesting feature) of consequentialism is determining the downstream consequences of an action, especially those consequences which influence others’ actions. However, this calculus is rarely mentioned in popular discourse on utilitarianism. When somebody brings up the drowning child problem, they don’t ask you to consider how your decision will affect the pond’s future availability for public bathing. That issue is hardly relevant to whether or not you choose to save the child. But real-life decisions are not thought experiments, and if we want to be serious about consequentialism, downstream effects are crucial to every moral calculus.
This is not a novel idea within consequentialist thought. Consider the famous transplant thought experiment. The experiment imagines that a healthy patient walks into a hospital, and the doctor must decide whether to kill her and harvest her organs to save five dying patients. The most intuitive consequentialist response is: “I don’t care if it saves five lives; if hospitals begin killing healthy patients, our entire health system will crumble.”
The same intuitive response should also apply to breaking the law in order to make money to later donate to charity. Off the top of my head, here are a few downstream consequences which make that decision a bad idea:
You’re caught and you never get the chance to donate the money because you are forced to forfeit it.
You ruin your reputation and lose opportunities to perform good actions in the future.
Your moral calculus was incorrect, and the illegal action does more harm than your donation does good.
If you are part of a movement that advocates doing good in the world, the exposure of your actions causes harm to that larger movement.
These are all consequentialist arguments[1]: they rely on expected-value calculations, not rights violations or virtue ethics. Taken together, they demonstrate why, in the vast majority of imaginable circumstances, the ends simply do not justify the means when it comes to breaking the law with the goal of making money to give away.
Counterarguments
Naive vs. sophisticated consequentialism
I’ve been talking a lot about “downstream consequences”. If you’ve spent some time in EA circles, you might object that I’ve only considered “sophisticated consequentialism”, and that “naive consequentialism” might still support immoral behavior in service of some greater good.
I disagree. The EA Forum post on naive vs. sophisticated consequentialism states:
Naive consequentialism is the view that, to comply with the requirements of consequentialism, an agent should at all times be motivated to perform the act that consequentialism requires. By contrast, sophisticated consequentialism holds that a consequentialist agent should adopt whichever set of motivations will cause her to in fact act in ways required by consequentialism.
Given this definition, my entire argument has been, counterintuitively, based on naive, not sophisticated, consequentialism. Moreover, my argument stands under the above definition of sophisticated consequentialism too, because it’s hard to imagine a set of motivations that includes a willingness to commit crimes and also leads, in fact, to the best consequences.
But sometimes naive consequentialism is defined another way. The same EA Forum post states that an even more naive consequentialist does not “consider less direct, less immediate, or otherwise less visible consequences”. Interestingly, this definition condemns the illegal behavior even more strongly. Under this form of naive consequentialism, you cannot weigh the downstream consequences of your action, so you cannot count the fact that you will later donate the money to help people. The only consequences you can take into account are the immediate effects of the illegal action itself, and in all relevant cases those will be bad.
Therefore, in both its naive and sophisticated forms, consequentialism does not endorse the illegal behavior.
It’s the ideas that matter, not whether they were applied correctly
You might object that, even accepting the conclusion that a good consequentialist wouldn’t commit the crime, what matters more is that actors might misconstrue consequentialism and use it as moral backing for their fraudulent behavior.
I agree that this is a real problem, but I don’t see it as a valid objection to my claims in this post. Anybody can misconstrue any theory and “use” it to “endorse” any action. In other words, a theory is not inherently wrong just because it can be incorrectly understood and then leveraged to justify harm.
That being said, the EA movement is broadly consequentialist, so we should examine our own theoretical endorsements under a broadly consequentialist framework. If we determine that publicly advocating consequentialism directly causes many people to act immorally “in the name of consequentialism”, we should either (1) change our messaging or (2) stop advocating consequentialism, even if it’s still what we truly believe[2].
Conclusion
I didn’t write this post to advocate for consequentialism. I wrote it because I think consequentialism should be taken seriously as a moral theory. And consequentialism taken seriously does not entail that any ends justify any means. Consequentialism is so interesting precisely because it asks us to at least consider the ends when we examine the means. But when the means are potentially catastrophic, they are unlikely to be justified by any ends, no matter how good.
Comments
Just disagree with this:
“I do not consider myself a hardcore consequentialist. In general, I find it strange to believe that a single ethical theory could/should possibly guide all aspects of one’s life.”
How is it difficult to believe that trying to promote good conscious experiences and minimize bad conscious experiences could be the key guide to one’s behavior? A lot of EAs, myself included, consider this to be the ultimate goal for our actions… Of course, we need many other areas of study and theory to guide in specific areas.
I understand that you disagree with hardcore consequentialism, but I don’t see why you think it is strange for others to adopt it. This is especially true when you acknowledge the complexity in consequentialist decision-making, as you did in this post.
Thanks for the insight. Fortunately, you don’t have to agree with this disclaimer in my post for the rest of the argument to remain sound.
That being said, I find it perfectly reasonable for one’s actions to be primarily (or even almost entirely) guided by consequentialist reasoning. However, I cannot understand never considering reasons stemming from deontology or virtue ethics. For example, it’s impossible for me to imagine condemning a gross rights violation purely based on its consequences without even considering that perhaps violating personal rights has some intrinsic disvalue.
I believe that rights have value insofar as they promote positive conscious states and prevent negative conscious states. Their value or disvalue would be a function of whether they make lives better. Assigning weight to them beyond that is simply creating a worse world.
I do, however, find the assignment of intrinsic value imaginable, though mistaken. I do not take umbrage so much at your disagreeing with me as at your finding my view unimaginable.
That’s a very fair point—unimaginable is the wrong word. I guess I’ll say I find it curious.
To use a stronger example, suppose a dictator spends all day violating the personal rights of her subjects and by doing so increases overall well-being. I find it curious to believe she’s acting morally. You don’t need to believe in the intrinsic badness of rights violations to hold this point of view. You just have to believe that objective moral truth cannot be fully captured using a single, tidy theory. Moral/ethical life is complex, and I think that even if you are committed to one paradigm, you still ought to occasionally draw from other theories/thinkers to inform your moral/ethical decision making.
This agrees with what you said in your first comment: “We need many other areas of study and theory to guide in specific areas.” As long as this multifaceted approach is at least a small part of your overall theory, I can definitely imagine holding it, even if I don’t agree.
I think the complexity arises in evaluating the value and disvalue of different subjective states as well as determining what courses of action, considering all aspects involved, have the highest expected value.
You discuss the example of the despot regularly violating the rights of her subjects, yet increasing utility. Such a scenario seems inherently implausible, because if rights are prudently delineated, general respect for them will, in the long run, tend to cultivate a happier, more stable world (i.e., higher expected utility). And incursions upon these rights may be warranted in some situations: for instance, someone’s property rights may be violated when there is a compelling public interest (eminent domain). This is why we have exceptions to rights (e.g., free speech does not protect incitement to imminent violence). If the rights you are advancing tend to lower the welfare of conscious beings, I would think such a formulation of rights is immoral.
You are correct that moral life is complex, but I think the complexity comes down to how we can navigate ourselves and our societies to optimize conscious experience. If you are incorporating factors into your decisions that don’t ultimately boil down to improving conscious experience, in my view, you are not acting fully morally.
This post argues against a strawman—it’s not credible that utilitarianism endorses frauding to give. It’s also not quite a question of whether Sam “misconstrued” utilitarianism, in that I doubt that he did a rational calculus on whether fraud was +EV, and he denies doing so.
The potential problem, rather, is that naive consequentialism/act utilitarianism removes some of the ethical guardrails that would ordinarily make fraud very unlikely. As I’ve said: in order to avoid taking harmful actions, an act utilitarian has to remember to calculate, and then to calculate correctly. (Whereas rules are often easier to remember and to apply properly.) The way Sam tells it, he became “less grounded” or “cocky”, leading to these mistakes. Would this have happened if he had followed another theory? We can’t know, but we should be clear-eyed about the fact that hardcore utilitarians, despite representing maybe 1/1M of the world’s population, are responsible for maybe 1/10 of the greatest frauds, i.e. they’re over-represented by a factor of 100k, in a direction that would be pretty expected based on the argument above about remembering to calculate (which must surely have been made previously by moral philosophers). For effective altruists, we can lop off maybe one order of magnitude, but it doesn’t look great either.
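As a quick sanity check on that over-representation figure, here is a minimal sketch of the arithmetic, taking the comment’s own rough numbers (a prevalence of roughly one in a million, and one of the ten largest frauds) at face value rather than as established data:
\[
\text{over-representation} \;\approx\; \frac{\text{share of the greatest frauds}}{\text{share of the world population}} \;=\; \frac{1/10}{1/10^{6}} \;=\; 10^{5}
\]
If “lop off maybe one order of magnitude” is read as effective altruists being roughly ten times more numerous than hardcore utilitarians, the same ratio comes out to about 10^4.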
I disagree that I argue against a strawman. The media’s coverage of Bankman-Fried frequently implies that he used consequentialism to justify his actions. This, in turn, implies that consequentialism endorses fraud so long as you give away your money. Like I said, the arguments in the post are not revolutionary, but I do think they are important.
You give no evidence for your claim that hardcore utilitarians commit 1/10 of the “greatest frauds”. I struggle to even engage with this claim because it seems so speculative. But I will say that I agree that utilitarianism has been (incorrectly) used to justify harm. As I stated:
Part of my motivation for making this post was helping consequentialists think about our actions—specifically those around the idea of earning to give. In other words, the post is intended to clarify some “ethical guardrails” within a consequentialist framework.
I mean that the dollar value of lost funds would seem to make it one of the top ten biggest frauds of all time (assuming that fraud is what happened). Perusing a list on Wikipedia, I can see only four cases in which larger sums were defrauded: Madoff, Enron, WorldCom, Stanford.
Okay, I see now. I read that as “one-tenth”, not “one out of ten”.
I’m on board with your lack-of-guardrails argument against utilitarianism. I hope arguments like the one made in this post help to construct them so we don’t end up with another catastrophe.