We must be very clear: fraud in the service of effective altruism is unacceptable
I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world.
Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dollars to effective causes?”, I would have said unequivocally no.
At this stage, it is quite unclear just from public information exactly what happened to FTX, and I don’t want to accuse anyone of anything they didn’t do. However, I think it is starting to look increasingly likely that, even if FTX’s handling of its customers’ money was not technically legally fraudulent, it was fraudulent in spirit.
And regardless of whether FTX’s business was in fact fraudulent, it is clear that many people—customers and employees—have been deeply hurt by FTX’s collapse. People’s life savings and careers were very rapidly wiped out. I think that compassion and support for those people is very important. In addition, I think there’s another thing that we as a community have an obligation to do right now.
Assuming FTX’s business was in fact fraudulent, I think that we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it—have an obligation to condemn it in no uncertain terms. This is especially true for public figures who supported or were associated with FTX or its endeavors.
I don’t want a witch hunt, and I don’t think anyone should start pulling out pitchforks, so I think we should avoid focusing on any individual people here. We likely won’t know for a long time exactly who was responsible for what, nor do I think it really matters—what’s done is done, and what’s important now is making very clear where EA stands with regard to fraudulent activity, not throwing any individual people under the bus.
Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don’t support fraud done in the service of effective altruism. Regardless of what FTX did or did not do, I think that is a statement that should be clearly and unambiguously defensible and that we should be happy to stand by regardless of what comes out. And I think it is an important statement for us to make: outside observers will be looking to see what EA has to say about all of this, and I think we need to be very clear that fraud is not something that we ever support.
In that spirit, I think it’s worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong? This is a thorny moral question that is worth nuanced discussion, and I don’t claim to have all the answers.
Nevertheless, I think fraud in the service of effective altruism is basically unacceptable—and that’s coming from someone who is about as hardcore a total utilitarian as it is possible to be.
When we, as humans, consider whether or not it makes sense to break the rules for our own benefit, we are running on corrupted hardware: we are very good at justifying to ourselves that seizing money and power for our own benefit is really for the good of everyone. If I found myself in a situation where it seemed to me like seizing power for myself was net good, I would worry that in fact I was fooling myself—and even if I was pretty sure I wasn’t fooling myself, I would still worry that I was falling prey to the unilateralist’s curse if it wasn’t very clearly a good idea to others as well.
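(As an aside, the unilateralist’s curse is easy to illustrate with a toy simulation—this is just a hypothetical sketch with made-up numbers, not an argument in itself. Suppose an action is in fact somewhat net negative, but each of several people forms their own noisy estimate of its value, and any one of them can take the action unilaterally. The more such people there are, the more likely it is that at least one of them mistakenly concludes the action is good and takes it.)

```python
import random

def p_someone_acts(n_agents, true_value=-1.0, noise_sd=2.0, trials=100_000):
    # Probability that at least one of n_agents, each acting on their own noisy
    # estimate of the action's value, mistakenly concludes the action is good.
    # true_value, noise_sd, and trials are made-up illustrative parameters.
    acted = 0
    for _ in range(trials):
        if any(true_value + random.gauss(0, noise_sd) > 0 for _ in range(n_agents)):
            acted += 1
    return acted / trials

for n in (1, 5, 20):
    print(f"{n:>2} independent actors -> P(net-negative action gets taken) ~ {p_someone_acts(n):.2f}")
```

With these made-up parameters, a single actor takes the net-negative action about a third of the time, but with twenty independent actors it becomes nearly certain that someone does.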
Additionally, if you’re familiar with decision theory, you’ll know that credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon. In my opinion, such strategies of credible pre-commitment are extremely important for cooperation and coordination.
Furthermore, I will point out that if FTX did engage in fraud here, it was clearly not in fact a good idea even in this case: I think the lasting consequences to EA—and the damage caused by FTX to all of its customers and employees—will likely outweigh the altruistic funding already provided by FTX to effective causes.
Hey, crypto insider here.
SBF’s actions seem to have been directly inspired by his effective altruism beliefs. He mentioned a few times on podcasts that his philosophy was: make the most money possible, by whatever means, and then donate it all in whatever way best improves the world. He was only in crypto because he thought it was the place where he could make the most money.
SBF was first a trader at Alameda and then started FTX.
Some actions that Alameda/FTX were known for:
* Using exchange data to trade against their own customers
* Paying Twitter users to post tweets with the intention of promoting FTX, hurting competitors, and manipulating markets
* Creating Ponzi coins with no real use, with the sole intention of selling them for the highest possible price to naive users. Entire ecosystems were created for this goal.
The typical plan was:
1. Fund a team to create a new useless token, with 2% of coins going to the public and 98% to investors who receive them a year later.
2. Create a manipulative story for why this project is useful.
3. Release a news item: “Alameda invested in X coin” (because Alameda had a good reputation at first).
4. Pump up the price as high as they can using Twitter influencers.
5. List the coin on FTX so investors can hedge their position.
6. Alameda now has another coin it can trade around, and can liquidate speculators based on the data it gets from FTX.
7. Repeat ×20.
(A rough sketch of the low-float arithmetic behind this is included just after the list below.)
* Lying and predatory behavior
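For readers outside crypto, here is a minimal, hypothetical sketch of the low-float / high-FDV arithmetic described in the plan above (all numbers are made up; FDV is fully diluted valuation):

```python
# Hypothetical token launch: 2% of supply circulating, 98% locked for insiders.
total_supply      = 1_000_000_000   # tokens (made-up number)
circulating_share = 0.02            # only 2% tradable at launch
insider_share     = 0.98            # unlocks to investors ~a year later

pumped_price = 10.0                 # dollars per token after the pump (made up)

circulating_cap     = total_supply * circulating_share * pumped_price
fully_diluted_value = total_supply * pumped_price
insider_paper_value = total_supply * insider_share * pumped_price

# Pumping a thin float is cheap: only the small circulating slice has to be
# bought up, but the headline valuation applies to the whole locked supply.
print(f"circulating market cap:  ${circulating_cap:,.0f}")      # $200,000,000
print(f"fully diluted valuation: ${fully_diluted_value:,.0f}")   # $10,000,000,000
print(f"insiders' paper value:   ${insider_paper_value:,.0f}")   # $9,800,000,000
```

Hedging (shorting) the token on an exchange before the unlock is then how insiders convert that paper value into realized gains, which is the dynamic described above.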
It seems like they took most of their actions based on an “expected value” approach, calculating which of the possible options would on average make them the most money.
This included decisions like whether to lie or tell the truth, whether or not to break the law, and whether to build a reputation with the sole goal of being more effective at manipulation later on.
I think this expected-value approach made them super successful traders. And they stuck with exactly the same approach when running the exchange. This is where things went wrong. In social interactions, and in situations where your actions affect other people, you should also think about things like your reputation, or what would happen if everyone started acting like you.
Otherwise, you could rationalize pretending to be friends with your neighbours and then murdering them to give their money to the poor. Maybe it seems like a good action on paper to a naive utilitarian, but if everyone acted like this, things would break down.
I think another factor in this outcome with SBF is a narcissistic personality. Something like effective altruism can feel emotionally attractive to people like this because it implies they can do things “better” or “more effectively”; it feeds the need for superiority. They then rationalize everything with “it’s good for the world, I will be 10x as effective with the money as others”, etc. It could be true, but it could also not be true, and it might not be the real reason they are acting this way.
I think effective altruism works better when blended with normal human behavior and moral principles like “generally try to tell the truth” and “don’t steal from your users.”
This is very important if true, because it suggests that, with due diligence, EA leaders could have known it was morally dodgy to be associated with FTX even before the current blow-up. By comparison, if the story is “previously reasonably ethical (by finance standards) trader steals to cover losses in a panic”, then while you can say there is always some risk of something like that, it’s not really the kind of thing you can blame people for not foreseeing when they associated with him beforehand. I think it’d be good if some EA orgs had a proper look into which of these narratives is more correct when they do a post-mortem on this whole disaster.
I deep-dived into crypto in the latter half of 2020 because I was curious what was going on there. It took me a few months to see it, but what’s said in the top-level comment was basically all true back then. I started learning from scratch with an open mind; I would imagine that had one looked into SBF’s activities with due diligence in mind, the questionable behavior would have been obvious to see.
Counterpoint to “had one looked into SBF’s activities with due diligence in mind, the questionable behavior would have been obvious to see”:
Many high profile VC firms invested in FTX (e.g. Sequoia, SoftBank, BlackRock). They raised at astronomical valuations as recently as January of this year. It seems unlikely to me that any due diligence into the inner workings of the business by EA higher-ups would have come up with something that these VCs apparently did not.
OTOH, I’ve been following crypto closely for a little over a year now and I had heard rumblings along the lines of the top-level comment. It is my impression that most (maybe all) of the assumptions of bad behavior were based on speculation and circumstantial evidence rather than hard proof.
Perhaps that speculation and circumstantial evidence was enough to be careful with too closely associating with SBF/FTX, I don’t know, but it seems unlikely to me that due diligence before the last few days would have revealed obvious bad behavior.
Counterpoint to this: a lot of VC investments in crypto were very dodgy. I can’t recall exact project names, but I remember regularly seeing news of the form “a16z just backed us with 300M!” on projects which are clearly zero-sum and don’t have the market cap to generate >300M in fees, like blockchain games. VC investment doesn’t seem like as strong a signal in the crypto space as in other spaces.
This seems to be a case for ‘trust but verify’—it’s also worth remembering that reputational risks rebound differently on different participants.
You really need to provide proof for these sweeping allegations. I know people are worried about the current situation and I agree fraud is likely, but I’m concerned that someone making such extreme claims with 0 links or evidence besides claiming to be an insider is so highly upvoted.
If you make an extreme claim, the burden of providing sources is on you.
These aren’t as extreme as they seem. They are genuinely just the way crypto functions. Here is a link to SBF, this past April, explaining how one of the largest “income”-generating systems in crypto (which he also engaged in heavily, and in a way helped to popularize) is a Ponzi scheme, and being totally unworried about stating this fact. https://www.bloomberg.com/news/articles/2022-04-25/sam-bankman-fried-described-yield-farming-and-left-matt-levine-stunned
This is not ‘just the way crypto functions’. There is very wide variance in the ethical integrity of different crypto protocols and projects.
Bitcoin is one thing.
Highly decentralized layer-1 protocols such as Cardano and Ethereum are another thing.
Oracle protocols such as Chainlink are another.
Centralized exchanges vary a lot—Kraken seems to have quite high openness, integrity, transparency, and auditability; whereas FTX did not.
There are lots of scammers in crypto. There are also many highly ethical, honest, and constructive leaders.
Just as it would be a shame for outsiders to reject EA as fraudulent just because FTX was, it would be a shame for EAs to reject all crypto as fraudulent just because FTX was.
Thanks a lot for joining the discussion and sharing these observations, that’s super valuable info and imo extremely damning if true. Do you happen to have some sources I could check which corroborate what you’ve written here?
Do you have any evidence for these two? Not challenging you, just curious. E.g. Twitter users who admitted to being paid by FTX, or examples of coins that FTX/Alameda created in the way you describe, that sort of thing.
It was mostly the Solana ecosystem coins, like Oxygen, Raydium, and MAPS. All of them were created with the playbook of a very low float (initially available tokens) and a very high fully diluted valuation (98% of the tokens would be released to investors later on).
SBF on Twitter: “11) Paypal is likely the product with the largest userbase in crypto, at around 300m. Soon, the second largest will probably be MAPS.”
You can check the charts of these coins: all of them dropped 95-99% in value after investor token unlocks started. By then the big investors had already hedged (shorted) the tokens on FTX so they could lock in the value at the higher prices.
Hsaka on Twitter: “The greatest transfer of wealth this cycle has been from ignorant plebs to the Alameda/Solana/FTX VC crew running the low float high FDV scam. Tis a feature, not a bug, since people still continue to willingly donate their money.”
As somebody in the industry I have to say Alameda/FTX pushing MAPS was surreal and cannot be explained as good faith investing by a competent team.
Thanks, that seems really bad and deceptive. Do you also have examples of tweets or people that were paid off by FTX to promote one of those coins?
As a note: while I agree people thought that, via Alameda, FTX was “using exchange data to trade against their own customers”, the fact that Alameda lost so much money makes me unsure whether this was actually true.
I agree! As a founder, I promise to never engage in fraud, either personally or with my business, even if it seems like doing so would result in large amounts of money (or other benefits) to good things in the world. I also intend to discourage other people who ask my advice from making similar trade-offs.
This should obviously go without saying, and I already was operating this way, but it is worth writing down publicly that I think fraud is of course wrong, and is not in line with how I operate, nor with the philosophy of EA.
I endorse the sentiment but I think anyone who was planning to commit fraud would say the same thing, so I don’t think that promise is particularly useful.
Earlier this year ARC received a grant for $1.25M from the FTX foundation. We now believe that this money morally (if not legally) belongs to FTX customers or creditors, so we intend to return $1.25M to them.
It may not be clear how to do this responsibly for some time depending on how bankruptcy proceedings evolve, and if unexpected revelations change the situation (e.g. if customers and creditors are unexpectedly made whole) then we may change our decision. We’ll post an update here when we have a more concrete picture; in the meantime we will set aside the money and not spend it.
We feel this is a particularly straightforward decision for ARC because we haven’t spent most of the money and have other supporters happy to fill our funding gap. I think the moral question is more complex for organizations that have already spent the money, especially on projects that they wouldn’t have done if not for FTX, and who have less clear prospects for fundraising.
(Also posted on our website.)
ARC returned this money to the FTX bankruptcy estate in November 2023.
I really appreciate the content and tone of this; I want us to have a lot of integrity in our responses, and keep cultivating it.
(Edited to add a month later: I really liked the intensity and the connection to consequentialist reasons to care about deontological and virtue-ethical considerations. I have updated that there was a sweepingness to this post I might not endorse, and I suspect I got swept up in appreciation that the EA community had people who were going to stand strong and condemn bad behavior, over and above the specifics of the argument made.)
Assuming fraud occurred: the harder question is whether those who received funding have an obligation to return it, at least under some circumstances. Verbally condemning fraud is a rather low bar; presumably few would openly defend any fraudulent behavior that occurred. But some people may be holding grants funded by fraud, and any future avoidable spending of those funds could be seen as condoning the fraud.
I think this point is really important. Statements like those mentioned in the post are important, but now that FTX doesn’t look like it’s going to be funding anyone going forward, they are also clearly quite cheap. The discussion we should be having is the higher stakes one, where the rubber meets the road. If it turns out that this was fraudulent, but then SBF makes a few billion dollars some other way, do we refuse that money then? That is the real costly signal of commitment, the one that actually makes us trustworthy.
For those interested in further discussion of this, with links out to other posts, see the question Under what conditions should FTX grantees voluntarily return their grants? by sawyer.
I think the utilitarian arguments you presented are quite strong, such as precommitting to certain principles being very advantageous, but surely they’re not infinitely advantageous, right? A few billion is quite a lot.
TLDR because I got long-winded: If you ever find yourself planning to commit some morally horrible act in the name of a good outcome, stop. Those kinds of choices aren’t made in the real world; they are a thought exercise (normally a really stupid one, too).
Long version:
Sorry that you got downvoted hard, keep in mind that knee-jerk reactions are probably pretty strong right now. While the disagrees are justified, the downvotes are probably not (I’m assuming this is a legit question.)
I’m constantly looking to learn more about ethics, philosophy, etc and I recently got introduced to this website: What is Utilitarianism? | Utilitarianism.net which I really liked. There are a few things that I disagree with or feel could have been more explored, but I think it’s overall good.
To restate and make sure that I understand where you’re coming from, I think that you’re framing the current objections like a trolley problem, or its more advanced version the transplant case. (Addressed in 8. Objections to Utilitarianism and Responses – Utilitarianism.net second paragraph under “General Ways of Responding to Objections to Utilitarianism”) if I was going to reword it, I would put it something like this:
“When considered in large enough situations, the ideal of precommitment would be swamped by the potential utility gains for defecting.”
This is the second response commonly used in defense of the utilitarian framework “debunk the moral intuition” (paragraph 5 in the same chapter and section.)
I believe, and I think most of us believe that this isn’t the appropriate response (to this situation) because in this case, the moral intuition is correct. Any misbehavior on this scale results in a weaker economic system, harms thousands if not millions of people, and erodes trust in society itself.
A response you might make would be something like “but what if the stakes were even higher?”
And I agree, it would be pretty ridiculous if after the Avengers saved NYC from a chitauri invasion someone tried to sue the Hulk for using his car to crush an alien or something. We would all agree with you there, the illegal action (crushing a car) is justified by the alternative (aliens killing us all.)
The problem with that kind of scale, however, is that if you ever find yourself in a situation where you think “I’m the only one that can save everyone, all it takes is ‘insert thing that no one else wants me to do.’” stop what you’re doing and do what the people around you tell you to do.
If you think you’re Jesus, you’re probably not Jesus. (or in this case the Hulk.)
That’s why the discussions of corrupted hardware and the unilateralist’s curse (links provided by OP) are so important.
For more discussion on this you can look in Elements and Types of Utilitarianism – Utilitarianism.net “Multi-level Utilitarianism Versus Single-level Utilitarianism.”
One must-read section says that “In contrast, to our knowledge no one has ever defended single-level utilitarianism, including the classical utilitarians.26 Deliberately calculating the expected consequences of our actions is error-prone and risks falling into decision paralysis.”
I would encourage you to read that whole section (and the one that follows it if you think much of rule utilitarianism) as I think one of the most common problems with most people’s understanding of utilitarianism is the single-level vs multi-level distinction.
…is what you tell yourself before you get exposed for committing massive fraud, costing far more billions than you ended up with.
If SBF did commit fraud, it looks like he did it to keep Alameda from going bankrupt. If that’s the case, he ended up destroying billions of dollars of potential donations from FTX. “Take a risk that might let you earn a billion dollars illegally” and “Make 0 dollars” are not your only options here! You could have taken not-illegal risks that might have won big instead. Those tend to have higher EV.
Even if they weren’t infinitely advantageous, it seems like you’d have to be unrealistically sure that you can get away with shadiness and no bad consequences before risking it. If the downsides of getting caught are bad enough, then you can never be sufficiently confident in practice. And if the downside risk of some action isn’t quite as devastating as “maybe the entire EA movement has its reputation ruined,” then it might anyway be the better move to come clean right away. For instance, if you’re only 0.5 billion in the hole out of 30 billion total assets (say), and you’ve conducted your business with integrity up to that point, why not admit that you fucked up and ask for a bailout? The fact that you come clean should lend you credibility and goodwill, which would mitigate the damage. Doubling down, on the other hand, makes things a lot worse. Gambling to get back multiple billions really doesn’t seem wise because if it was risk-free to make billions then a lot more people would be billionaires…
In any case, faced with the choice of whether to precommit to always act with integrity, it’s not necessary for the pro-integrity arguments to be “infinitely strong.” The relevant question is “is the precommitment better in EV or not?” (given the range of circumstances you expect in your future). And the answer here seems to be “yes.” (Somewhat separately, I think people tend to underestimate how powerful and motivating it can be to have leadership with high integrity – it opens doors that would otherwise stay closed.)
You might say “That’s a false dilemma, that choice sounds artificially narrow. What if I can make a sophisticated precommitment that says I’ll act with integrity under almost all circumstances, except if the value at stake is (e.g.) 100 billion and I’m ultra-sure I can get away with it?” Okay, decent argument. But I don’t think it goes through. Maybe if you were a perfect utilitarian robot with infinitely malleable psychology and perfect rationality, maybe then it would go through. Maybe you’d have some kind of psychological “backdoor” programmed in where you activate “deceitful mode” if you ever find yourself in a situation where you can get away with >100 billion in profits. The problem though, in practice, is “when do you notice whether it’s a good time to activate ‘deceitful mode’?” To know when to activate it, you have to think hypothetically-deceitful-thoughts even earlier than the point of actually triggering the backdoor. Moreover, you have to take actions to preserve your abilities to be a successful deceiver later on. (E.g., people who deceive others tend to have a habit of generally not proactively sharing a lot of information about their motives and “reasons for acting,” while high-integrity people do the opposite. This is a real tradeoff – so which side do you pick?) These things aren’t cost free! (Not even for perfect utilitarian robots, and certainly not for humans where parts of our cognition cannot be shut off at will.) In reality, the situation is like this: you either train your psychology, your “inner elephant in the brain,” to have integrity to the very best of your abilities (it’s already hard enough!), or you do not. Retaining the ability to turn into a liar and deceitful manipulator “later on” doesn’t come cost-free; it changes you. If you’re planning to do it when 100 billion are at stake, that’ll reflect on how you approach other issues, too. (See also my comment in this comment section for more reasons why I don’t think it’s psychologically plausible for people to simultaneously be great liars and deceivers but also act perfectly as though they have high integrity.)
I think this post is my favourite for laying out why a really convincing utilitarian argument for something which is common-sense very bad shouldn’t move you. From memory Eliezer says something like ~Thinking there’s a really good utilitarian argument doesn’t mean the ends justify the means, it just means your flawed brain with weird motivations feels like there’s a really good utilitarian argument. Your uncertainty in that always dominates and leaves room for common sense arguments, even when you feel really extra super sure. Common sense morality rules like “the ends shouldn’t justify the means” arose because people in practice are very miscalibrated about when the ends actually justify the means, so we should take the outside view and assume we are too.~
(By miscalibrated I think I could defend a claim like “90% of the time, when people think the ends definitely justify the means and this clashes with common sense morality, they are wrong.”)
I might be butchering the post though so you should definitely read it.
https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans
I agree that the arguments people are making here don’t seem very scope-sensitive, and I’m not about to make a very scope-sensitive argument either. However, it’s worth considering the possibility that the damage to the community and public trust in EA could be greater. How many potential EAs do we lose out on? How much harder is it to engage politically and with institutions? We’ve been having a hard time spending money well, at least within longtermism, so the extra donations (including extra donations had they not been caught) were plausibly worth much less per dollar than the marginal grants. Poor public perception will make using our resources effectively harder going forward.
To the people voting ‘disagree’, what OP said above is clearly true. Perhaps people are taking it to imply that the utilitarian course of action here is correct, but I see no such implication.
I think a better forum norm would be for someone to comment spelling out the perceived implication and why they disagree with it, and have other people upvote that.
I’ve been repeatedly astonished by the level of moral outrage amongst EAs and expressions of prior cluelessness over FTX ‘fraud’. As an EA newcomer, I was assuming most everyone was aware and okay with it “because consequentialism”. Ignoring the specific egregious act of asset misallocation that brought FTX down, I thought it’s been more or less common knowledge that FTX has been engaged in, at a bare minimum, knowingly facilitating the purchase and sale of shares in Ponzi schemes, and that Alameda has been trading in the same, against counterparties made up in large part by a population of people who did not understand these assets and would have lacked the financial sophistication to be allowed into other, better regulated leveraged financial markets. I say ‘knowingly’ because SBF all but admitted this (with regard to ‘yield farming’) in an interview, and there’s also an old video going around of the Alameda CEO expressing her initial discomfort with the schemes as well. I was aware of these schemes going on within maybe 1 week of first having heard of FTX & SBF back in May of this year. My immediate take was “Billionaire ‘Robin Hood’ figure is re-allocating wealth from crypto-bros to the global poor, animals, and the longterm future of humanity… eh, why not? But I sure hope he cashes out before the house of cards comes crashing down”.
The few times I mentioned any of this at a gathering, it was always met by something along the lines of “Yeah, I guess… meh”. It never seemed to be a particularly surprising or contentious take.
The other thing that’s weird to me is the idea of taking this firm stance that the Ponzi schemes we did know about weren’t going over the line, but that ‘re-investing customer funds’ was going over the line. This feels like a fairly arbitrary place to draw the line between “eh, whatevs” on one side and “this is an outrage!” on the other. It’s convenient that the title of this post uses the term ‘fraud’ rather than ‘theft’; that makes this criticism much easier to levy, because Ponzi schemes are by definition ‘fraud’. In both cases, people are being taken advantage of. Both are against norms, both involve misleading customers, both involve customers losing a lot of money, and both are illegal within well-regulated financial markets (which I know crypto is not, but still).
All of this to say… I don’t think now is the time for handwringing about this; that time was many months ago for anyone who had a principled stance on the matter and was aware of the DeFi schemes FTX was openly involved in. Handwringing now comes off sort of as lamenting getting caught, with an after-the-fact rationalization for the arbitrary placement of the line that was crossed.
To be fair, I can’t moralize about this either; I don’t get to say “I told you so” because I didn’t tell many people so, and certainly not anyone in a position of authority to do anything about it. Personally, I didn’t have a principled stance on the matter, and I would have needed a quite strong principled stance to justify going against the social incentives for keeping that opinion to myself.
On the other question of the day, whether to give the money back: if you’re in the subset who were aware of the FTX DeFi shenanigans and weren’t lobbying for giving back or rejecting the money 3-6 months ago, little has materially changed about the issue on a moral level since then.
EA Forum moderators: If you strongly believe this post is net-negative for EA, please delete it.
I do see a significant moral difference between allowing people to make potentially risky decisions and deceiving them about how much risk is involved. As an exchange, FTX was theoretically just serving to coordinate buyers and sellers who wanted to transact in the first place. If you believe that at least a portion of crypto is merely volatile and not fraudulent, then you’re just facilitating risky decisions, not scamming people. Doubly so if you believe even a tiny subset of DeFi provides net value, as many of FTX’s customers still believe.
But in practice FTX was taking much more risky behavior, without telling its users, and in fact explicitly denying that such behavior was occurring. Nobody thought it was risky to deposit USD into FTX, if you hadn’t bought any crypto. FTX assured users it wasn’t. But if you have USD sitting on the site right now, there’s a good chance you’re never getting it back. To state the obvious: that’s fraud, and it’s wrong. And I think it’s different than letting people take risks if they want to.
To be clear, I completely agree that the latter is worse than the former. I am arguing that the two wrongs (the known Ponzi schemes and the unknown-till-now squandering of depositor funds) exist on the same spectrum of “dishonesty” and “cheating people”.
That said, “allowing people to make potentially risky decisions” is not a fair representation of promoting and benefitting from Ponzi schemes. Ponzi schemes are fraud. People who knowingly promote them are acting as con men when they do so. SBF has publicly described the process and its absurdity in great detail… he knew exactly what he was selling.
I’m disturbed by the inability of many even now to acknowledge, in retrospect (and independent of whether they ‘should’ have known before the collapse), that these known schemes were fraudulent. I see a lot of scrambling to justify them under the guise of “they weren’t technically lying” or “they weren’t technically illegal” (which isn’t entirely clear to me, though it is clear that if the same schemes had been happening in the open in US jurisdiction and not within the crypto-realm they would have been massively and obviously illegal, and the FTC/SEC would have destroyed them).
This statement does not logically follow, and does not align with finance industry norms (and laws) which obligate brokers to conduct due diligence before selling a given security. If the head of NASDAQ went on the news and said “Yeah, XYG [traded on our exchange] is basically a total Ponzi scheme, lol” (as SBF basically did with Matt Levine), there would be an immediate and colossal legal and ethical shitstorm. The existence of all the remaining, legitimate companies also being traded on the NASDAQ would not be relevant for the ensuing lawsuits. You appear to be arguing that as long as SBF wasn’t dealing solely in frauds, it’s okay; whereas the sensible view for someone taking a strong moral stance is that it’s only okay if SBF wasn’t knowingly dealing in any frauds.
I agree here, since I don’t think crypto is at all viable without crimes.
Here’s a link to why I dislike crypto:
https://www.currentaffairs.org/2022/05/why-this-computer-scientist-says-all-cryptocurrency-should-die-in-a-fire
To answer for myself: I didn’t participate in the crypto and FTX discussions, so I am very afraid that my only source of knowledge about FTX is what the general public has, plus a distrustful prior over crypto that got it right here.
I agree witch hunts are bad, I agree we should collectively be extremely unambiguous in condemning fraud, and I agree focusing on individuals can be unhealthy and not the most productive.
But I do think the community should do some reflection and have a postmortem process, part of which is developing a detailed understanding of how events unfolded, so we can develop strategies for avoiding similar situations in the future.
Agree with the postmortem process. There is a reasonable chance that SBF used EA-type thinking to justify his behaviour, and we certainly celebrated him as some kind of hero. I think it is important not just to condemn fraud but also to really try to figure out whether there is stuff EA did, or advice it gives, that incentivizes this kind of behaviour.
Possibly the most visible element in EA utilitarianism is literally called “longtermism,” so I am not sure this objection is relevant to utilitarianism as practiced here.
But I understand your objection: conceivably, you could find yourself in a situation where, in your honest judgment, the very best thing you can do for the world is to commit a terrible crime.
The problem is that when people design these thought experiments, they often set it up in such a way as to make people reject the crime on utilitarian grounds. For example, I’m sure you’ve heard the surgeon example—should a surgeon kill one healthy patient to harvest their organs and transplant them into 5 other patients to save their lives?
Most people feel this is repugnant. But the natural way to argue against it is with utilitarianism itself. If we did this, patients would flee from surgeons, even fight them. Sick people who didn’t want to have somebody murdered to save their own lives would die rather than seek medical treatment. We probably get a lot more QALYs by leaving healthy people alive than by killing them for their organs to put in people who probably have other underlying pathologies.
These are just natural, obvious consequences of trying to implement this rule. By contrast, deontological and virtue ethics objections to this practice sound weak. “Doctors SWORE AN OATH to do no harm!” “Medicine is about practicing the virtue of beneficence!” Those sound like slogans.
Utilitarianism may, in specific and, for all practical purposes, exceedingly rare circumstances, cause somebody to do something awful to achieve a good outcome. But at all other times, utilitarianism motivates you to work as hard as you can to avoid ever being put in such circumstances in the first place.
I think there are additional factors that make classical total utilitarians in EA more likely to severely violate rules:
x-risk mitigation has close to infinite expected value.
And
AI timelines mean that violating rules is likely to not have harmful long-term effects.
Yes, I agree that believing the world may be about to end would tend to motivate more rules-breaking behavior in order to avoid that outcome. I’ll say that I’ve never heard anybody make the argument “Yes, AGI is about to paperclip the world, but we should not break any rules to avoid that from happening because that would be morally wrong.”
Usually, the argument seems to be “Yes, AGI is about to paperclip the world, but we still have time to do something about it and breaking rules will do more harm than good in expectation,” or else “No, AGI is not about to paperclip the world, so it provides no justification for breaking rules.”
I would be interested to see somebody bite the bullet and say:
The world is about to be destroyed.
There is one viable strategy for averting that outcome, but it requires a lot of rule-breaking.
We should not take that strategy, due to the rule-breaking, and should let the world be destroyed instead.
I doubt it’s the clarity of EA’s take on fraud that is the problem, because committing fraud has real-world consequences, including a potential jail term, which in most people’s eyes is a bigger deterrent than EA’s opinion on fraud. EA can be as clear as it wants about its normative positions, but to the extent that senior folks in the EA community are able to convince people that the fate of humanity is on the line, there will be more norm-breaking in the future. Think very hard about the entire distribution of things one can justify if one thinks that humanity is at risk of extinction. Most people will not try the extreme measures, but you should take seriously the idea that some will try even the most extreme measures, especially when conventional tactics keep failing and it seems like time is running out.
“Don’t do fraud in service of EA because it’s bad PR” (my read of what you said) is not, in fact, a condemnation of fraud. Nor is it good PR.
Hardcore utilitarians may not have a better condemnation to make, but that’s not a problem, because only a minority of EAs are actually full utilitarians, rather than having one foot in common-sense morality or other moralities, or in uncertainty.
If hardcore utilitarians can’t say something to unequivocally condemn fraud, they should leave it to those who have one foot in common-sense morality to do so.
Hardcore utilitarians can endorse a norm that says “don’t commit fraud” because they think such a norm will have better consequences than an alternative norm that says “generally don’t commit fraud unless it seems like it could achieve more good than harm”.
The former norm is likely to avoid instances of fraud, which isn’t only good because fraud can lead to bad PR, but also because a society with widespread fraud is unlikely to be a pleasant one.
So I do think hardcore utilitarians can be justified in condemning fraud in the strongest possible terms, although I accept one could debate this point.
Good point
Taking that seriously: wouldn’t it be best for EA to officially say that any fraud is bad (thus getting good PR), but at the same time to look away internally, so as not to be forced to see the fraud?
Would continuing to use the money already received count as that?
This is a perfect example of Goodhart’s law. More specifically, assuming you don’t value fraud or lying (in a moral anti-realist framework), not seeing fraud or lying does not mean that no fraud or lying is occurring.
This is a thermonuclear idea bound to fail due to Extremal Goodhart.
That’s a pretty wild misreading of my post. The main thesis of the post is that we should unequivocally condemn fraud. I do not think that the reason that fraud is bad is because of PR reasons, nor do I say that in the post—if you read what I wrote about why I think it’s wrong to commit fraud at the end, what I say is that you should have a general policy against ever committing fraud, regardless of the PR consequences one way or another.
The main thesis of your post (we should unequivocally condemn fraud) is correct, but the way you defend it is in conflict with it (by saying it’s wrong for instrumental reasons).
Here’s the PR argument:
This weakens the condemnation, by making it be about the risks of being found out, not the badness of the action.
When you explain that pre-committing to not commit fraud is an advantageous strategy, I read this as another instrumental argument.
It’s hard to condemn things unequivocally from a purely utilitarian point of view, because then all reasons are instrumental. I’m not saying your reasons are untrue, but I think that when non-utilitarians read them, they won’t see an unequivocal condemnation, but a pragmatic argument that in other contexts could be turned in defence of fraud, if the consequences come out the other way.
That said, Jack Malde’s reply to me is a pretty good attempt at unequivocal condemnation from within a utilitarian frame, because it doesn’t talk about conditions that might not hold for some instance of fraud. (But it’s not necessarily correct.)
The portion you quote is included at the very end as an additional point about how even if you don’t buy my primary arguments that fraud in general is bad, in this case it was empirically bad. It is not my primary reason for thinking fraud is bad here, and I think the post is quite clear about that.
It’s easy to say that no one should do what SBF did. If the rumours were true, there are very few ethical systems that would justify the behavior. What’s harder and more action-relevant is to specify, ahead of time and very clearly, the exact lines you find acceptable.
What is “fraud”, and how much “fraud” do we allow? You can argue that good advertisements always toe the line, and that at some point being too scrupulous seriously costs you business opportunities. Now, not gambling with customer money is, again, an obvious line far beyond cases like this.
Lots of companies toe the line of honesty, but we would never expect huge backlash; there is a level that is accepted by society.
What about anti-competitive practices and monopolistic behavior? Should we ask our founders not to profit-maximize? Should we ask them not to lobby the government for tax breaks, favorable licensing, etc.?
If so, do we feel bad about taking money from Bill Gates? Microsoft has a history of anti-competitive behavior.
What about businesses that probably harm society overall?
Facebook has probably been net-bad for society in my view—though it is hard to imagine what would exist if it were gone. Should we feel bad about taking money from Meta?
If I’m in a VC meeting trying to get funding for my startup, should I not significantly exaggerate my product? Isn’t this what everyone does while in a VC meeting?
What if we knew for sure that we could lie to consumers, make a ton of money, and there wouldn’t be any backlash for it (yes, this is probably not a real situation)? Are we not doing that? If so, we aren’t utilitarians, which is fine, but then why are we utilitarians in our cause prioritization? Seems arbitrary, etc.
Also, I would encourage people to read Elephant in the Brain, which backs up this paragraph.
Also, Goodhart’s law would appear as soon as you actually try to optimize for seeming good, when it’s not actually a good thing.
The situation at FTX is illustrative of a central flaw in utilitarianism. When you start thinking the ends justify the means, anything becomes justifiable.
Trust is so important. Doing the right thing is so important.
I don’t really know what else to say.
There is a possibility SBF committed fraud motivated directly by his own utilitarian beliefs—a charitable Ponzi scheme.
But your argument is that utilitarianism systematically generates fraud in a way alternative moral systems do not. Finding one potential bad example is nowhere near enough to justify such a claim.
I’m not sure the argument is specifically about fraud.
I think the argument is more that “when ends justify the means, you are far more likely to break norms / rules / laws”, which is a very old objection to utilitarianism and doesn’t rely on the FTX example.
No, the argument is self-contradictory in a way that your version is not. “When the ends justify the means,” only those means that are, in fact, justified by the ends become justifiable. Means that are not justified by the ends do not become justifiable.
It would be fair to say “some forms of utilitarianism license fraudulent behavior in exchange for a sufficient altruistic outcome.”
Of course, we can also say “some forms of deontology advocate we allow the world to be destroyed before we break a rule.”
I don’t think either line of argument leads to productive moral debate.
Right, but utilitarianism has a lower bar for deciding that means are justifiable than other ethical views do (things just need to be overall net positive, even if means are extremely harmful).
I think these weaknesses of utilitarianism and deontology are useful and given that EA contains lots of utilitarians / is closer to utilitarianism than common sense ethics / is watered down utilitarianism, I think it’s good for EAs to keep the major weaknesses of utilitarianism at the front of their minds.
Claiming this as a “weakness” of utilitarianism needs justification, and I stridently disagree with characterizing EA utilitarianism as “watered down.” It is well-thought-through and nuanced.
A weakness in the sense that it severely contradicts our intuitions on morality and severely violates other moral systems, because under classical total utilitarianism this would not only justify fraud to donate to AI safety, it would justify violence against AI companies too.
(I understand that not everyone agrees that violating moral intuitions makes a moral system weaker, but I don’t want to debate that because I don’t think there’s much point in rehashing existing work on meta-ethics).
I mean that EA is watered-down classical utilitarianism.
I don’t think that’s bad because classical utilitarianism would support committing fraud to give more money to AI safety, especially with short AI timelines. And my understanding is that the consensus in EA is that we should not commit fraud.
I will try to read everyone’s comments and the related articles that have been shared. I haven’t yet, but I’m going on a trip today — I may have time on the way.
To be clear: I am against utilitarianism. It is not my personal value system. It seems like an SBF-type-figure could justify any action if the lives of trillions of future people are in the balance.
The utilitarians who aren’t taking radical actions to achieve their ends just have a failure of imagination and ambition relative to SBF.
This doesn’t seem specific to utilitarianism. I think most ethical views would suggest that many radical actions would be acceptable if billions of lives hung in the balance. The ethical views that wouldn’t allow such radical actions would have their own crazy implications. Utilitarianism does make it easier to justify such actions, but with numbers so large I don’t think it generally makes a difference.
Even if other views in fact have the same implications as utilitarianism here, it’s possible that the effects of believing in utilitarianism are particularly psychologically pernicious in this sort of context. (Though my guess is that the psychologically important things are just taking high stakes seriously, a lack of risk aversion, and being prepared to buck common sense, and that those are correlated with believing utilitarianism but mostly not caused by it. But that is just a guess.)
‘The utilitarians who aren’t taking radical actions to achieve their ends just have a failure of imagination and ambition relative to SBF.’ Quite clearly, though, this has blown up in SBF’s face. Maybe the expected value was still good, but it’s entirely possible that the (many) utilitarians who think bucking conventional morality and law to this degree nearly always does more harm than good are simply correct, in which case utilitarianism itself condemns doing so (at least absent very strong evidence that your case is one of the exceptions).
Thanks for this post. Wondering if ‘earning to give’ advice should be updated to argue more clearly for the (most) ethical ways to earn, instead of just the most ethical ways to give. To me it seems like a lot of the fastest ways to make money can be unethical (which should bother us as EAs more than usual) or outright fraudulent, so arguing for making as much money as possible can incentivize behaviour like this (in that sense I do think EA bears some responsibility for Sam’s behaviour). I love giving what you earn, just not trying to maximize what you earn unless it is done by doing good.
Surely it’s at least implied that people shouldn’t earn to give through fraud/criminal behaviour?
It’s more than implied, e.g. https://80000hours.org/articles/harmful-career/
Edit: removed a quote to encourage people to skim the full article
Thanks for the responses. I was not aware of the article on harmful careers and think it is very good (I recognize that many of these issues are hard, so even though I might be a bit more skeptical about some of these high-paying jobs and examples, I could easily be wrong). Thanks for bringing it to my attention; it shows that some of my criticism was a bit misguided.
Maybe ‘vast majority of cases’ is too ambiguous or allows too much wiggle room for SBF-alikes.
Good point, I removed the quote. The article is pretty nuanced and I don’t think I was doing it justice by quoting just two sentences.
I wish you kept the quote. The effect is that I didn’t read the quote and did not read the article.
Independent audits should be instituted in EA organizations, especially those handling tremendous amounts of resources or funds, so that fraud can be avoided in the future...
As far as I can tell there is no reason to condemn fraud, but not the stuff SBF openly endorsed, except that fraud happened and hit the “bad” outcome.
From https://conversationswithtyler.com/episodes/sam-bankman-fried/
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.
One of my friends literally withdrew everything from FTX after seeing this originally, haha. Pretty sure the EV on whatever scheme occurred was higher than 51/49, so it follows....
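For concreteness, here is a minimal sketch (hypothetical numbers, Python purely for illustration) of the arithmetic behind Cowen’s objection: the expected value of repeatedly playing the 51/49 double-or-nothing game compounds upward, but the probability of keeping anything at all collapses toward zero.

```python
p_win = 0.51  # chance of doubling; lose everything otherwise

def ev_multiplier(n_rounds):
    # Per-round EV is 0.51 * 2 + 0.49 * 0 = 1.02, so it compounds upward.
    return (p_win * 2) ** n_rounds

def p_keep_anything(n_rounds):
    # You only keep anything if you win every single round.
    return p_win ** n_rounds

for n in (1, 10, 50, 100):
    print(f"{n:>3} rounds: EV x{ev_multiplier(n):7.2f}, "
          f"P(not losing everything) = {p_keep_anything(n):.2e}")
```

After 100 rounds the expected multiplier is over 7x, but the chance of not having lost everything is on the order of 10^-30.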
That’s so interesting, I listened to this interview but don’t remember this answer, I don’t know if I stopped paying attention or just didn’t find it noteworthy. Definitely something to reflect on if it’s the latter.
Evan do we really have enough information to conclude this? The only real pieces of information I am aware of is that (1) binance declined to acquire, (2) Alameda owned a lot of FTT, (3) SBF’s tweets from yesterday.
I don’t think that merely lending out deposits is ‘fraudulent in spirit’. That’s standard operating procedure in ordinary banking. For example, in Vanguard terms of service:
> The Program Banks will use Your Sweep Deposits in the Omnibus Accounts to support their investment lending and other activities. [...] Program Banks will receive substantial deposits from the Bank Sweep at a price that may be less than alternative funding sources. Sweep Deposits in the Omnibus Accounts held at a Program Bank provide a stable source of funds for such bank.
FTX has been accused of much worse than merely lending out depositors’ funds, but I’m not aware of any real information about these further claims.
I realise I have 18 hours more information at hand, but I think yes, we can conclude this with high confidence:
SBF claimed FTX had enough cash to cover all withdrawals and FTX US was totally fine (tweets deleted; see https://cointelegraph.com/news/ftx-founder-sam-bankman-fried-removes-assets-are-fine-flood-from-twitter).
Now they are both in bankruptcy proceedings, along with Alameda (https://storage.courtlistener.com/recap/gov.uscourts.deb.188448/gov.uscourts.deb.188448.1.0.pdf). Several executives SBF reached out to to discuss bailouts have shared that the deposit shortfall is $5-10b (can’t find the link any more, but I’ve seen this claimed by several people). SBF has resigned.
$200M-1B of FTX’s reserves have been withdrawn after bankruptcy filing. FTX claims they were hacked. (https://www.coindesk.com/business/2022/11/12/ftx-crypto-wallets-see-mysterious-late-night-outflows-totalling-more-than-380m/).
I don’t think the bank analogy is super accurate, because fractional reserve banking is heavily regulated: you can only loan so much, you’re restricted in how risky these loans can be, and you have the FDIC backstopping deposits in the case of crises/fraud. On the other hand, it seems very likely FTX violated their own ToS to loan most of their reserves to SBF’s insolvent crypto prop shop. There’s no backstop and no accountability
FTX’s terms of service did not allow for this. The deposits were “lent” to plug a hole at a corporation owned by SBF. Vanguard is talking about sending certain monies, not your core investment, to a heavily-regulated entity which posed very low risk. That’s OK in my book if disclosed.
They were an exchange, not a bank, so this still is bad.
I strongly agree with the spirit of this post, and strong upvoted it. However I would criticise this...
… as being too abstract, both in the sense that to a lay person it could sound manipulative, like we’re only saying this for PR reasons, and in the theoretical sense that it’s a murky concept at best, and arguably nonsensical. Any con artist can assert ‘precommitment’ as a statement of intent as easily as they can assert any other kind of intent—the only thing that could prove intent is making a physically inescapable commitment, which the EA community has no way of doing here.
Eh, we can always have it be an acausal deal to some simulator via multiverse cooperation and use some game theory to actually make sure that they won’t cheat, after all...
As long as they are rational.
There’s a literature on this.
Insofar as it does, one reason is moral parliament. Utilitarianism “plays nice” with other moralities: if we are unsure of the correct moral theory, utilitarianism advocates we hedge our risk of a fundamental moral wrong by giving some credence to other value systems.
Let’s say we’re faced with a choice of situations, A and B. They are of exactly equal utility from a “simple utilitarian” standpoint of weighing up the materialistic utilons. However, A is morally fraught from a deontological standpoint, while B is not. Utilitarianism would not say that these two situations are of equal moral weight. It would say that, since we can’t be sure utilitarianism is right, we ought to strongly prefer situation B, which is compatible with deontological ethics as well.
But I’d also say that my point wasn’t that utilitarianism wants us to run from controversy. It motivates us to prefer strictly better situations over strictly worse ones. A situation in which reward X can only be obtained by incurring a (lesser) cost C is strictly worse than a situation in which we can obtain X without C. Utilitarianism motivates such a search.
Honestly, sounds like you are taking a utilitarian approach to evaluating other people’s ethical schemes. If it seems “better,” you think it is better. Quite logical, and quite properly utilitarian. If it would produce more utils for us to all forget utilitarianism even existed and take up virtue ethics, that is what utilitarianism would advocate we do.
I want to agree with this, but I think that if SBF had “gotten away with it” we’d have taken his money, which makes me doubt our sincerity here. It sounds a lot more like “don’t get caught doing fraud”
Utilitarianism factors in uncertainty, moral and epistemic. Sure, if you can find a way to criticize factoring in uncertainty into utilitarianism, I’m all ears! But of course, then whatever was the superior solution is what utilitarianism recommends as well. Utilitarianism is best thought of as something engineered, not given.
I’ve always heard of moral parliament as being primarily about an individual reconciling their own different moral intuitions into a single aggregate judgment. Never heard it used in the sense you’re describing. Here’s Newberry & Ord, which is clearly about reconciling one’s own diverse moral intuitions, rather than a way of aggregating the moral judgments of a group.
It does seem helpful to have a term for aggregating moral judgments of multiple people, but “moral parliament” is already taken.
I was going to keep arguing, but I wanted to ask—it seems like you might be concerned that utilitarianism is “morally unfalsifiable.” In general, my own argument here may convey the idea that “whatever moral framework is correct is utilitarian.” In which case, it’s only tautologically “true” and doesn’t provide any actual decision-making guidance of its own. I don’t think this is actually true about utilitarianism, but I can see how my own writing here could give that impression. Is this getting at the point you’re making?
Can you please explain how utilitarianism factors in moral uncertainty?
As far as I’m aware it has nothing to say on the matter.
I’ll have to think about that. I’ve been working on a response, but on consideration, perhaps it’s best to reserve “utilitarianism” for the act of evaluating world-states according to overall sentient affinity for those states.
Utilitarianism might say that X is bad insofar as people experience the badness of X. The sum total of badness that people subjectively experience from X determines how bad it is.
Deontology would reject that idea.
And it might be useful to have utilitarianism refuse to accept that “deontology might have a point,” and vice versa.
I don’t think talking about fraud right now is a good move. If somebody asks you whether EAs should do fraud, of course your answer should be an unqualified ‘no’. But if you bring it up, you are implying that SBF actually did fraud, which (1) may not be true and (2) is bad PR.