Who’s at fault for FTX’s wrongdoing
Caroline Ellison, co-CEO and later CEO of Alameda, had a now-deleted blog, “worldoptimization” on Tumblr. One does not usually post excerpts from deleted blogs—the Internet has, of course, saved it by now—but it looks like Caroline violated enough deontology to be less protected than usual in turn, and also I think it’s important for people to see what signals are apparently not reliable signs of honesty and goodness.
In a post on Oct 10 2022, Caroline Ellison crossposted her Goodreads review of The Golden Enclaves, book 3 of Scholomance by Naomi Novik. Caroline Ellison writes, including very light / abstract spoilers only:
A pretty good conclusion to the series.
Biggest pro was the resolution of mysteries/open questions from the first two books. It wrapped everything up in a way that felt very satisfying.
Biggest con was … I think I felt less bought into the ethics of the story than I had for the previous two books?
The first two books often have a vibe of “you can either do the thing that’s easy and safe or you can do the thing that’s hard and scary but right, and being a good person is doing the right thing.” And I’m super on board with that.
Whereas if I had to sum up the moral message of the third book I might go with “there is no ethical consumption under late capitalism.”
For someone like myself, this is a pretty shocking thing to hear somebody say, on a Tumblr blog not then associated with their main corporate persona, not in a way that sounds like the usual performativity, not like it’s meant to impress anybody (because then you’re probably not writing about anything as undignified as fantasy fiction in the first place). It sounds like—Caroline might have been under the impression, as late as Oct 10, that what she was doing at FTX was the thing that’s hard and scary but right? That she was doing, even, what Naomi Novik would have told her to do?
The Scholomance novels feature a protagonist, Galadriel Higgins, with unusually dark and scary powers, with a dark and scary prophecy about herself, trying to do the right thing anyways and being misinterpreted by her classmates, in an incredibly hostile environment.
The line of causality seems clear—Naomi Novik, by telling her readers to do the right thing, probably contributed to Caroline Ellison doing what she thought was the right thing—misusing Alameda’s customer deposits. Furthermore, the Scholomance novels romanticized people with dark and scary powers, and those people not just immediately killing themselves in the face of a prophecy that they’d do immense harm later, i.e., sending the message that it’s okay for them to take huge risks with other people’s interests.
I expect this to be a very serious blow to Naomi Novik’s reputation, possibly the reputation of fantasy fiction in general. The now-deleted Tumblr post is tantamount to a declaration that Caroline Ellison was doing this because she thought Naomi Novik told her to. We can infer that probably at least $30 of Scholomance sales are due to Caroline Ellison, and with the resources that Ellison commanded as co-CEO of Alameda, some unknown other fraction of Scholomance’s entire revenues could have been due to phantom purchases that Ellison funded in order to channel customer deposits to her favorite author.
My moral here? It can also be summed up in an old joke that goes as follows: “He has no right to make himself that small; he is not that great.”
The best summary of the FTX affair that I’ve read so far is Milky Eggs’s “What Happened at Alameda Research?” If you haven’t read it already, and you’re at all interested in this affair, I recommend that you go read it right now.
Drawing on various sources, including some allegedly shared by ex-FTX employees (and including some comments posted by those employees to the Effective Altruism forum), Milky Eggs pieces together a harrowing story of how Alameda Research probably lost in excess of $15 billion. Primary causative factors:
Their actual arb strategies stopped working, and were frog-boilingly gradually replaced with long bets on crypto that paid out during the boom and exploded during the bust;
Poor accounting, possibly just no truly global accounting or sense of where the money was going;
Excessive use of stimulants, including those known to result in compulsive gambling behavior;
A corporate acquisitions spree, possibly partially motivated by buying up corporate entities that held the FTT token and could have tanked the market by dumping it, maybe even raiding those companies for their own customer deposits;
A general lack of spending discipline: for example, buying naming rights to the e-sports organization TSM for $210M, which was way out of line with comparable deals in e-sports.
Completely missing from Milky Eggs’s account: Any mention of effective altruism, except that the EA Forum is listed as a source for some of their alleged-ex-FTX-employee accounts.
Why?
Because—and I say this meaning it gently, and with kindness—you were not that fucking important.
The amount that FTX spent on e-sports naming rights for TSM was greater than everything they donated to effective altruism.
Can you imagine how you’d judge it if, rather than my writing it as a joke, Naomi Novik had gone online and sincerely tried to accept blame for FTX’s fall, because she thought she hadn’t been careful enough to put messages about good corporate governance and careful accounting into her fantasy novels, and Novik had talked about how she was planning to donate an appropriate portion of her Scholomance book royalties back to FTX’s ruined customers? Depending on her state of mind, you might either try to gently console her and somehow get her to realize that she was being way too scrupulous and might possibly want to try standard meds for OCD at some point; or, on another hypothesis about Novik’s state of mind, you might try to gently explain that she’s not the center of the universe and that this wasn’t mostly about her.
This would be true even if Sam Bankman-Fried himself had presented as a Naomi Novik fan, if he had told others that he wanted to be a Novik-style DoTheRightThingist just like Galadriel Higgins the Scholomance protagonist, and he had funneled $140M to causes having to do with things that were on-theme for some of Novik’s books. The $140M would still be less than FTX had spent on e-sports naming rights. SBF calling himself a Novikian RightThingist would not have been much of a factor in why he was trusted, compared to their claims of being the first GAAP-audited crypto exchange and so on.
There probably would be some sort of weird blowup in the Novik fandom, in that case; it would make more sense for them to wonder if they were responsible. But I’d expect people in the Novik fandom to also vastly overestimate how much it was all about them, in that case; because they would know all about Novik, but have less daily exposure to the much wider world in which FTX operated. They’d have heard about the money donated to RightThingism but not about the e-sports naming rights. They would not realize that there were other and bigger fish in the pond.
(Be it clear, I’m not analogizing myself to Novik in that metaphor. I’m analogizing Peter Singer and classical GiveWell-style EA to Novik. I asked SBF if he wanted to meet with me ever, he never got around to it, I do not think he was a Yudkowsky fan and he hung out with some EAs who definitely weren’t.)
(ADDED: I am not saying that EA influence on Alameda was comparable in magnitude to Novik’s influence on Caroline Ellison; I am giving an example of the mental motion of trying to grab too much responsibility because you don’t know about all the parts of the universe that aren’t yourself.)
It wouldn’t, even, reflect all that badly on the spirit running across many fantasy novels of RightThingism. Not just because “no true Scotsman”, not even because SBF would have really actually missed the point of fantasy-novel RightThingism. But because the amount that FTX spent on e-sports naming rights vs the amount they gave to RightThingist causes, and how they didn’t take a billion off the table for RightThingism while they still had a billion, maybe belied a bit the idea that RightThingism was in fact that central to their mental lives.
Also Milky Eggs’s account says that FTX’s own employees were encouraged to keep all their salaries on the exchange, which… I don’t really have words. It’s not—what you’d expect somebody to do if they still had even fantasy-novel RightThingism inside them. The Milky Eggs account says that Caroline Ellison was one of four FTX employees who knew. I wish I had a reliable printout of what Caroline Ellison was actually thinking at the time she wrote that Tumblr post. I would bet that, even without the benefit of hindsight on how it turned out, Naomi Novik wouldn’t have agreed with it at the time.
And whatever Caroline Ellison was thinking when she wrote that, it is obvious—when you look at it from safely outside—that it wasn’t Naomi Novik’s fault.
If Caroline Ellison had worn a Naomi Novik T-shirt and put the Scholomance books in her Twitter profile and told her crypto clients “Trust me, I read fantasy novels and I know what the Right Thing is,” it would still not have been Naomi Novik’s fault.
It wouldn’t have been the fault of the abstract concept of “you can either do the thing that’s easy and safe or you can do the thing that’s hard and scary but right, and being a good person is doing the right thing”. Plenty of people have read fantasy novels like that and not wrecked depository institutions. Not just in terms of moral responsibility, but actual causality, I’d be surprised if that was really in actual fact a key driver in the decisions that Caroline Ellison made; maybe she used that to rationalize that afterwards, but I doubt it’s what was going through her mind on the fatal day that FTX used customer deposits to pay back Alameda creditors (if that’s in fact when FTX first touched customer deposits). Pride did it, I’d sooner guess, or the desire to not not not be in this universe going so badly and taking the only step that preserved the feeling that everything could still be okay.
Who’s at fault for FTX’s wrongdoing?
FTX.
Ask a simple question, get a simple answer.
You have no right to blame yourself any more than that. You weren’t that important.
If there’s anyone other than FTX who’s really to blame, here, it’s me. I’ve written some fiction that tries to walk people through the experience of abandoning sunk costs and facing reality. Including my most recent work.
Caroline Ellison, according to her Tumblr, had even started reading it...
But her liveblogs cut out before she got very far in.
I just wasn’t a good-enough writer; I lost my reader’s attention, and with it, perhaps, the world.
Now, some people might say here: “But Eliezer, aren’t you co-writing that story with another author?” And to this I can only reply: I see no reason why the existence of any other people in the universe ought to detract from my own sole accountability for everything that anyone does inside it.
DM conversation I had with Eliezer in response to this post. Since it was a private convo and I was writing quickly, I had somewhat exaggerated in a few places, which I’ve now indicated with edits.
Commending Habryka for being willing to share about these things. It takes courage, and I think reflections/discussions like this could be really valuable (perhaps essential) to the EA community having the kind of reckoning about FTX that we need.
Great points, all. Even if most people could do nothing and Sam was not motivated by a core problem with EA philosophy, that doesn’t mean there was nothing that EAs close to the situation could have done differently. I would love to see a public airing of what genuine evidence people think they might have had that should have changed those people’s behavior around Sam.
Assuming that this means that the FTX leadership is friends with prominent EAs, I think that this fact raises some questions that many people might consider important.
For instance, I think some people might find it important to know what those friends have been doing with respect to this situation for the past week. What sort of communication have they had with the FTX leadership? Do they still feel loyalty toward SBF/Caroline/etc.? Are they in any way aiding or abetting them to commit crimes or avoid the legal or reputational consequences of their actions?
These might be dumb questions, and I apologize if so. They occurred to me because I model people as being quite likely to aid and abet with their close friends’ criminal or malicious activity, but I acknowledge that that model could be wrong and/or not very applicable to this situation.
What about the parts of EA that aren’t Peter Singer and classical GiveWell-style EA? If those parts of EA were somewhat responsible, would it be reasonable to call that EA as well?
I don’t think the analogy is helpful. Naomi Novik presumably does not claim to emphasize the importance of understanding tail risks. Naomi presumably didn’t meet Caroline and encourage her to earn a lot of money so she can donate to fantasy authors, nor did Caroline say “I’m earning all of this money so I can fund Naomi Novik’s fantasy writing”. Naomi Novik did not have Caroline on her website as a success story of “this is why you should earn money to buy fantasy books or support other fantasy writers”. Naomi didn’t have a “Fantasy writer’s fund” with the FTX brand on it.
I think it’s reasonable to preach patience if you think people are jumping too quickly to blame themselves. I think it’s reasonable to think that EA is actually less responsible than the current state of discourse on the forum suggests. And I’m not making a claim about the extent to which EA is in fact responsible for the events. But the analogy as written is pretty poor, and doesn’t really make a good case for saying EA has zero responsibility here (emphasis added):
I agree that if I, personally, had steered SBF into crypto, and uncharacteristically failed to add on a lot of “hey but please don’t scam people, only do this if you find a kind of crypto you can feel good about” I might consider myself more at fault. I even think that the Singer side of EA in fact does less talking about deontology, less writing of fiction that exemplifies the feelings and reasoning behind that deontology, less cautioning of people against twisting up their brains by chasing good ideas; on my view, the Singer side explicitly starts by trying to twist people’s brains up internally, and at some point we should all maybe have a conversation about that.
The thing is, if you want to be sane about this sort of thing, even so and regardless I think Peter Singer himself would not have approved this, would obviously not have approved this. When somebody goes that far off the rails, I just don’t see how you could reasonably hold responsible people who didn’t tell them to do that and would’ve obviously not wanted them to do that.
Given how big of a role EA apparently had in the origin of Alameda (Singh says in the Sequoia puff piece that it wouldn’t have started without EA), there very likely are many members of the community who offered more encouragement and/or didn’t give as many warnings as they should have.
I don’t know at what point fault transcends the individual and attaches to the community, but at the very least, adding up other individuals’ culpabilities in steering SBF into crypto without appropriate caution would seem to put a lot of the blame you say you personally avoid on EA as a whole.
Here are some excerpts from Sequoia Capital’s profile on SBF (published September 2022, now pulled).
On career choice:
On deciding what to do after leaving Jane Street:
On setting up the initial Japanese Bitcoin arbitrage at Alameda:
On the early days at Alameda:
On how he was thinking about future earnings:
On what differentiates FTX in crypto:
On the EA community in the Bahamas that congealed around FTX:
Following your analogy, if a fan of Novik had:
been convinced by Novik to dedicate their career to the Novikian ethic
been pointed by Novik to a promising first job in that career path
decided to leave that promising first job on the basis of Novikian reasoning, framing the question of what to do next in Novikian terms
worked with a global network of Novikians to implement an international crypto arbitrage
received seed funding from a prominent Novikian to scale up this arbitrage
exclusively hired Novikians to continue scaling the arbitrage once it started working
thought about forward-facing professional decisions strictly in terms of the Novikian ethic
used their commitment to Novikianism to garner a professional edge in their industry
used a large portion of the proceeds of their business to fund Novikian projects, overseen by a foundation staffed exclusively by elite Novikians and advised by Novik herself
fostered a community of Novikians around their lavish corporate headquarters
… then I think it would be fair to attribute some of the impact of their actions to Novikianism.
Some corrections of the Sequoia info:
I’ve never been a grad student.
I’m neither Japanese nor a Japanese citizen.
I ‘volunteered’ in the sense that people at Alameda reached out to me; I said ok and then got paid by the hour for my help.
‘(obscure, rural)’ is an exaggeration. ‘provincial’ would be a more apt adjective for the location. The main bank we used was SMBC, the second-largest bank in Japan.
‘for a fee’ sounds as if it was some sort of bribe to get them to do what we wanted. But we only paid the usual transaction fees and margin that any bank would charge.
But mostly, if https://forum.effectivealtruism.org/posts/xafpj3on76uRDoBja/the-ftx-future-fund-team-has-resigned-1?commentId=hpP8EjEt9zTmWKFRy is accurate, I’m bummed that the money I helped earn was squandered right away.
Definitely: you are obviously right and Eliezer obviously wrong about this, imho.
BUT
I do think it is hindsight bias to some degree to think that “EA” as a collective or Will MacAskill as an individual are recorded as doing something wrong, in the sense of “predictably a bad idea”, at any point in the passages you quote. (I know you didn’t actually claim that!) It’s not immoral to tell someone to found a business, so it’s definitely not immoral to tell someone to found a business and give to charity. It’s not immoral to help someone make a legal, non-scammy trade, as the anonymous Japanese EA apparently did (“buy low and sell high” is not poor business ethics as far as I know, though I’m prepared to be corrected about that by someone who actually knows finance). It’s a bit more controversial to say it’s not wrong to take very rich people’s money to do the sort of work EA charities do, but it’s certainly not obvious that it is, and nothing in the quoted passages actually shows that any individual had evidence that FTX was a bad org to be associated with. (They may well have; I’m not saying no one did wrong, I’m just saying no wrongdoing is suggested by the information quoted here.) Furthermore, “take money from rich people for philanthropy and speculative academic research” isn’t exactly a uniquely EA practice!
That leaves suggesting FTX think in utilitarian terms about maximizing, but I think it is obviously a complicated question whether that was a knowably bad idea when it was done, and depends on the details of how it was done.
Of course, there may well have been wrongdoing at some point, but we need proper investigation before we decide. And furthermore, we can’t just assume that avoiding any wrongdoing, even severe wrongdoing, that did occur would have saved the depositors SBF stole from, who are the main victims of this whole mess. My guess is that once the early decision to encourage SBF to found Alameda was made by Will, and SBF received some early help from the community, withdrawing our support later would not have done very much to prevent FTX from becoming a successful business that stole from its customers. But those early decisions are probably the least morally suspicious, in that they were taken early, when there was the least information available about the business ethics of SBF and FTX/Alameda. To repeat: I don’t think telling someone to found a business to earn to give, or helping a business make a legal, non-scammy trade, is itself immoral. (Again, I’m assuming the trade was legal and non-scammy, but very willing to be corrected!) The suspicious decision that might have been decisive was maybe “getting SBF and other FTX/Alameda high-ups to think in a utilitarian way”. But as I say, I don’t think it’s reasonable to hold that that was clearly wrong at the time.
Thanks for this comment.
I’m more interested in reflecting on the foundational issues in EA-style thinking that contributed to the FTX debacle than in ascribing wrongdoing or immorality (though I agree that the whole episode should be thoroughly investigated).
Examples of foundational issues:
FTX was an explicitly maximalist project, and maximization is perilous
Following a utilitarian logic, FTX/Alameda pursued a high-leverage strategy (Caroline on leverage); the decision to pursue this strategy didn’t account for the massive externalities that resulted from its failure
The Future Fund failed to identify an existential risk to its own operation, which casts doubt on their/our ability to perform risk assessment
EA’s inability and/or unwillingness to vet FTX’s operations (lack of financial controls, lack of board oversight, no ring-fence around funds committed to the Future Fund) and SBF’s history of questionable leadership points to overeager power-seeking
MacAskill’s attempt to broker an SBF <> Elon deal re: purchasing Twitter also points to overeager power-seeking
Consequentialism straightforwardly implies that the ends justify the means at least sometimes; protesting that the ends don’t justify the means is cognitive dissonance
EA leadership’s stance of minimal communication about their roles in the debacle points to a high weight placed on optics / face-saving (Holden’s post and Oli’s commenting are refreshing counterexamples though I think it’s important to hear more about their involvement at some point too)
Sounds right to me!
I agree with Eliezer that a lot of EAs are over-blaming EA for the FTX implosion, based on the facts currently known. But the Scholomance case is obviously a lot weaker than the EA case in real life, and this is a great summary of why.
The point is not “EA did as little to shape Alameda as Novik did to shape Alameda” but “here is an example of the mental motion of trying to grab too much responsibility for yourself”.
Fair!
This seems to be a false equivalence. There’s a big difference between asking “did this writer, who wrote a bit about ethics and this person read, influence this person?” vs “did this philosophy and social movement, which focuses on ethics and this person explicitly said they were inspired by, influence this person?”
I agree with you that the question
has the answer
But the question
Is nevertheless sensible and cannot have the answer FTX.
Couldn’t agree more strongly.
The inferential jump from someone reading a book in their spare time, making a pretty superficial Goodreads review about a main takeaway, to
Is a pretty big one, and kinda egregious honestly.
Agree, we shouldn’t give a pass to irrational (frankly, egocentric) thinking just because it feels like taking responsibility.
I feel especially irritated with people who are ready to change their entire utilitarian philosophy just because someone associated with us (probably) committed a major crime and got caught, as if they didn’t understand last week that they lived in a world where surprises like that can happen. I don’t understand how else they could update their moral philosophy so fast based on the info we have.
Maybe they weren’t familiar with the overwhelming volume of previous historical incidents, hadn’t had their brains process history or the news as real events rather than mythology, or were genuinely unsure about how often these sorts of things happened in real life rather than becoming available on the news. I’m guessing #2.
I agree that this is pretty weird. There were presumably a bunch of historical contingencies that went into whether the FTX implosion occurred; it seems weird if we should endorse some moral philosophy X in the world where all those contingencies occurred, and some different moral philosophy Y in the world where not all of those contingencies occurred.
And it also seems weird if we should endorse the same moral philosophy in both worlds, but this one data point—an important data point EV-wise, but still a single event, historical contingency and all—is crucial evidence about such a high-level proposition. Evidence that we somehow didn’t acquire via looking at the entirety of human history, the entire psychology and sociology literature, etc.
The least-weird versions of this update I can imagine are:
“This isn’t a large update about high-level questions like that, but it’s at least an interesting case study. We shouldn’t treat it as a huge deal evidentially, but having a Schelling case study we can all drill down on is still a useful exercise, since we usually don’t take the time to be this thorough.”
“This is a large update for me, exactly because my perspective on the world is heavily influenced by things like the status hierarchies I perceive, which things are seen as socially acceptable or unacceptable, which people I personally like or dislike, etc. Events that cause a realignment in the status hierarchy are a bit like taking antidepressants, and observing that some of my world-models change when I’m on the antidepressants.
There’s no a priori guarantee that my epistemics are more accurate on antidepressants versus off them; but having the extra vantage point can help me reflect on these two perspectives, and it’s not weird if I end up deciding that one vantage point is better than the other, and thereby updating my object-level world-models to better match that vantage point.”
There’s also a 3rd option—we should have been updating based on what was already talked about re SBF before the implosion (his pathological behaviour, his public statements essentially agreeing he’s running a Ponzi scheme, and people warning other people about these). So the implosion makes us realise that, in a world where FTX didn’t implode, we still should have disassociated from SBF very early on, and should be doing some soul searching about why UK EA leaders were [/are, in this hypothetical world] choosing to hype up someone with a track record of being so terrible.
I think it’s very worth reflecting on strategic decisions that were made around Sam. I just don’t think what happened is very significant to whether utilitarianism is the correct moral philosophy.
I agree that these events are separate from arguments for & against utilitarianism as a criterion of rightness. But they do undermine the viability of the act utilitarian calculus as a decision procedure. Sam seems to have thought of himself as an act utilitarian, but by neglecting to do the utilitarian calculus correctly or at all, he did massive harm, making it clear that we can’t rely on this decision procedure to avoid such harms. Instead, we need utilitarians to adopt a decision procedure that includes constraints on certain behaviour.
In practice I think utilitarians should adopt mostly a skillful combination of virtue ethics, deontic rules, and explicit calculations.
I think what the FTX case does provide some evidence for is that some fraction of smart EAs exposed to utilitarianism are prone to attempt to rely on explicit act utilitarianism, despite the warnings.
I think part of the story here is a weird status dynamic where...
1. I would basically trust some people to try the explicit direct utilitarian thing: eg I think it is fine for Derek Parfit or Toby Ord.
2. This creates some weird correlation where the better you are on some combination of (smartness/understanding of ethics/power in modelling the world), the more you can try to be actually guided by consequences
3. This can make being ‘hardcore’ consequentialist …sort of cool and “what the top people do”
4. … which is a setup where people can start goodharting/signaling on it
Yeah, I think it’s a severe problem that if you are good at decision theory you can in fact validly grab big old chunks of deontology directly out of consequentialism including lots of the cautionary parts, or to put it perhaps a bit more sharply, a coherent superintelligence with a nice utility function does not in fact need deontology; and if you tell that to a certain kind of person they will in fact decide that they’d be cooler if they were superintelligences so they must be really skillful at deriving deontology from decision theory and therefore they can discard the deontology and just do what the decision theory does. I’m not sure how to handle this; I think that the concept of “cognitohazard” gets vastly overplayed around here, but there’s still true facts that cause a certain kind of person to predictably get their brain stuck on them, and this could plausibly be one of them. It’s also too important of a fact (eg to alignment) for “keep it completely secret” to be a plausible option either.
I completely agree that a motivated person could easily believe that any decision is the right act utilitarian decision because there aren’t clear rules for determining the right act utilitarian decision and checking your answer. Totally.
But idk if it’s even fair to say Sam was using act utilitarianism as a decision procedure. It’s not clear to me if he even believed that while he was (allegedly) committing the fraud.
I totally agree. But even if we conservatively say that it’s a 50% chance that he was using act utilitarianism as his decision procedure, that’s enough to consider it compromised, because it could lead to
multiple billions of dollars of damages (edited). There are also subtler issues: if you intend to be act utilitarian but aren’t and do harm, that’s still an argument against intending to use the decision procedure. And if someone says they’re act utilitarian but isn’t and does harm, that’s an argument against trusting people who say they’re act utilitarian.
Not trying to take this out on you, but I’m annoyed by how much all this advocacy of deontology all of a sudden overlaps with covering our own asses. I don’t buy it as a massive update about morality or psychology from the events themselves but a massive update about optics.
Reposting from twitter: It’s a moderate update on the prevalence of naive utilitarians among EAs.
Expanded:
A classic problem with this debate on utilitarianism is that the vocabulary used makes a motte-and-bailey defense of utilitarianism too easy.
1. Someone points to a bunch of problems with an act consequentialist decision procedure / cases where naive consequentialism tells you to do bad things
2. The default response is “but this is naive consequentialism, no one actually does that”
3. You may wonder whether, while people don’t advocate for or self-identify as naive utilitarians … they actually make the mistakes anyway
The case provides some evidence that the problems can actually happen in practice in important enough situations to care. [*]
Also, you have the problem that sophisticated naive consequentialists could be tempted to lie to you about their morality (“no worries, you can trust me, I’m following the sensible deontic constraints!”). Personally, before the recent FTX happenings, I would have been more of the opinion “nah, this sounds too much like an example from a philosophical paper, unlikely with typical human psychology”. Now I take it as a more real problem.
[*] What I’m actually worried about …
Effective altruism motivated thousands of people to move into highly leveraged domains, with large and potentially deadly consequences—powerful AI stuff, pandemics, epistemic tech. I think that if just 15% of them believe in some form of hardcore utilitarianism where you drop integrity constraints and trust your human brain’s ability to evaluate when to be constrained and when not, it’s … actually a problem?
I’d agree with this statement more if it acknowledged the extent to which most human minds have the kind of propositional separation between “morality” and “optics” that obtained financially between FTX and Alameda.
This will be a relief if true. I am much more worried about people not having principles (or having their principles guided by something other than morality) than about people being overly concerned with optics. The latter is a tactical concern (albeit a big one) and hopefully fixable; the former is evidence that people in our movement are too conformist or otherwise too weak or too evil to confront moral catastrophes.
I don’t think they know they are concerned about optics. My suspicion was that the bad optics suddenly made utilitarian ideas seem false or reckless.
This strikes me as a bad play of “if there was even a chance”. Is there any cognitive procedure on Earth that passes the standard of “Nobody ever might have been using this cognitive procedure at the time they made $mistake?” That more than three human beings have ever used? I think when we’re casting this kind of shade we ought to be pretty darned sure, preferably in the form of prior documentation that we think was honest, about what thought process was going on at the time.
Why require surety, when we can reason statistically? There’ve been maybe ten comparably-sized frauds ever, so on expectation, hardline act utilitarians like Sam have been responsible for 5% of the worst frauds, while they represent maybe 1/50M of the world’s population (based on what I know of his views 5-10yrs ago). So we get a risk ratio of about a million to 1, more than enough to worry about.
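For concreteness, here is a minimal sketch of the arithmetic as I read it, taking this comment’s own assumed figures (a 5% expected share of the roughly ten comparably-sized frauds, and a 1-in-50-million population prevalence) at face value:

$$\text{relative risk} \approx \frac{\text{expected share of top frauds}}{\text{share of population}} = \frac{0.05}{1/(5\times 10^{7})} = 0.05 \times 5\times 10^{7} = 2.5\times 10^{6}$$

That is, on these assumptions a hardline act utilitarian would be roughly a million times likelier than a random person to be behind one of the largest frauds; the conclusion is only as strong as those guessed base rates.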
Anyway, perhaps it’s not worth arguing, since it might become clearer over time what his philosophical commitments were.
I guess it’s some new evidence that one person was maybe using act utilitarianism as a decision procedure and messed up? Also not theoretically impossible he was correct in his assessment of the possible outcomes, chose the higher EV option, and we just ended up in one of the bad outcome worlds.
I don’t understand this argument at all. I assume nobody thought it was literally impossible for the implementation of a moral theory (any moral theory!) to lead to bad consequences before. Maybe I’d understand your point more if you stated it quantitatively. Like:
“Previously, I thought it was x% likely that a random act utilitarian would be led by their philosophy to do worse stuff than if they’d endorsed most other moral theories. After seeing the case of SBF, I now think the probability is y% instead, because our sample size is small enough that a single data point can be a large update.”
Looks like Eliezer was similarly confused by your phrasing; your new argument (“almost no multibillion dollar frauds have ever happened, so we should do a very large update about the badness of everything that might have contributed to SBF defrauding people”) sounds very different, and makes more sense to me, though I suspect it won’t end up working.
I think you’re right—I could have avoided some confusion if I said it could lead to “multi-billion-dollar-level bad consequences”. Edited to clarify.
This seems to me like it’s overstating the strength of evidence, as though FTX is a disproof rather than one data point among many.
It is a disproof for extremely strong claims like “people who endorse act utilitarianism never do unethical things”, but those claims should have had extremely low probability pre-FTX.
How much of an update is this really, though? Am I wrong that it’s already the majority utilitarian view that act utilitarianism may be theoretically correct, but individual humans don’t have the foresight to know the full consequences of every act, and humans trying to work together need to be able to predict what others will do --> something like rule utilitarianism or observing constraints? Seems like the update should be about how much you can know about how things will turn out and whether you can get away with cutting corners.
It does seem like Sam had pathological beliefs re: the St. Petersburg paradox, but that seems like more than wanting to maximize EV too much—it’s not caring about the longterm future (where everyone’s inevitably dead after enough coin flips) enough. I really don’t see how that can be attributed to act utilitarianism either.
I agree that most utilitarians already thought act utilitarianism as a decision procedure was bad. Still, it’s important that more folks can see this, with higher confidence, so that this can be prevented from happening again.
I think I agree that the St Petersburg paradox issue is orthogonal to choice of decision procedure (unless placing the bet requires engaging in a norm-violating activity like fraud).
Risking the entire earth seems like a norm violation to me
Here are some jumping-off points for reflecting on how one might update their moral philosophy given what we know so far.
idk, when people explicitly endorse your ideology as why they endorse “high leverage and double-or-nothing flips” I think it’s at least worth taking a look at yourself. Now quite probably the person in question has misunderstood your ideology and doesn’t understand why EAs do in fact care about the risk of ruin and why stealing money isn’t ok, but then perhaps try to correct them?
Fwiw I think it very unlikely that the decision to use customer funds was a one-off decision made in 2022. My view is that FTX was set up from the start to use customer money as a source of cheap capital for Alameda. In 2018 Alameda was offering potential investors a 15% guaranteed return on loans. It seems fairly likely that at some point SBF figured “fuck this, why are we offering these dorks 15% when we can just set up our own exchange and access huge amounts of capital at 0%”. Never mind the fact that privileged information from the exchange may well have opened up more ways for Alameda to make money!
The plan, imo, was always to accrue as much wealth as possible as fast as possible with as few ethical constraints as possible. This worked for a while because Alameda’s trades were profitable and crypto was in a bull market. This plan may or may not have been EA-aligned, but if you have short enough AI/pandemic timelines (I don’t), it doesn’t seem obviously incompatible, and given the career backgrounds and interests of all the major people involved, yes, I think they were committed and sincere EAs who really believed this stuff. SBF’s own weird version of EA, at least, seems to have played a fairly large role in why they took on so much risk, as he himself explained in an overly long and boring twitter thread somewhere and Caroline also mentioned on her blog.
It also makes zero sense to compare FTX’s spending on stadiums vs the Future Fund as a sign of how much they cared about these respective things. The Future Fund would almost certainly have got way more money in subsequent years, while the stadium rights purchase was a form of advertising designed to help grow the business faster. I can’t imagine SBF is a big sports fan and was doing that sort of thing because he really enjoyed seeing the FTX logo on umpire shirts.
Not to Godwinpost, but this isn’t really “were Nietzsche and Wagner at fault for the Nazis”, it’s more “were Nietzsche and Wagner at fault for the Nazis if they’d actually lived throughout the 1930s and worked in prominent cultural education posts in the German state bureaucracy.”
I mean he is a big sports fan, at least baseball, at least when he was younger. I got linked to his blog from 10 years ago from something, and the number one and two sets of posts were about baseball statistics.
The role of the EA movement in the case of FTX surely seems to meet the level of influence behind some of the impact wins that EA has had so far.
Perhaps most prominently, the movement:
Gave the idea of ‘Earning to Give’ to Sam
Provided a primary motivation to Sam and other FTX leadership to build the exchange
For example, when comparing to the case of Sendwave, the influence seems at least comparable if not larger, e.g. it played a motivational role in founding a company for the purpose of improving the world. (I’m not familiar with Wave’s founders’ motivations, so I could be wrong here.)
In welfare terms alone, the impact of FTX’s collapse on its customers seems plausibly comparable to some of the impact wins of the movement to date, i.e. on the order of $1bn in lost funds. Given this, I think that an honest impact evaluation of the EA movement would include the harm caused to customers through FTX’s collapse.
This is relevant not for blame assignment, but because it’s very decision-relevant to EA’s mission of improving the world. For example, when in the future deciding how much to emphasise harm avoidance when encouraging the (good and novel) idea of Earning to Give.
Agreed. However:
Are you talking about welfare terms or financial terms? Because $1bn in lost savings of FTX customers seems very different in welfare terms to $1bn spent on bed-nets etc. I think there are strong reasons FTX shouldn’t have acted the way it did, but suggesting these two things are comparable in welfare terms because they are similar in financial terms seems like an error to me.
Yeah I agree, I just mean that $1bn in funds lost to customers across the world is plausibly comparable in welfare terms to other wins on that list. E.g. dividing by 10 to account for differences in income of those affected, it would be around the amount attributed to GiveDirectly on the EA impact page.
(without wanting to make a very direct crude comparison, or getting into the details of that)
Okay yes, they may well be.
I’m also pretty hesitant to attempt to make direct crude comparisons - and I’ll say again that I think there are strong reasons FTX shouldn’t have acted as it did in addition to the direct harm to customers—but I’ll just say that I seem to remember 100x or 1000x multipliers being more common than 10x in similar scenarios.
It is well-written, but I am not particularly convinced by the fantasy fiction analogy — it feels a lot more like “Here’s this very different situation, and you agree that the conclusions would be different. That would even be true if we modify it in several hard-to-imagine ways.”
In particular, I don’t see any reasonable analogies for:
EA’s “Earning to Give” career path, up to and including 80k featuring a profile on SBF as an exemplar.
The specific logic of “my marginal money is going to be donated” ⇒ “I should be closer to risk-neutral”, which I haven’t really seen rebutted on the facts (most instead argue that in reality, SBF/FTX/Alameda went too far and were risk-seeking).
That SBF ultimately contributed such a paltry amount of his apparent fortune is more impactful, but mainly as a reminder of how small and vulnerable EA actually is. It might very well be true that we didn’t mean that much to SBF, but he meant a lot to us.
I also think this was a not-so-good and somewhat misleading analogy—the association between Novik and Caroline in the example is strictly one-way (Caroline likes Novik, Novik has no idea who Caroline is), whereas the association between FTX and EA is clearly two-way (e.g. various EA orgs endorsing and promoting SBF, SBF choosing to earn-to-give after talking with 80k etc).
I don’t [currently] view EA as particularly integral to the FTX story either. Usually, blaming ideology isn’t particularly fruitful because people can contort just about anything to suit their own agendas. It’s nearly impossible to prove causation, we can only gesture at it.
However, I’m nitpicking here but—is spending money on naming rights truly evidence that SBF wasn’t operating under a nightmare utilitarian EA playbook? It’s probably evidence that he wasn’t particularly good at EA, although one could argue it was the toll to further increase earnings to eventually give. It’s clearly an ego play but other real businesses buy naming rights too, for business(ish) reasons, and some of those aren’t frauds… right?
I nitpick because I don’t find it hard to believe that an EA could also 1) be selfish, 2) convince themselves that ends justify the means and 3) combine 1&2 into an incendiary cocktail of confused egotism and lumpy, uneven righteousness that ends up hurting people. I’ve met EAs exactly like this, but fortunately they usually lack the charm, knowhow and/or resources required to make much of a dent.
In general, I’m not surprised with the community’s reaction. Best case scenario, it had no idea that the fraud was happening (and looks a bit naïve in hindsight) and its dirty laundry is nonetheless exposed (it’s not so squeaky clean after all). Even if EA was only a small piece in the machinery that resulted in such a [big visible] fraud, the community strives to do *important* work and it feels bad for potentially contributing to the opposite.
The point there isn’t so much, “He could not have had any EA thoughts in his head at all”, which I doubt is really true—though also there could’ve just been pressure from coworkers, and office politics around it, resolving in something like the Future Fund so that they were doing anything. My point is just that this nightmare is probably not one of a True Sincere Committed EA Act Utilitarian doing these things; that person would’ve tried to take more money off the table, earlier, for the Future Fund. Needing an e-sports site named after your company—that’s indeed something that other businesses do for business reasons; and if it feeds your business, that’s real, that’s urgent, that has to happen now. The philanthropy side was evidently not like that.
“My point is just that this nightmare is probably not one of a True Sincere Committed EA Act Utilitarian doing these things”—I agree that this is most likely true, but my point is that it’s difficult to suss out the “real” EAs using the criteria listed. Many billionaires believe that the best course of philanthropic action is to continue accruing/investing money before giving it away.
Anyways, my point is more academic than practical; the FTX fraud seems pretty straightforward, and I appreciate your take. I wonder if this forum would be having the same sorts of convos after Thanos snaps his fingers.
I still think that this incident should overall update most EAs in the direction of 1) ethical injunctions are important for humans and 2) more EAs should read the ethical injunctions section of the sequences. I agree that there is no system of ethics, or cultural movement, so awesome that it will stop its most loyal adherents from doing terrible things, but some do better than others. Nobody should feel guilty except for the people who committed the crime, but it would be great if EAs thought the right amount about how to lower the prob of events like this in the future, and that amount is not zero.
I’m also not sure how to square your advice about how I should relate to this incident with heroic responsibility.
Which ethical systems do you think have a better track record and why? Does virtue ethics, the preferred moral system of Catholics, have to take responsibility for pedophile priests? Does the rule-based ethics of deontology have to take responsibility for mass incarceration in the USA?
I can understand people claiming that this ethics implies that crazy conclusion, or assigning blame to an idea that seems clearly to have inspired a particular person to do a particular act. But I have no confidence that anybody on this earth has a clue about which ethical system is most or least disproportionately to blame for common-sense forms of good or bad behavior.
I think liberalism has a better track record than communism, for instance. No, but I do think Catholics should spend some time thinking about what’s up with Catholic priests molesting children, particularly if that Catholic has any control over what goes on in the church. In general I do not think blaming this or that ethical system or social movement makes much sense, but noticing that the adherents of some social movement or ethical system tend to do some particular kind of bad thing more often than others can be useful, particularly if you are a part of that social movement.
Ronny is talking about https://www.lesswrong.com/s/AmFb5xWbPWWQyQ244.
There are however a number of things we ARE at fault for here.
We as a community idolised SBF, including promoting him in many presentations and a relatively fawning interview by 80K which continued to promote the idea that SBF was living frugally (surely people knew by then that was bs). We could have chosen not to do this.
Will MacAskill made the introduction to Elon to try and get SBF to help buy Twitter. We still have no public information why, but this would have given SBF more power and would have directed, to that end, a lot of money that could have been used on doing good. Why?
The Carrick Flynn campaign: we as a community hugely supported this campaign, which was quite blatantly SBF and GBF trying to buy a seat for their interests. Sure, we as a community thought this was also our interests (and I still assume Carrick would have done a good job?) but once again this was a way the community encouraged and didn’t question SBF’s power.
Will MacAskill knew SBF for 9 years, seemingly relatively closely. It’s not Will’s fault SBF committed fraud, but it is partially Will’s fault SBF became such a face for the community within and outside of it. Maybe no ordinary person could have known SBF was a fraudster. But then, if we only expect from Will what we expect of “ordinary people”, why are we happy trusting him with so much power in the community? The only justification I can think of is that he is just so so so much better at decision making, having a reliably positive impact, and avoiding risks to the community and project of EA. It’s clear that Will isn’t this uniquely good. So why do we trust him (and others) with so much power in the community?
Yes, assuming that these were foreseeably bad calls. Seems good to separately ask “what responsibility do EAs bear for Sam’s bad decisions?” and “what did we otherwise do wrong, or right?”. E.g., if it were true that Sam would have made all the same missteps in the absence of EA, it could still be the case that we made Sam-related mistakes like “failing to propagate info about Sam’s past bad behavior”.
It would have given SBF a different kind of power. I’m skeptical of the claim that SBF would be more powerful if he’d poured his money into Twitter, since that implies that Twitter is a more useful, leveraged thing to spend money on than SBF’s other alternatives.
It seems more likely to me that either buying Twitter would reduce SBF’s power/influence (because Twitter isn’t very important), or that buying Twitter is a not-crazy sort of thing for EAs to try to do (because Twitter is very important).
Of course, SBF owning Twitter could have been bad insofar as SBF’s judgment and character were flawed. But then we’re just repeating the critique “EAs should have known that SBF was a bad guy”, not separately critiquing Will for thinking the Twitter buy was a good idea.
I think more of an argument needs to be given for “buying Twitter was a dumb idea” in order to include this on a list of “things EAs are at fault for”.
This seems totally wrong to me. First, because I knew Carrick pre-campaign, I think he’s awesome and would make an amazing elected official, and it doesn’t update me at all to know that Carrick (like a ton of excellent, well-intentioned EAs) got FTX funding.
And second, because AFAIK Carrick is an FHI guy who SBF later decided to support in his primary race (because he’s an EA and SBF wanted more EAs in politics), not someone with close ties to SBF. Quoting Carrick in a Vox interview:
Someone could make an awesome elected official (as I am sure he would have done) and still occupy a seat essentially bought for SBF’s interests… like that’s exactly how lobbying works! Also it’s clearly untrue that Carrick did not have close ties to SBF. AFAIK (and I may be wrong) he was pretty good mates with Gabe Bankman-Fried.
From an upcoming post I am drafting: I would point out that ’heroes put the entire group, many innocent people, ‘the city,’ planet Earth or even the whole damn universe or multiverse in grave danger to save any main character or other thing that We Cannot Bear To Lose, because That’s What Heroes Do’ is ubiquitous in our fantasy media. It might be a majority of DC comics plots. Villains invoke it because decision theory; they know it will work, and even without that it is rather mind-bogglingly awful. That kind of thinking needs to be widely condemned and fall in status at least via What The Hell Hero moments, and I worry it has more influence in these situations than we think.
First, a disclaimer that I’ve never got anywhere close to interacting with SBF personally; I’m very much an outsider to this situation. However, from everything I have read, I think it’s pretty ridiculous to suggest that EA wasn’t the main reason SBF tried so hard to maximize profit (poorly, I might add, but it seems like that was his goal) to the point of committing fraud. As far as I understand, EA was SBF’s primary guiding ideology; it is why he went down this career path of Jane Street and then starting his own companies. This post seems overly reliant on the fun fact that SBF paid more for e-sports naming rights than he gave in EA donations to show why actually Sam didn’t care about EA that much. But these are two completely separate things! E-sports naming rights are just a means of advertising, with the goal of making FTX more money, which would eventually allow SBF to donate more to EA. I think there’s also decent evidence that SBF was looking to ramp up donations in the future, as Effective Altruism continues to grow and is able to use more funding. Once you take out this fun fact about SBF’s current EA spending, I think this whole argument kind of falls apart.
Seems like a reasonable objection to me. (Though it’s still weird that SBF overpaid so much for that particular form of advertising; and it’s weird that SBF didn’t set aside money for FTX FF.)
I like this post much more than your previous post.
Is there a source for the $140M figure?
My guess is that this is the June figure for the FTX Future Fund grant commitments. The current figure is $160M as of September 1st. Some of these grants were in installments, especially the multi-year ones, and not all of the money was transferred. This Fund was “longtermist” and I do not see a dollar figure on other FTX charitable giving. This does not include $500M in equity in Anthropic.
Added, weeks later: Or maybe he got it from NYT:
which seems to be sourced from NYT a month ago:
I suspect that these numbers reflect money actually delivered, not promises. My guess is that the Future Fund pledged $190 million, 160 directly and 30 through regranting, delivered 100, and failed to deliver $90 million (a). (Plus $50 million not through the Future Fund, at least some of which counts as EA.)
Thanks, there is also $32M from the regrants tab. But yes, difficult to know the actual total of payouts without word from the staff. Or payouts not subject to clawback without further details on legal proceedings.
Kind of ironic that they were “longtermist” about the world, but not about their own existence!
When comparing the size of SBF/FTX outlay on EA vs. stuff like naming rights, I think it is important to compare apples to apples. From the victims’ perspective, the key question is “how much money went out the door” as opposed to “how much did SBF/FTX plan or commit to spend in the future?” Although I don’t know how the naming rights deals were set up, I suspect that much of the money was to be paid in the future. That means the stadiums, teams, etc. are now general unsecured creditors on any claims. I am hearing that depositor claims may be valued on the distressed-debt market at 3-5 cents on the dollar, so the claims of naming-rights counterparties are likely worth even less.
Fair point.
The question that heads this post obviously answers itself, in that only actual perpetrators of bad deeds and their direct instigators (intellectual or otherwise) are to be held accountable for them; nevertheless, I must admit that I found Eliezer Yudkowsky’s analogy unconvincing, and (not quite, but feeling a little bit) disingenuous. Whenever we see adherents of some creed, ideology, or religious or thought system going to nefarious places, it is natural to wonder whether said ideas (whether properly or mistakenly interpreted) influenced or condoned the path they took. Some articles I have read lately have pointed the finger at the hubristic hazards of miscalculating for optimal results, and the concomitant dangers of risky betting and of cutting corners. As is well known, the road to hell is paved with good intentions. And besides, as has been stated, a lot of the people involved in this weren’t just ‘fellow travelers’ or occasional readers of EA material. A lot of them were very visibly engaged in the movement and seen as poster children for it. And I am sure most of them were innocent victims, especially the rank-and-file workers of FTX and Alameda.
Having said that, I do not find it reasonable either to go to masochistic extremes of self-flagellation. Humans being what they are, there will always be cases of wolves in sheep’s clothing, and never enough controls to catch them all in advance. Which is humbling, in a not necessarily bad way. My impression is that the EA community and its members are a wonderful group of people, and they will probably come out of this situation wiser, if sadder. And, obviously, that it is wrong to blame EA for what has happened.
As Eliezer Yudkowsky mentions Caroline Ellison’s blog, I would like to say that I have been reading it of late, and even taking into account the potential deceitfulness of words and the pictures we build with them, I do not get from either its contents or her general trajectory the sense that she could be a morally bankrupt person. On the contrary, the impression I got is of a true believer, and a good person. This does not preclude the possibility that, out of a certain naiveté and inexperience in a field as murky as crypto, she might have let herself go along with what she perceived as temporary and ‘bad’ expedient means. But to believe this person ever intended to purposely and maliciously scam people out of their money, or to be knowingly privy to a fraud, is, for me, completely out of the question. I believe the best option is to be charitable and wait to see what the courts of law have to say once the dust has settled. As for SBF, after reading some of the things he has said and done, that’s a completely different story.
In case anyone wants a reference for the $210 million that FTX committed to spend on esports naming rights for TSM, a Washington Post article from today is here
I’m very new here, having just signed up today, so I’m unfamiliar with all the formats at this point, but I often seek to explain how biological factors can play a role in morality and our decision making, because it can be useful to understand our brain’s limitations among all the other factors.
Stress, isolation, and positions of power have consequences for the brain. The less cooperatively one has to function within one’s society, the more damage accrues in areas of the brain responsible for decision making, the anterior cingulate cortex being the key area. Over time this breaks down a person’s ability to manage their own emotions, and impulse control becomes a problem. I haven’t followed SBF’s career closely, but there were perhaps signs of his brain struggling. Addiction is a typical symptom that this is occurring.
We are but human, all of us, and we can’t supersede how our brains evolved to operate. It’s a very tricky position to hold so much responsibility and power; that consolidation of power can become harmful, as it did with FTX. This is by no means an excuse for SBF; it’s just a potential problem to consider when engaging in effective altruism through large amounts of wealth. Managing one’s own ego is maybe a lot harder than some anticipated, perhaps even the most difficult task they’ll ever undertake, because of the design of the brain. Lots and lots of emotional self-care would help, but whether it’s possible to beat our own brains is a tricky question of free will, since we make decisions before we become aware that we have. Self-compassion is a very important piece, maybe one SBF did not have.
I agree that EA likely wasn’t a major causal factor in FTX/SBF’s likely fraud. Unfortunately, it’s a situation where even if it’s not our fault, it is our problem. People are trashing EA across the internet because of Sam’s position in the movement. His Twitter profile pic still has him wearing an EA shirt, for Christ’s sake!
So are people who never attacked EA before suddenly doing so? That isn’t what I’ve seen. I’ve seen lots of bad-faith takes about how this is proof of what they always thought, and news reporting which is about as accurate as you’d expect—that is, barely correct on the knowable facts, and misleading or confused about anything more complicated than that.
EA is a brand, and people on the outside don’t have much information about it, so a negative association matters on the margin for recruiting. The main post makes a fair point about not going overboard with self blame, but it seems good for EA folks to be publicly concerned about how they could have acted better, or to publicly discuss the lessons they’re taking. At the very least, I don’t think it’s worth much effort to stop people from doing so.
Even so, I’m still recommending that people read Terry Pratchett instead of Novik. Something something low probability, large impact.
But seriously, I think the problem is less how SBF self-identified with EA, and more the way EAs saw him as The Hero.
Anyway, maybe EAs do have a problem of egocentrism.
C’mon, if she were a true maximizer using depositors’ money, I guess she’d just download it from z-library.
OMG, is this why z-lib was recently seized by FBI?
I chatted with an Alameda python dev for about an hour. I tried to get a sense of their testing culture, QA practices, etc. Lmao: there didn’t seem to be any. Soups of scripts, no time for tests, no internal audits. Just my impression.
My type-driven and property-based-testing zealot/pedant side has harvested some Bayes points, unfortunately.
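For readers who haven’t run into property-based testing: instead of hand-picking test cases, you state an invariant and let the framework generate random inputs hunting for a counterexample. Here is a minimal sketch using Python’s hypothesis library; the Ledger class and its invariant are hypothetical, made up purely to illustrate the kind of check meant above, and have nothing to do with Alameda’s actual code.

```python
# Minimal property-based-testing sketch with the `hypothesis` library.
# The Ledger class is a toy example invented for illustration only.

from hypothesis import given, strategies as st


class Ledger:
    """Toy ledger tracking a single customer's balance."""

    def __init__(self):
        self.balance = 0

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def withdraw(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("withdrawal must be positive")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


# Property: no randomly generated sequence of deposits and withdrawals
# may ever drive the balance negative.
@given(st.lists(st.tuples(st.sampled_from(["deposit", "withdraw"]),
                          st.integers(min_value=1, max_value=10_000))))
def test_balance_never_negative(ops):
    ledger = Ledger()
    for op, amount in ops:
        try:
            getattr(ledger, op)(amount)
        except ValueError:
            pass  # rejected operations are fine; silent corruption is not
    assert ledger.balance >= 0
```

Run under pytest, hypothesis generates hundreds of operation sequences and shrinks any failure to a minimal counterexample. The point is that invariants like “customer balances never go negative” are cheap to encode once a team has the habit, which is what makes their apparent absence notable.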
We do not know her exact state of mind when FTX (mis)used customer deposits. But, for all their worthiness, I wouldn’t have predicted ahead of time that EY’s writings carried enough weight to be a decisive cause of that state of mind. I think the simple answer, per the current corpus of evidence, is just ‘FTX.’
I was not being serious there. It was meant to show—see, I could blame myself too, if I wanted to be silly; now don’t be that silly.
I think you probably need to label your account “EliezerYudkowsky (parody)” because otherwise a few people might not realize you’re occasionally being sarcastic, and then you might get banned from Twitter.
Caroline Ellison is the disgraced and probably criminally responsible CEO of Alameda, involved in FTX’s downfall.
Despite Yudkowsky’s citing of Peter Singer, almost none of SBF’s FTX FF money went to Peter Singer’s causes of global poverty and animal welfare. No one in these causes was invited to the Bahamas with the other attendees. Yudkowsky was hosted by SBF in the Bahamas and is a regrantor of FTX FF.
There are many reasons why SBF would not want to meet with him, many of the same reasons SBF might not want to meet with me or most readers.
As many readers of the forum know, Caroline Ellison’s blog is bizarre-seeming, intimate, and sometimes salacious, which is why its content had not been cited widely on the EA forum until now, when Yudkowsky used one element of it in this top-level post.
I think a reasonable person would say that many of Ellison’s interests that are orthogonal to mainstream beliefs are highly associated with certain parts of EA that are local to the Bay Area, near Stanford (Ellison’s alma mater) and the rationalist community.
One example is Ellison’s interest in “HBD”. This is highly associated with “rationalist” culture and does not appear in EA in many other US cities or other countries. The reason I will not elaborate further is that it is speculative, inflammatory, and hostile to try to single out the SSC/LW/rationalist community, which has been done explicitly here with Yudkowsky, with “Peter Singer” and “GiveWell-style EA”.
I was passing through the Bahamas and asked if FTX wanted me to talk to the EAs they had on fellowships there. They paid for my hotel room and an Airbnb when the hotel got full, for a week. I’m not sure but I don’t think I remember getting to see SBF at all while I was at the hotel. Didn’t go swimming or sunning or any such because I am not a very outdoors person. It does not seem entirely accurate to characterize this as “was hosted by SBF in the Bahamas”.
The Future Fund basically turned down all my ideas until the regrantor program started; I made two recommendations and I expect neither of them will pay out now unless they moved very fast.
Unless I specifically defend an idea, I think that a lot of what gets said in the San Francisco Bay Area is also not something I’d accept as my fault. Eg there was a lot of drug use involved in this going wrong, which I’m sure did not start from me, and I’ve suggested increasingly loudly and openly of late that people cut back on the drug use; maybe it’s Bay-associated idk, but it sure is not Yudkowsky-endorsed.
I did think Will MacAskill was from the Singer side of things, so I admit to being surprised if the highly-legible side of effective altruism got nothing, unless it was a room-for-more-funding issue with Givewell+OpenPhil having already snapped up all the fruit hanging lower than GiveDirectly. I will consider myself tentatively corrected on that point unless I hear otherwise or have investigated.
Yudkowsky wrote this above.
It would be wild to see anyone defend or explain the terms “Singer side” or “twisting people’s brains” in this context, much less the intentional act implied.
This is a flat-out attack that uses ideas and sentiment from actual criticisms of MIRI/LW, which I do not cite because they are inflammatory. It is likely intended to preempt anticipated future criticism along these lines.
I am writing here because the EA community should know that sentiment in global health and poverty, and in animal welfare, is extremely low right now, especially among the limited pool of talent.
As EAs know, the FTX money favored longtermist causes. In the aftermath of the FTX collapse, EA is globally harmed, further disadvantaging these causes, which were already in the shadow of this money.
The departure of this talent could be a wholesale disaster for EA, and leave it in a permanently weakened state. This is not being discussed, just as dangers such as the risk from FTX were not discussed, due to the dynamics of EA discourse, which is easily dominated by full-time influencers like Yudkowsky.
In this vulnerable state, undue attempts to associate Peter Singer and “EA” with the wrongdoing, and undue attempts to dissociate “LW” and “rationality” from it, are an incredibly uncooperative defection.
Another example of this uncooperative behavior is the LW treatment of Gopalakrishnan’s post, which, while it received a mixed reception, claims to point out serious misconduct.
On balance, negative claims of this sort of behavior are highly associated with SF and the EA/LW communities there.
The author implicates LW as well as EA; for example, she says:
This is the response by a member of the LW team, which reads more like an attempt to dissociate this conduct from LW and place it squarely with EA, rather than stating that the post seems tendentious or unproductive.
The above comment is not intellectually honest.
For onlookers: this thread contains three examples of this dishonest behavior, ranging from optics management to shameless attacks (“brain twisting” by Peter Singer people ???).
This is an intentional, premeditated strategy by Eliezer and LW/MIRI staff, where:
Potential criticisms involving FTX personnel’s behavior are attached to all of EA, instead of to what a reasonable person reading Ellison’s blog would find more associated with parts of EA local to the SF area, shielding LW/MIRI from this criticism.
Anticipated future criticism using these arguments is preempted.
In a time of “evaporative cooling,” Eliezer may try to shift EA’s composition toward his own faction.
This is despite the fact that there are structural reasons this has limited gains (there’s already a LW!); it is less than zero-sum.
Note the direct attempts to reference Will MacAskill and associate him with this behavior.
LW/MIRI is doing this unilaterally; no other side in EA has criticized LW. This is because there are no other paid influencers and morale is low. This is shameless.
This is coordinated and happening on the EA forum.