A personal statement on FTX
This is a repost from a Twitter thread I made last night. It reads a little oddly when presented as a Forum post, but I wanted to have the content shared here for those not on Twitter.
This is a thread of my thoughts and feelings about the actions that led to FTX’s bankruptcy, and the enormous harm that was caused as a result, involving the likely loss of many thousands of innocent people’s savings.
Based on publicly available information, it seems to me more likely than not that senior leadership at FTX used customer deposits to bail out Alameda, despite terms of service prohibiting this, and a (later deleted) tweet from Sam claiming customer deposits are never invested.
Some places making the case for this view include this article from the Wall Street Journal, this tweet from jonwu.eth, and this article from Bloomberg (and follow-on articles).
I am not certain that this is what happened. I haven’t been in contact with anyone at FTX (other than those at Future Fund), except a short email to resign from my unpaid advisor role at Future Fund. If new information vindicates FTX, I will change my view and offer an apology.
But if there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.
I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.
If this is what happened, then I cannot in words convey how strongly I condemn what they did. I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of.
For years, the EA community has emphasised the importance of integrity, honesty, and respect for common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations.
A clear-thinking EA should strongly oppose “ends justify the means” reasoning. I hope to write more soon about this. In the meantime, here are some links to writings produced over the years.
These are some relevant sections from What We Owe The Future:
Here is Toby Ord in The Precipice:
Here is Holden Karnofsky: https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous
Here are the Centre for Effective Altruism’s Guiding Principles: https://forum.effectivealtruism.org/posts/Zxuksovf23qWgs37J/introducing-cea-s-guiding-principles
If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.
As a community, too, we will need to reflect on what has happened, and on how we could reduce the chance of anything like this happening again. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that.
But that in no way justifies fraud. If you think that you’re the exception, you’re duping yourself.
We must make clear that we do not see ourselves as above common-sense ethical norms, and we must engage with criticism humbly.
I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think these worries, though worth taking seriously, seemed speculative and unlikely.
I was probably wrong. I will be reflecting on this in the days and months to come, and thinking through what should change.
It’s fair enough to feel betrayed in this situation, and to speak out about it.
But given your position in the EA community, I think it’s much more important to put effort towards giving context on your role in this saga.
Some jumping-off points:
Did you consider yourself to be in a mentor / mentee relationship with SBF prior to the founding of FTX? What was the depth and cadence of that relationship?
e.g. from this Sequoia profile (archived as they recently pulled it from their site):
“The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill.
… And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth. SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.””
What diligence did you / your team do on FTX before agreeing to join the Future Fund as an advisor?
[Edited to add: Were you aware of the 2018 dispute at Alameda re: SBF’s leadership? If so, how did this context factor into your decision to join the Future Fund?]
Did you have visibility into where money earmarked for Future Fund grants was being held?
Did you understand the mechanism by which FTX claimed to be generating revenue? Were the revenues they reported sanity-checked against a back-of-the-envelope estimate of how much their claimed mechanism could plausibly generate? (A sketch of what I mean follows this list of questions.)
What were your responsibilities at the Future Fund? How often were you in contact with SBF and other members of FTX leadership in your role as an advisor?
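For concreteness, here is the kind of back-of-the-envelope sanity check I have in mind. This is only a rough sketch: every input below is an assumption I made up for illustration, not a figure from FTX.

```python
# Rough sanity check: implied annual fee revenue for a crypto exchange.
# All inputs are illustrative assumptions, NOT FTX's actual numbers.

daily_volume_usd = 10e9   # assumed average daily trading volume
avg_fee_rate = 0.0002     # assumed blended maker/taker fee (~2 bps)
trading_days = 365        # crypto markets trade every day

annual_fee_revenue = daily_volume_usd * avg_fee_rate * trading_days
print(f"Implied annual fee revenue: ${annual_fee_revenue / 1e9:.2f}B")
# -> roughly $0.73B/year under these assumptions
```

If reported revenues sat far above what any plausible volume-and-fee combination supports, that gap would have been worth probing.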
[Edit after months: While I still believe these are valid questions, I now think I was too hostile, overconfident, and not genuinely curious enough.] One additional thing I’d be curious about:
You played the role of a messenger between SBF and Elon Musk in a bid for SBF to invest up to $15 billion of (presumably mostly his own) wealth in an acquisition of Twitter. The stated reason for that bid was to make Twitter better for the world. This has worried me a lot over the last few weeks. It could easily have been the most consequential thing EAs have ever done, and there has—to my knowledge—never been a thorough EA debate that signalled this would be a good idea.
What was the reasoning behind the decision to support SBF by connecting him to Musk? How many people from FTXFF or EA at large were consulted to figure out if that was a good idea? Do you think that it still made sense at the point you helped with the potential acquisition to regard most of the wealth of SBF as EA resources? If not, why did you not inform the EA community?
Source for claim about playing a messenger: https://twitter.com/tier10k/status/1575603591431102464?s=20&t=lYY65-TpZuifcbQ2j2EQ5w
I don’t think EAs should necessarily require a community-wide debate before making major decisions, including investment decisions; sometimes decisions should be made fast, and often decisions don’t benefit a ton from “the whole community weighs in” over “twenty smart advisors weighed in”.
But regardless, it seems interesting and useful for EAs to debate this topic so we can form more models of this part of the strategy space—maybe we should be doing more to positively affect the world’s public fora. And I’d personally love to know more about Will’s reasoning re Twitter.
I don’t think this is true? Especially for decisions in the billions of dollars? Why do you think 20 smart advisors can spot all the problems that thousands of community members will, or even the major ones?
See Holden Karnofsky’s Some Thoughts on Public Discourse:
I think it’s important to note that many experts, traders, and investors did not see this coming, or they could have saved/made billions.
It seems very unfair to ask fund recipients to significantly outperform the market and most experts, while having access to way less information.
See this Twitter thread from Yudkowsky
Edit: I meant to refer to fund advisors, not (just) fund recipients
Lorenzo, I agree that expert traders and investors have more technical investment skills. But it seems to me that MacAskill and the FTX Future Fund board had more direct information about SBF’s personality, the personal connections among the leaders, and the group dynamics. So when it comes to your statement “having access to way less information”, I don’t think this is the case.
I’m not as sure about advisors, as I wrote here. Agree on recipients
I’ll go with you part of the way, but I also think that experts, traders, and even investors were further from SBF than at least some of the people in the equation here, which seems more and more true the more accounts I hear about people from Alameda saying they warned top brass. I wouldn’t expect an investor to have that kind of insight.
I think it’s good practice to try to understand a project’s business model and try to independently verify the implications of that model before joining the project.
My understanding is that FTX’s business model fairly straightforwardly made sense? It was an exchange, and there are many exchanges in the world that are successful and probably not fraudulent businesses (even in crypto—Binance, Coinbase, etc). As far as I can tell, the fraud was due to supporting specific failures of Alameda due to bad decisions, but wasn’t inherent to FTX making any money at all?
I’m gonna wait it out on this one.
I’d currently wildly guess that Coinbase is not a fraud.
I agree that it is less likely than Binance, based on the fact that public stock market companies are required to be more transparent[1], I do not know much about these particular companies.
Of course, Enron, Wirecard, and others show that being listed on the stock market is no guarantee.
Yeah, fair. I have no real knowledge of crypto, so am not particularly endorsing Binance
This seems to be “not even wrong”—FTX’s business model isn’t and never was in question. The issue is Sam committing fraud and misappropriating customer funds, and there being a total lack of internal controls at FTX that made this possible.
If you say that your business model is to hold depositor funds 1:1 and earn money from fees, but in fact you sometimes earn money via making trades with depositor funds, then you would be misrepresenting your business model.
Sure, and what is your point?
My current best guess is that WM quite reasonably understood FTX to be a crypto exchange with a legitimate business model earning money from fees—just like the rest of the world also thought. The fact that FTX was making trades with depositor funds was very likely to be a closely kept secret that no one at FTX was likely to disclose to an outsider. Why the hell would they—it’s pretty shady business!
Are you saying WM should have demanded to see proof that FTX’s money was being earned legitimately, even if he didn’t have any reason to believe it might not be? This seems to me like hindsight bias. To give an analogy—have you ever asked an employer of yours for proof that their activities aren’t fraudulent?
Not disagreeing with your overall point, but if my non-EA-aligned, low-level crypto trader friend is any indication, there certainly was reason to believe that SBF was at the very least doing some shady things. In August, I asked this friend for his thoughts on SBF, and this is what he replied:
“He’s obviously super smart but comes across a bit evil while trying to portray the good guy front. His exchange is notorious for liquidating user positions, listing shit coins thats prices trend to zero. He also founded Alameda research (trading / market maker firm) alongside FTX (the exchange). Alameda are one of the biggest crypto trading firms with predatory reputation. There’s also the issue of barely any divide between the exchange and the trading firm so alameda likely sees a lot of exchange data that gives them an edge trading on FTX vs other users.”
The irony is that this friend lost most of his savings because he was an FTX user.
Also from the Sequoia profile: “After SBF quit Jane Street, he moved back home to the Bay Area, where Will MacAskill had offered him a job as director of business development at the Centre for Effective Altruism.” It was precisely at this time that SBF launched Alameda Research, with Tara Mac Aulay (then the president of CEA) as a co-founder ( https://www.bloomberg.com/news/articles/2022-07-14/celsius-bankruptcy-filing-shows-long-reach-of-sam-bankman-fried).
To what extent was Will or any other CEA figure involved with launching Alameda and/or advising it?
Indeed, even by 2019, anyone who took a cursory look at Alameda’s materials would have known that they were engaged in Ponzi schemes or something equally fraudulent.
See https://twitter.com/DylanLeClair_/status/1591521505464455168
As you can see in that Twitter thread, Alameda was promising a guaranteed 15% rate of return on investments, with “no downside.” This is impossible. Only the likes of Bernie Madoff would pretend to have a risk-free return at that level. And the materials look so amateurish (with a number of grammatical errors) that the person posting on Twitter originally thought it was “so egregious my first thought was that it was fake.”
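As a rough illustration of why “guaranteed 15%, no downside” is a red flag on its face (all numbers here are mine, purely for illustration):

```python
# Why a *guaranteed* 15% annual return with "no downside" is implausible:
# compare compounding at the promised rate vs. a ~2% risk-free rate.

principal = 1_000_000  # hypothetical $1M loan to Alameda

for years in (5, 10, 20):
    promised = principal * 1.15 ** years   # Alameda's promised rate
    risk_free = principal * 1.02 ** years  # approximate risk-free rate
    print(f"{years:>2}y: promised ${promised:,.0f} vs. risk-free ${risk_free:,.0f}")
# 15% compounded doubles your money roughly every 5 years (rule of 72);
# nobody can credibly guarantee that with zero risk.
```

Anyone promising to beat the risk-free rate by ~13 points, every year, with legally guaranteed zero downside, is either misrepresenting the risk or running a fraud.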
This was fairly close to the time of Alameda’s founding. As of 2019, how much were CEA folks (including Will and Tara) involved with Alameda’s obvious fraud?
Tara left CEA to co-found Alameda with Sam. As is discussed elsewhere, she and many others split ways with Sam in early 2018. I’ll leave it to them to share more if/when they want to, but I think it’s fair to say they left at least in part due to concerns about Sam’s business ethics. She’s had nothing to do with Sam since early 2018. It would be deeply ironic if, given what actually happened, Sam’s actions are used to tarnish Tara.
[Disclosure: Tara is my wife]
Strong agree, but in that case, it seems very unlikely that Will was unaware of these serious “concerns about Sam’s business ethics” back in 2018, and it seems all the more incumbent on him to offer an explanation as to why he kept such a close affiliation with SBF thereafter.
The returns shown in the document are not indicative of fraud—those sorts of returns are very possible when skilled traders deploy short-term trading strategies in inefficient markets, which crypto markets surely were at the time. The default risk when borrowing at 15% might have been very low, but not zero as they suggested. The “no downside” characterization should have been caught by a lawyer, and was misleading.
I would not have [EDIT; originally: “Nobody with an understanding of trading would have”] concluded they were engaged in Ponzi schemes or were misrepresenting their returns based on the document. There are plenty of sloppy, overoptimistic startup pitch decks out there, but most of the authors of those decks are not future Theranoses.
Good points, Brian . . . I’m sure there are lots of overoptimistic pitch decks, and that a 15% return might be feasible, and maybe I’m just looking at this with the benefit of hindsight.
Even so, an investment firm normally doesn’t do anything like this, right? I mean, I assume that even Renaissance Technologies wouldn’t want to offer one single investment opportunity packaged as a loan with a legally guaranteed 15% rate of return with “no downside.” https://www.bloomberg.com/news/articles/2021-02-10/simons-makes-billions-while-renaissance-investors-fume-at-losses#xj4y7vzkg They might brag about their past returns, but would include lots of verbiage about the risks, and about how past performance is no guarantee of future returns, etc.
Thanks for your reply. I do think it would be unusual to see such promises, particularly from a firm looking for large investments. And I would expect to see a bunch of disclaimers, as you suggest. There might have been such language in the actual investment documents, but still. The excerpt shared on Twitter would have set off red flags for me because it seems sloppy and unprofessional, and it would have made me particularly concerned about their risk management, but I wouldn’t have concluded it was a Ponzi scheme or that there was something fraudulent going on with the reported returns.
It will be interesting to see if all of the FTX/Alameda fraud (if there was fraud, which seems very likely) took place after the most recent investment round. Investors may have failed not in financial diligence but in ensuring appropriate governance and controls (and, apparently, in assessing the character of FTX’s leadership).
Archived version (that gets around the paywall)
One specific question I would want to raise is whether EA leaders involved with FTX were aware of or raised concerns about non-disclosed conflicts of interest between Alameda Research and FTX.
For example, I strongly suspect that EAs tied to FTX knew that SBF and Caroline (CEO of Alameda Research) were romantically involved (I strongly suspect this because I have personally heard Caroline talk about her romantic involvement with SBF in private conversations with several FTX fellows). Given the pre-existing concerns about the conflicts of interest between Alameda Research and FTX (see examples such as these), if this relationship were known to be hidden from investors and other stakeholders, should this not have raised red flags?
I believe that, even in the face of this particular disaster, who EAs are fucking is none of EA’s business. There are very limited exceptions to this rule like “maybe don’t fuck your direct report” or “if you’re recommending somebody for a grant, whom you have fucked, you ought to disclose this fact to the grantor” or “Notice when somebody in a position of power seems to be leaving behind a long trail of unhappy people they’ve fucked”, plus of course everything that shades over into harassment, assault, and exploitation—none of which are being suggested here.
Outside of that, there’s a heck of a lot of people in this world fucking a heck of a lot of other people; most people who are fucking don’t blow up depository institutions; and controls and diligence on depository institutions should achieve reliability by some avenue other than checking which people are fucking. And I consider it generally harmful for a community to think that it has a right to pass judgment on fucking that is not like really clearly violating deontology. That’s not something that community members owe to a community.
As a Bloomberg article put it in September: https://www.pymnts.com/cryptocurrency/2022/bankman-frieds-stake-in-quant-trading-firm-raises-conflict-questions/
Are you trying to suggest that when two firms need to be at arms-length because of the potential for an enormous conflict of interest, it wouldn’t matter if the two firms’ chief executives were dating each other?
I’m saying that if your clearance process is unable to tell whether or not two firms are arms-length, when they have a great deal to potentially gain from illegally interoperating, without the further piece of info about whether the CEOs are dating, you’re screwed. This is like trying to fix the liar loan problem during the mortgage meltdown by asking whether the loan issuer is dating the loan recipient. The problem is not that, besides the profit motive, two people might also be fond of each other and that’s terrible; the problem is if your screening process isn’t enough to counterbalance the profit motive. A screening process that can make sure two firms aren’t colluding to illegally profit should not then break down if the CEOs go on a date.
Or to put it more compactly and specifically: Given the potential energy between Alameda and FTX as firms, not to mention their other visible degrees of prior entanglement, you’d have to be nuts to rely on an assurance process that made a big deal about whether or not the CEOs were dating.
Maybe even more compactly: Any time two firms could gain a lot of financial free energy by colluding, just pretend you’ve been told their CEOs are dating, okay, and now ask what assurances or tests you want to run past that point.
...I think there must be some basic element of my security mindset that isn’t being shared with voters here (if they’re not just a voting ring, a possibility that somebody else raised in comments), and I’m at somewhat of a loss for what it could be exactly. We’re definitely not operating in the same frame here; the things I’m saying here sure feel like obvious good practices from inside my frame.
Taking prurient interest in other people’s sex lives, trying to regulate them as you deem moral, is a classic easy-mode-to-fall-into of pontificating within your tribe, but it seems like an absurd pillar on which to rest the verification that two finance companies are not intermingling their interests. Being like “Oh gosh SBF and Caroline were dating, how improper” seems like finding this one distracting thing to jump on… which would super not be a key element of any correctly designed corporate assurance process about anything? You’d go audit their books and ask for proofs about crypto cold storage, not demand that somebody’s romance be a dark secret that nobody got to hear about?
We sure are working in different frames here, and I don’t understand the voters’ (if they’re not just a voting ring).
I work (indirectly) in financial risk management. Paying special attention to special categories of risk—like romantic relationships—is very fundamental to risk management. It is not that institutions are faced with a binary choice of ‘manage risk’ or ‘don’t manage risk’, where people in romantic relationships are ‘managed’ and everyone else is ‘not’. Risk management is a spectrum, and there are good reasons to think that people with both romantic and financial entanglements are higher risk than those with financial entanglements only. For example:
Romantic relationships inspire particularly strong feelings, not usually characterising financial relationships. People in romantic relationships will take risks on each other’s behalf that people in financial relationships will not. We should be equally worried about familial relationships, which also inspire very strong feelings.
Romantic relationships inspire different feelings from financial relationships. Whereas with a business partner you might be tempted to act badly to make money, with a romantic partner you might be tempted to act badly for many other reasons—for example, to make your partner feel good, or to spare them embarrassment.
Romantic relationships imply a different level of access than financial relationships. People in romantic relationships have levers to make their partner do things they might not want to do—for example, in abusive relationships, threatening to end the relationship unless X is done, or watching the partner enter their password into their computer to gain access to systems.
So if I were writing these rules, I might very well rephrase it as “do you have a very strong friendship with this other person” and “do you occasionally spend time at each other’s houses” to avoid both allonormativity and the temptation to prurient sniffing; and I’d work hard to keep any disclosed information of that form private, like “don’t store in Internet-connected devices or preferably on computers at all” private, to minimize incentives against honest disclosure. And even then, I might expect that among the consequences of the regulation, would be that CEOs in relationships would occasionally just lie to me about it, now that such incentives had been established against disclosure.
When you optimize against visible correlates of possible malfeasance, you optimize first and above all against visibility; and maybe secondarily against possible malfeasance if the visibility is very reliable and the correlations are strong enough to take causal leaning on them.
But, sure, if you know all that and you understand the consequences, then Sequoia could’ve asked if SBF and Caroline were in a relationship, understanding that a No answer might be a lie given the incentives they’d established, and that a Yes answer indicated unusual honesty.
I don’t really understand why you are describing this as a hypothetical (“If I were writing these rules...”). You are the founder and head of a highly visible EA organisation receiving charitable money from donors, and presumably have some set of policies in place to prevent staff at that organisation from systematically defrauding those donors behind your back. You have written those policies (or delegated someone else to write them for you). You are sufficiently visible in the EA space that your views on financial probity materially affect the state of EA discourse. What you are telling me is that the policies which you wrote don’t include a ‘no undeclared sexual relationships with people who are supposed to act as a check on you defrauding MIRI’ rule, based on your view that it is excessively paternalistic to inquire about people’s sex lives when assessing risk, and that your view is that this is the position that should be adopted in EA spaces generally.
This is—to put it mildly—not the view of the vast majority of organisations which handle money at any significant scale. No sane risk management approach would equate a romantic relationship with ‘a very strong friendship’. Romantic love is qualitatively different to fraternal love. No sane risk management approach would equate “occasionally spend[ing] time at each other’s house” to living together. My wife is often alone in the house for extended periods of time, but I usually hang out with friends when they come over (to give just one difference from an enormous list of possibilities).
EA leadership—which includes you—has clearly made a catastrophic error of financial risk management with this situation. The extent to which they are ‘responsible’ is a fair debate, but it is unquestionable they failed to protect people who trusted them to steer EA into the future—hundreds of people have been made unemployed overnight and EA is potentially facing its first existential risk as a result. I am genuinely baffled how you can look at this situation and conclude that the approach you are describing—a very intelligent non-expert such as yourself creates their own standards of financial conduct at significant odds with the mainstream accepted approach—could still possibly be appropriate in the face of the magnitude of the error this thinking has led to.
I also think it is extremely unedifying that you make the case elsewhere that the disagreement votes you are receiving for your position are from vote manipulation. A more plausible explanation is that people have independently reached the conclusion that you are wrong that romantic love presents no special financial risks.
Somebody else in that thread was preemptively yelling “vote manipulation!” and “voting ring!”, and as much as it sounds recursively strange, this plus some voting patterns (early upvotes, then suddenly huge amounts of sudden downvoting) did lead me to suspect that the poster in question was running a bunch of fake accounts and voting with them.
We would in fact be concerned if it turned out that two people who were supposed to have independent eyes on the books were in a relationship and didn’t tell us! And we’d try to predictably conduct ourselves in such a mature, adult, understanding, and non-pearl-clutching fashion that it would be completely safe for those two people to tell the MIRI Board, “Hey, we’ve fallen in love, you need to take auditing responsibility off one of us and move it to somebody else” and have us respond to that in a completely routine, nonthreatening, and unexcited way that created no financial or reputational penalties for us being told about it.
That’s what I think is the healthy, beneficial, and actually useful for minimizing actual fraud in real life culture, of which I do think present EA has some, and which I think is being threatened by performative indignation.
I’m struggling to follow your argument here. What you describe as the situation at MIRI is basically standard risk management approach—if two people create a risk to MIRI’s financial security processes by falling in love, you make sure that neither signs off on risk taken by the other.
But in this thread you are responding with strong disagreement to a comment which says “if this relationship [between SBF and Caroline] were known to be hidden from investors and other stakeholders, should this not have raised red flags?”. You said “who EAs are fucking is none of EA’s business”, amongst other comments of a similar tone.
I don’t understand what exactly you disagree with if you agree SBF and Caroline should have disclosed their relationship so that proper steps could be taken to de-risk their interactions (as would happen at MIRI). It seems that you do agree it matters who EAs are fucking in contexts like this? And therefore that it is relevant to know whether Will MacAskill knew about the undisclosed relationship?
You could plausibly claim it gets disclosed to Sequoia Capital, if SC has shown themselves worthy of being trusted with information like that and responding to it in a sensible fashion eg with more thorough audits. Disclosing to FTX Future Fund seems like a much weirder case, unless FTX Future Fund is auditing FTX’s books well enough that they’d have any hope of detecting fraud—otherwise, what is FTXFF supposed to do with that information?
EA generally thinking that it has a right to know who its celebrity donors are fucking strikes me as incredibly unhealthy.
I think we might be straying from the main point a bit; nobody is proposing a general right to peer into EA sex lives, and I agree that would be unhealthy.
There are some relatively straightforward financial risk management principles which mainstream orgs have been successfully using for decades. You seem to believe one of the pillars of these principles—surfacing risk due to romantic entanglements between parties—shouldn’t apply to EA, and that instead some sort of ‘commonsense’ approach should prevail (inverted commas because I think the standard way is basically common sense too).
But I don’t understand where your confidence that you’re right here is coming from—EA leadership has just materially failed to protect EA membership from bad actor risk stemming at least in part from a hidden conflict of interest due to a romantic entanglement. EA leadership has been given an opportunity to run risk management their way, and the result is that EA is now associated with the biggest crypto fraud in history. Surely the Bayesian update here is that there are strong reasons to believe mainstream finance had it approximately right?
Rereading the above, I think I might just be unproductively repeating myself at this point, so I’ll duck out of the discussion. I appreciated the respectful back-and-forth, especially considering parts of what I was saying were (unavoidably) pretty close to personal attacks on you and the EA leadership more broadly. Hope you had a pleasant evening too!
My (possibly wrong) understanding of what Eliezer is saying:
FTX ought to have responded internally to the conflict of interest, but they had no obligation to disclose it externally (to Future Fund staff or wider EA community).
The failure in FTX was that they did not implement the right internal controls—not that the relationship was “hidden from investors and other stakeholders.”
If EA leadership and FTX investors made a mistake, it was failing to ensure that FTX had implemented the right internal controls—not failing to know about the relationship.
I couldn’t quite bottom out exactly what EY was saying, but I’m pretty sure it wasn’t that. On your interpretation, EY said, “who EAs are fucking is none of [wider] EA’s business [except people who are directly affected by the COI]”. But he goes on to clarify “There are very limited exceptions to this rule like ‘maybe don’t fuck your direct report’ ”. If that’s an exception to the rule that EAs fucking is only of interest to directly affected parties, then it means EY thinks an EA having sex with a subordinate should be broadcast to the entire community. That’s a very strict standard (although I guess not crazy—just odd that EY was presenting it as a more relaxed / less prurient standard than conventional financial risk management).
It also doesn’t address my core objection, which is that EA leadership failed very badly to implement proper financial risk management processes. Generally my point was that EA leadership should be epistemically humble now and just implement the risk management processes that work for banks, rather than tinkering around and introducing their own version of these systems. Regardless of what EY meant, unless he meant ‘We should hire in PwC to implement the same financial controls as every Fortune 500 company’, he is making exactly the same mistake EA leadership made with FTX—assuming that they could create better risk management from first principles than the mainstream system could from actual experience.
By the way, I disagree with the object-level position here too. Every FTX investor needed to know about the COI and the management strategy FTX adopted in order to assess their risk exposure. This would be the standard at a conventional company (if a conventional company knew about such a blatant COI involving their CEO and didn’t tell investors, then their risk officers would potentially be liable for the fraud too, iirc).
Voting ring? That sounds preposterous to me
This comment is, at time of writing, sitting at −7 karma from 5 votes. Can someone who downvoted or strong downvoted this clarify why they did so?
What’s disappointing is not that Eliezer can’t make even a minor acknowledgement of the relevance of the models or experiences of others, or that he is probably outright wrong on the substantive issues, but that Eliezer struggles to communicate and hold a thread in this conversation.
His counterpart is a literal domain expert and maybe a very valuable talent for EA. Considering the totality of the votes and writing, this person is being badgered under what to any outsider should look like scary or unclear norms and power structures of the EA community on its own forum, while Eliezer’s de facto community keeps him afloat.
Eliezer’s behavior is unacceptable for a funded, junior community builder, much less a senior leader.
Imagine a newcomer witnessing this, much less experiencing this.
I agree that it does not seem likely that there was manipulation of the votes here (I cast strong disagreement-votes on multiple comments by Eliezer on this page). But concerns about potential voting manipulation on this forum are reasonable by default, considering that it’s an open platform on which it’s technically possible for someone to vote from multiple anonymous accounts.
Fair enough, as long as that’s the standard applied to all commenters and not just EA leadership. I appreciate that EY agreed there was likely no manipulation after I pointed this out.
It is extremely inaccurate to characterise the relationship between Bankman-Fried and Ellison as merely “dating”, and that people are merely saying that this was “improper”.
Bankman-Fried and Ellison were living together in a shared apartment. An unverified suggestion in one article suggests that Ellison may have been in some form of relationship with other residents of the apartment—residents who included the CTO and Director of Engineering of FTX.
If true, this is a genuinely alarming piece of information that would very obviously have caused anybody to question whether they should have placed their funds in the care of this specific group of people. However this information was not made public, and that lack of transparency is where the problem lies.
People who read this far seem to have upvoted
This statement is incredibly out of touch Eliezer. If CEO #1 and CEO #2 are in a romantic relationship, there is a clear conflict of interest here, especially when not disclosed to the public. In agreement with Anonymous, I also strongly oppose the language you’re using. I also agree with their comments regarding romantic relationships in the workplace. My general stance is 0 tolerance for workplace romance because it’s messy and there are far too many power dynamics at play.
Conflict of interest is the issue my friend. Unbiased decisions cannot happen when one has an other-than-work-relationship with the persons they are dealing with.
...You think it’s important to disclose this conflict of interest when you recommend a grant to someone, but not important when you as a CEO decide on a multi-billion dollar loan to the company where the other person is the CEO?
Disclose to who? The loan was blatantly bad; nobody in a position to take that disclosure should’ve given two flying floops whether the CEOs were dating or not.
Is the word “maybe” here just a style of writing? Should the EA community tolerate some cases in which person A is having sex with a person B who is a direct report of A (in an EA org)?
I realize that inserting hedge words can allow one to publish things using much less time and energy (which can be a very good reason to insert hedge words).
Because as somebody who could potentially be mistaken for a Leader I want to be pretty derned careful about laying down laws to regulate other people’s sexuality; and while something like that would definitely be a red flag at, like, idk, CEA or MIRI, maybe it’s different if we’re talking about a 3-person startup. Maybe you’ll say it’s still ill-advised, but I don’t know their circumstances and there’s also a difference between ill-advised and Forbidden. I feel a lot more comfortable leaving out the ‘maybe’ when I pontificate my legislation about informing a donor that your recommended grantee is one with whom you’ve had a relationship—though even there, come to think, I’m relying on all the donors I ever talk to being sensible people who aren’t going to go “OH NO, PREMARITAL SEX” about it.
...I am confused and somewhat worried by the degree to which voters on this post seem to feel that it’s not an important heuristic to try to construct your community regulatory process in a way that doesn’t revolve around people’s sex lives, except in so far as the sex itself is per se a bad thing (eg nonconsensual or under conditions where positive consent could not reasonably be determined).
It’s not about the sex in and of itself, it’s about the conflict of interest and favouritism. Romantic love interest is enough for that too. EA could probably learn a lot from how mainstream orgs deal with this.
Yes—I almost can’t believe I am reading a senior EA figure suggesting that every major financial institution has an unreasonably prurient interest in the sex lives of their risk-holding employees. EA has just taken a bath because it was worse at financial risk assessment than it thought it was. The response here seems to be to double-down on the view that a sufficiently intelligent rationalist can derive—from first principles—better risk management than the lessons embedded in professional organisations. We have ample evidence that this approach did not work in the case of FTX funding, and that real people are really suffering because EA leaders made the wrong call here.
Now is the time to eat a big plate of epistemically humble crow, and accept that this approach failed horribly. Conspiracy theorising about ‘voting rings’ is a pretty terrible look.
I feel like people are mischaracterizing what Eliezer is saying. It sounds to me like he’s saying the following:
“Sure, the fact that the two were dating or having sex makes it even more obvious that something was amiss, but the real problem was obviously that Alameda and FTX were entangled from the very start with Sam having had total control of Alameda before he started FTX, and there were no checks and balances and so on, so why are you weirdos focusing on the sex part so much and ignore all the other blatant issues?!”
That seems like a very charitable reading of the comment
“who EAs are fucking is none of EA’s business. There are very limited exceptions to this rule like … none of which are being suggested here.”
I’d suggest that given the high stakes of the situation at the moment it is especially important not to inadvertently give the impression that EA leadership think they have privileged insight into financial risk management that they actually don’t. If EY has merely mangled his argument (as you suggest) it would be very sensible for him to edit his comment to reflect that, and apologise for implying that vote rigging was the only reason he could have been down voted.
I was commenting on his overall stance from his comments throughout the threads here, not only on that particular first comment. I agree that the part you cite doesn’t sound defensible. I considered his further comments to be admissions of “Okay, you all have a point, but …” (If I’m right with my interpretation, he could’ve been more clear about the part of “sorry, you all have a point and the initial comment was too crude.”)
Edit: FWIW, I thought the info/arguments you gave about why it’s common practice in finance to carefully monitor romantic or sexual conflicts of interests were compelling, and your point about how EAs maybe shouldn’t think they can do better based on first-principles reasoning also seems wise.
I’m under the impression that mainstream orgs deal with this rather poorly, by having the relationships still happen, but be Big Dark Forbidden Secrets instead of things that people are allowed to actually know about and take into account. But they Pretend to be acting with Great Propriety which is all that matters for the great kayfabe performance in front of those who’d perform pearl-clutching otherwise. People falsifying their romantic relationships to conform to ideas about required public image is part of our present culture of everything being fake; so what loves you forbid from being known and spoken of, by way of trying to forbid the loves themselves, you should forbid very hesitantly.
I think our current culture is better, even in light of current events, because I don’t think the standard culture would have actually prevented this bad outcome (unless almost any minor causal perturbance would’ve prevented it). It would mean that SBF/C’s relationship was just coming out now even though they’d previously supposedly properly broken up before setting up the two companies, or something—if we learned even that much!
One thing that people in mainstream orgs do, if they want to act with integrity, is resign from roles/go work somewhere else when they want to start a relationship that would create a conflict of interest whilst both are in their current positions (or if they value their job(s) more, give up on the idea of the relationship).
The first half of this comment is tangential at best. It’s also a bit odd to me how much you are defending “our current culture” when someone posted on the forum just yesterday expressing concerns about said culture.
...are you suggesting that nobody ought to dare to defend aspects of our current culture once somebody has expressed concerns about them?
I’m suggesting that you show some humility.
...by not saying anything in favor of protecting some aspect of our current culture, when somebody else has just recently expressed concerns about it? That’s a rule?
I would have expected the opposite corner of the two-axis voting (because I think people don’t like the language).
First of all, this language is wildly unhelpful, even outside of the current situation.
Secondly, this isn’t close to true, and shows ignorance and blasé disregard for a wide range of social and power structures in the real world, such as corporate environments. This is not some leftist social justice statement. Even before the #MeToo era, there was a vast range of sexual conduct that wouldn’t have been acceptable for a middle/senior leader.
Finally, you, Eliezer Yudkowsky, are a major part of the problem with EA, as you put it. Your intellectual contributions are poorly regarded in the real world and across EA. This is not a “sneerclub” view, but one held by open-minded outsiders and senior EAs across all cause areas. Even on a full AI safety worldview, your recent views/contributions on AI safety have not been positive, and would have been an issue to work around even without the changes in longtermist funding.
As a heterosexual male who has interacted with many female EAs, I almost never consider any sort of romantic or sexual relationship with any female EA, and absolutely not a junior one. This is partially the consequence of past events due to the misconduct of others, of which I am entirely innocent. Many other male EAs have similar personal policies, for the same reasons.
As the public content above shows, it is not salacious to say that even this basic idea is ignored by others. I have plausible reasons to believe this is costly, and I resent the cost the movement bears, carrying on like this, due to this kind of behavior.
Note that this account is anonymous, but not because of a desire to communicate or criticize without retaliation, and my identity will be clear.
This statement has been downvoted and removed from view. I’m pretty skeptical that is helpful.
Perhaps ditch the “Your intellectual contributions are poorly regarded” thread; at best, it is unsupported and off-topic.
Morale is low right now and senior EA figures are occupied and some have come under direct criticism, whether justified or not. In this environment, it’s difficult to communicate or express leadership. Only the CEA community health team seems to be taking the initiative, which must be very difficult and this is heroic.
In this situation there is often gardening of the online space, which tends to be performed by marginal actors. LW and MIRI have been left mostly unscathed by the FTX disaster, and now Eliezer and Rob B (a professional communicator employed by MIRI) are highly active. In the sense of advancing their cause, that’s OK and natural. They are also helpful in tempering gardening attempts by other actors.
Note that I don’t think Eliezer’s representations, such as downplaying his interactions with FTX (he might have been a regrantor and was probably more actively jockeying/seeking money than it seems, which is understandable) and other statements, are entirely truthful or disinterested.
More importantly, the low opinion of Eliezer’s contributions is well known, relevant, and should be communicated. (The quality of his output and of MIRI’s was considered low, which is why they received relatively little funding and were unscathed[1].)
The fact they were not funded by a bad person is not a sign of virtue or quality, to say this in “LW speak”, see Reversed Stupidity Is Not Intelligence.
I think this was downvoted within the first 10 seconds; I am bumping it so it gets read.
As of writing, I want to point out that Eliezer’s comments, which are probably a strained digression to explain his original comment (and include speculation of a voting ring against him ???), have the following vote score.
My comments below this have:
This is not only not justifiable by content, this is literally suppressing criticism of a promoted EA figure on the EA forum.
Note that my comment is not policing or calling out private actual interpersonal conduct, and it seems well justified given the parent comment, as well as the wide range of topics discussed. A week ago, I think we all know, I would have been voted down for factual but off-color statements about SBF’s business practices.
Now, here, I make the additional, specific accusation that the existence of Eliezer Yudkowsky as a major public figure in EA is out of proportion to his contributions or his popularity in EA, and is partially supported by de facto organized coordination by a group of people on the LessWrong and Effective Altruism forums.
That is, my comment has been shared on, say, private FB groups, Slack, or Discord servers, with the expected aim of managing this content.
I encourage examination of the view/vote graph and the origin of view/votes for my comment and possibly other content.
I am not attacking the cultures, views to help the world or ways of interpersonal relationships of people close to Eliezer. I do not want Eliezer to be harmed, or reduce his agency to contribute in the ways he wants to.
Eliezer is popular here. He founded LessWrong, MIRI and the AGI x-risk community. It’s not surprising you are getting downvoted for criticising his work (note I have not downvoted you, just explaining here).
Not just for criticism of his work but also for bringing this up in a totally unrelated context. If you’re (I mean the anonymous commenter) bothered by the way Eliezer dismisses concerns around “sex within orgs or close networks makes things messy and often ends badly,” I think that’s fair enough and I wouldn’t have downvoted your comment for it. But then adding that you think his intellectual contributions are also shit (or at least are seen as bad by people outside the movement) – that just seems a bit mean-spirited (besides IMO being wrong).
… I feel sad and uncomfortable about the commenters here criticizing Anonymous for “personally attacking” Eliezer, “bringing this up in a totally unrelated context”, being “mean-spirited”, etc.
It surely matters whether or not the intellectual contributions of someone in Eliezer’s reference class are bad, and in a world where they are bad, I care a lot more about learning that fact than about exactly which thread or subthread the discussion occurs on.
I’m glad you mention “besides IMO being wrong” at all. But where’s the objection that no supporting argument has been given? Where are the requests for specifics, so that it’s even possible to evaluate Anon’s claim by comparing notes about whether a given idea is a good intellectual contribution?
The problem with “More importantly, the low opinion of Eliezer’s contributions is well known” isn’t that it’s rude or off-topic; it’s that it’s maximally vague, more like a schoolyard taunt (“Oh, everyone knows X is lame, it’s so obvious I don’t even need to say why!”) than like a normal critique of someone’s intellectual output. If you think Eliezer’s wrong about tons of stuff, give some examples so those can be talked about, for goodness’ sake.
I agree that maximal vagueness is the much bigger issue with the intellectual criticism part of the comment than its unrelatedness and should also have said so. (And also via that vagueness implying that there’s a consensus where there IMO isn’t.)
I have, as it happens, a low opinion of Eliezer’s influence on EA (though I admit I’ve hardly read his stuff), but I still downvoted a generalized off-topic nasty personal attack.
Is the romantic relationship that big a deal? They were known to be friends and colleagues + both were known to be involved with EA and FTX future fund, and I thought it was basically common knowledge that Alameda was deeply connected with FTX as you show with those links—it just seems kind of obvious with FTX being composed of former Alameda employees and them sharing an office space or something like that.
Romantic love is a lot more intense than mere friendship! Makes conflicts of interest way more likely.
My $0.02 - (almost) the entire financial and crypto world, including many prominent VC funds that invested in FTX directly, seems to have been blindsided by the FTX blowup. So I’m less concerned about the ability to foresee that. However, the 2018 dispute at Alameda seems like good reason to be skeptical, and I’m very curious what was known by prominent EA figures, what steps they took to make it right, and whether SBF was confronted about it before prominent people joined the Future Fund, etc.
+1, I think people are applying too much hindsight here.* The main counter consideration: To the degree that EAs had info that VCs didn’t have, it should’ve made us do better.
*It’s still important to analyze what went wrong and learn from it.
Hi Milan,
I come from traditional accounting/internal audit, where governance teams and internal controls are at the very least installed, and due diligence is a best practice, especially when large sums of money are being distributed. I am new here to the EA community and had expected similar protocols to be in place, as large-scale fraud is not some new thing—it brought down the accounting profession in 2001 (Enron) and caused the mortgage crisis in 2008 (Lehman).
I guess what is clear to me is that EA lacks expertise in fraud/error detection, and moreover has to make some improvements in the near future.
All the best,
Miguel
+1 on this. It is painfully clear that we need to radically improve our practices relating to due diligence moving forward.
Sorry to be jumping in having never posted here, but I’ve been following along for a while and I’m fascinated.
Everything mentioned in the Sequoia piece about MacAskill’s involvement is strange. I’d be interested in hearing more about what his “pitch” was like back then:
What was with this recruitment process? The notion of “signing on” jumps out at me. Also MacAskill is the one who told SBF to go to Jane Street?
That’s so weird. I get the idea of trying to spread the gospel, but has MacAskill ever spoken about his motives for...going around meeting college kids to...I don’t even know what the correct description would be. Gain acolytes?
I think his official public motive is obvious: he’s always tried to get people to do things he thinks have positive altruistic impact—for example, by writing books advocating they do stuff—so he was doing the same with potentially influential people at a more 1-1 level. I don’t think this is something that’s ever been hidden! I can see why you might reasonably think this sort of influence-seeking feels a bit off, since on some level it is an attempt to exercise power in a way that bypasses democracy. I’m sure someone has criticized it on those grounds. But organizations recruiting talented college students is quite normal in itself, even if they don’t usually have to sign on to a detailed philosophy. And even the latter is hardly unique: think of someone networking informally for people to get involved with their new libertarian think tank, or socialist magazine.
Oh, yeah, I totally agree. I don’t think of it as a way to bypass democracy or exercise undue influence. The main thing for me is that SBF and MacAskill are so interconnected. I thought it was primarily a philosophical connection, but the financial connection seems just as important, especially since MacAskill has been involved in every single part of SBF’s career: the first job at Jane Street, the arbitrage, the founding of Alameda, and now all the FTX crap.
Outside recent political donations, it seems that SBF was shoveling most of his donations money back into MacAskill’s organizations. (Someone else linked to his old blog, which gives a glimpse of this: http://measuringshadowsblog.blogspot.com/)
Now that SBF’s biggest endeavor has turned out to be a giant scam, it’s important to understand what MacAskill knew about everything and whether any of the same kind of financial misconduct is going on at any of the charitable organizations. I’m sure we’ll know a lot more soon, though.
‘Now that SBF’s biggest endeavor has turned out to be a giant scam’: I’m not sure that is quite right. As far as I can tell, the main definite issue that has been proven is that he stole money when he lost money, which is not quite the same as the endeavor itself being a scam from the beginning. Not that that is morally any better, and it was so deeply, deeply morally wrong that I strongly suspect SBF has done other very bad things, but it’s good to be precise, I think. (I agree that Will had a severe conflict of interest, but I do think we have to stress that there’s no strong direct evidence of wrongdoing on his part yet, and that getting rich people to fund you is a standard charity/political-party model used by everyone, even if it’s known to be problematic and even if Will and SBF seem to have been unusually connected.)
They’re probably talking about the Giving What We Can pledge here, which MacAskill co-founded. I don’t see why anyone would consider that controversial.
My naive moral-psychology guess (which may very well be falsified by subsequent revelations, as many of my views have been this week) is that we probably won’t ever find an “ends justify the means” smoking gun (e.g., an internal memo from SBF saying that we need to fraudulently move funds from account A to B so we can give more to EA). More likely, systemic weaknesses in FTX’s compliance and risk-management practices failed to prevent aggressive risk-taking and unethical profit-seeking and self-preserving business decisions that were motivated by some complicated but unstated mix of misguided pseudo-altruism, self-preservation instincts, hubris, and perceived business/shareholder demands.
I say this because we can and should be denouncing ends-justify-the-means reasoning of this type, but I suspect that, in the heat of a perceived crisis, very few people will actually invoke it. I think we will prevent more catastrophes of this nature by focusing more on integrity as a personal virtue and on the need for systemic compliance and risk-management tools within EA broadly and within highly impactful/prominent EA orgs, especially those whose altruistic motives will be systematically in tension with perceived business demands.
Relatedly, I think a focus on ends-justify-the-means reasoning is potentially misguided because it seems super clear in this case that, even if we put zero intrinsic value on integrity, honesty, not doing fraud, etc., some of the decisions made here were pretty clearly very negative expected-value. We should expect the upsides from acquiring resources by fraud (again, if that is what happened) to be systematically worth much less than reputational and trustworthiness damage our community will receive by virtue of motivating, endorsing, or benefitting from that behavior.
I thank you for apologizing publicly and loudly. I imagine that you must be in a really tough spot right now.
I think I feel a bit conflicted on the way you presented this.
I treat our trust in FTX and our dealings with SBF as bureaucratic failures. Whatever measures we had in place to deal with risks like this weren’t enough.
This specific post reads a bit to me like it’s saying, “We have some blog posts showing that we said these behaviors are bad, and therefore you could trust both that we follow these things and that we encourage others to, even privately.” I’d personally prefer it if, in the future, you didn’t focus on the blog posts and quotes. I think they act as only very weak evidence, and your use of them makes it feel as though they were more than that.
Almost every company has lots of public documents outlining their commitments to moral virtues.
I feel pretty confident that you were ignorant of the fraud. I would like there to be more clarity of what sorts of concrete measures were in place to prevent situations like this, and what measures might change in the future to help make sure this doesn’t happen again.
There might also be many other concrete things that could be done to show your (and other senior people’s) care about these values.
Again, I appreciate the words, but if there’s one thing that the recent scandal taught us, it’s that it’s hard to take much from words. I don’t blame you here—but I would like us to have a culture where EAs can focus on evidence of credibility that’s much more high-signal than a list of previous altruistic writings.
All that said, I imagine that more rigorous evidence here will take more time.
EA posts are very unlike company virtue statements. They include philosophical arguments (at least some of the screenshots and linked posts do). I agree that there’s more that can (and maybe should) be said, but I think linking to extensive discussion of naive/act utilitarianism vs. global consequentialism, ethical injunctions, etc., is a great way to show that EAs have seriously engaged with these topics and come down pretty decisively on one side.
[Edit: I have a reply to this in the comments]
I think it’s nice, but I also think we should be raising the bar of the evidence we need to trust people.
SBF and the inner FTX crew seemed very EA. SBF had a utilitarian blog[1] that I thought was pretty good for its time (this was ~2014).
He repeatedly spoke about how important it was for crypto exchanges to not do illegal activity. He even actively worked to regulate the industry.
I’d bet that SBF spent a lot more effort speaking and advocating about the importance of trustworthiness in crypto than perhaps any of us has spent on the importance of trust and ordinary good moral principles.
Sam literally argued for trust and accountability before Congress.
From what I understand, he was the poster boy for what trustworthy crypto looks like.
At the very least, we could really use measures that would have caught an SBF-lite.
> EA posts are very unlike company virtue statements.
Sure, but SBF definitely got through. I’m sure any of his co-conspirators also would have. EA-adjacent people can clearly fool EAs using these sorts of methods.
(I considered raising this issue more in the first post, but am happy to add it now that there’s push-back.)
I can’t find the blog now, and wouldn’t be surprised if it were no longer online. It’s possible I’m misremembering. I remember the blog having like ~6 posts or so, in around 2014. If anyone else has a link, it seems valuable to share it.
I believe this is SBF’s blog: http://measuringshadowsblog.blogspot.com/
Coming back to this, I think my point isn’t straightforwardly fair. My post above uses a lot of evidence in a way that makes it seem like the point is very obvious.
I think that bars like “does the person have public writing showing they deeply understand EA principles” are generally pretty decent and often have worked decently well.
The case with SBF does seem extremely unusual to me. Protecting against it isn’t just some “obvious set of regular measures”. It might take a fair deal of thought and effort.
I think we should be thinking about how to put in that thought and effort. I think we should be working to find and adopt verification measures that would have at least caught some lite-SBF.
So, the example of SBF seemed too good not to share, but it is extreme, so it can’t be taken too much as a typical example to be worried about.
I still think that we should set the bar higher than a few blog posts for situations like this, though, and I assume Will would agree. (I take it he meant this much more as a quick public statement than as real evidence of innocence to EAs.)
This lady posted this 3 weeks ago. Some people in the crypto space have been warning about SBF and FTX for over a year.
You should find a better example. The video above doesn’t contain a relevant warning. Her criticism is that SBF is pro-regulation, not that he was defrauding his customers, and she doesn’t make any predictions about his exchange going down due to insolvency.
Not sure if you find this a better example:
https://twitter.com/Hedgeye/status/1591240664779743232
I agree with Lukas, though I also suspect this was mostly a failure of bureaucracy / competence / a few individuals’ moral character, rather than a failure that has much connection to EA ideology. I expect we’ll have more clarity on that later, as facts come to light.
Eh, let’s not overrate how much abstract arguments actually achieve their goals. Granted, the broader EA sphere has responded surprisingly well, but we need to move beyond abstract words and instead focus more on what people actually do.
What do you mean? This community is largely composed of people who do really weird things in their day jobs based on abstract arguments.
Thanks for verifying that for me.
(Edit: I now think I misread this a bit: this post is really meant as a hastily written update, not a well-reasoned apology. I would appreciate just a heads-up that a longer doc is coming.)
I think one issue I have is that this post seems to be doing a few things at once, and it’s not very clear to me.
1. Publicly apologize for what happened
2. Re-affirm to those viewing this that things like fraud are not publicly endorsed by EA leadership
3. Outline Will’s involvement in the situation and describe who was responsible for it
4. Make it clear that Will wasn’t involved in the bad parts of the scandal, and can be trusted in the future
This post came from a Twitter thread. It seems like it was hastily written.
I don’t think this post does a great job at all points. It seems likely to me that it wasn’t meant to.
If this post is meant to be the best public statement of all 4 things, I would really like that to be made clear.
Especially when they contain massive qualifiers like “the ends do not ALWAYS justify the means”
That doesn’t strike me as a massive qualifier, it strikes me as something that’s straightforwardly true (or as true as a moral claim can be). For example, if you’re in a situation where you can lie to save 1B people from terrible suffering, then I bet most people think it’s not only acceptable, but obligatory to lie. If so, the ends clearly do sometimes justify the means.
I think it is morally correct and that people would agree with it, but I don’t think of it as strong evidence for the claim “we are against this type of behavior.”
So “massive” is too big a word, but the qualifier in some sense lets everything in and isn’t powerful.
Seems to me that you have it exactly backwards? Everyone agrees that the ends usually justify the means—e.g., it’s a good idea to go grocery shopping because this results in getting food. “Are there exceptions?” is exactly the claim that naive consequentialists are getting wrong.
I want to say that I have tremendous respect for you, I love your writing and your interviews, and I believe that your intentions are pure.
How concerned were you about crypto generally being unethical, even without knowledge of the possibly illegal, possibly fraudulent behaviour? Encouraging people to invest in “mathematically complex garbage” seemed very unethical, due to the harm to the investor and to the economy as a whole.
SBF seemed like a generally dishonest person. He ran ads saying, “don’t be like Larry”. But in this FT interview, he didn’t seem to have a lot of faith that he was helping his customers.
“Does he worry about the clients who lose life-changing sums through speculation, some trading risky derivatives products that are banned in several countries? The subject makes Bankman-Fried visibly uncomfortable. Throughout the meal, he has shifted in his seat, but now he has his crossed arms and legs all crammed into a yogic pose.”
It is now clear that he is dishonest, given that he said on Twitter that FTX US was safe when it wasn’t (please correct me if I’m wrong here).
I think that even SBF thinks/thought crypto is garbage, yet he spent billions bailing out a scam industry, possibly with customer deposits. If he had been successful, the result would have been more people getting scammed.
I hate to say “I told you so”, but I told many people in EA privately that I thought SBF was possibly doing more harm than good. The mechanism I argued was that associating EA with a scam industry would hurt EA and increase x-risk. Everyone I told either dismissed my concerns as implausible or defended crypto.
I obviously regret not saying this publicly; I was afraid of the impact this could have on my job security.
I believe that I, along with many other people in EA, was blinded by his massive philanthropy, in an Emperor’s New Clothes kind of situation. Or maybe No-Face from Spirited Away.
I think I was greedy and had a fear-based mindset. I don’t know if the community was encouraging independent thought or free speech regarding SBF/FTX/crypto. Obviously, on the forum, people were, but maybe less so on the inside when employed in an EA org, particularly the higher-ups.
I’m feeling a lot of things right now. If this turns out to be an Enron/ Madoff situation, I don’t know if I can trust you anymore. I don’t know if I should stay in EA. I would feel very sad if you publicly praised someone who turned out to be morally bankrupt. Of course, everyone makes mistakes. But still, some trust has been lost.
My inside view is that LVT is on a similar order of magnitude in importance to AI safety; while it seems more neglected and tractable, the scale is, of course, much smaller. Everyone in EA said I was wrong, and I trusted them, so I updated my outside view. But now I don’t know what to think. I hope I’m using the terms inside view and outside view correctly. Sorry if this is unclear or too much of a rant. I’m very upset about all of this.
Edit: Many non-profits don’t know what to do when being offered donations by morally questionable billionaires. Turning it down is, of course, challenging and possibly not the most impactful action. If they accept the money, they usually try to avoid praising the donor, being publicly associated with them or making them a central figure in their movement. Why didn’t you do that?
Thanks for posting this, Dean. Just commenting because it aligns really well with everything I am feeling too.
Also, to add to that: whether or not crypto is in itself ethical, it’s known to be a very unstable sector, and one with a particularly negative reputation. Was there any discussion of how to compensate for that potential volatility, and of the potential reputational risks of being associated with it?
These are great points and a poignant question. My only disagreement is that crypto is not a scam industry: it has one clear use case, and that use case is to help criminals and criminal organizations stow away money far from government oversight (thanks to @geuss for pointing this out).
That’s true. Crypto seems to have evolved from a convenient currency for criminals to a way for “entrepreneurs” to scam ordinary people.
None of them have the properties of a good currency. Nor do they generate cash flow, so they aren’t real investments. So in the best case, they are pure bubbles.
It is surprisingly easy to scam people into buying these economically worthless lines of code by simply repeating “DeFi”, “The Federal Reserve is evil” (for some unexplained reason), “You would have been a billionaire if you bought Bitcoin in 2010”, “Don’t miss out”, “It’s the future”, “Don’t be a FUDer”.
This isn’t enough for these greedy “entrepreneurs”, so they have resorted to well-known financial scams. Ponzi Schemes (yield farming); Pump and Dump (most new cryptocurrencies); Pyramid Schemes (Web 3 “jobs”), and taking risky bets with deposits (FTX).
The result of all of this is that people waste their lives in an industry that can’t produce anything economically valuable, huge amounts of energy are wasted, and financial resources that could be invested in productive businesses sit in worthless lines of code.
And of course, many ordinary hard-working people lose their life savings.
Banks, P2P platforms, listed corporations, Private Equity Firms and VCs all invest in potentially economically valuable enterprises that could generate cash flow. Mainstream finance isn’t perfect, but crypto isn’t the answer. The loss in economic growth from the underinvestment caused by crypto is potentially considerable.
More crimes are committed with cash; is cash a scam too? Criminals prefer cash because, as it turns out, most crypto transactions are very trackable, unlike cash.
People who do on-chain analysis have tracked FTX’s crypto movements, something you would be unable to do with cash. It seems many here have a fundamental misunderstanding of cryptocurrency and why it was invented in the first place.
Sam was using his own invented crypto that was basically worthless as collateral to borrow real money to then trade or buy politicians, or whatever else he was doing with it. His crypto coin was a scam, and he literally went on Bloomberg and bragged about his “Ponzi” scheme, using those words. More than a few people in the crypto community have been warning about Sam for months, but those people were mocked, ridiculed, and ignored.
What Sam did was completely against the ethos of crypto (especially when he started lobbying for regulations and attempting to buy elections), just as what he did was against the ethos of EA.
It isn’t fair or reasonable to judge all of crypto by his actions, just as it wouldn’t be fair or reasonable to judge all of EA by Sam Bankman-Fried.
Hi Sam, thanks for writing this. I’m not sure why I got so many disagree-votes. I don’t think crypto is a scam because criminals use it; I think cryptocurrencies don’t have the properties of a good currency. Maybe some of the stablecoins have some of them, but many of those seem to have crashed, and the transaction costs don’t seem much better than traditional money transfers. It’s possible that the traceability of the blockchain is an advantage over government fiat currencies, but many businesses don’t want everything to be traceable, so this only seems like an advantage to the police.
Nor do they seem like a real investment, because they don’t generate cash flow. They are just a sort of faith-based store of value, like gold. Is that wrong?
There are also many things in Web 3 that look like literal scams.
I listed: Ponzi Schemes (yield farming); Pump and Dump (most new cryptocurrencies); Pyramid Schemes (Web 3 “jobs”), and taking risky bets with deposits (FTX).
Yes, I agree with you regarding the Bloomberg interview. I linked this as evidence that he didn’t think he was helping his customers. It was quite a viral interview which is why I’m surprised more people in EA weren’t talking about it.
I can’t claim to be a crypto expert, but the whole thing looks pretty unethical to me when you see ordinary people regularly losing all of their money. I wish EA was less associated with what I see as a scam industry.
Just want to let you know there are Georgist EAs out there. In my head I put something like 80% odds that Georgism (or maybe just LVT) will be a cause area within the next ten years (i.e., similar to how initially dismissed areas like mental health, wild animal suffering, or great power conflict are now considered major cause areas in EA).
A second reason I’m writing this comment is so I can look back on it in ten years.
Thank you, Cornelis. Yes, I’m a big fan of Lars Doucet. One of my goals was to try to make Georgism an EA cause area; YIMBYism is already a minor EA cause area. However, I don’t want to distract from AI safety, particularly if EA is now more funding-constrained. So I might just do earning to give.
Thank you so much for writing this! I wonder the same things regarding crypto, and I really don’t understand why so many EAs seem(ed) fine with it. I should’ve expressed these concerns more loudly before.
Will:
One item that should be a part of your reflections in the days and months to come is whether you are fit to be the public face of the effective altruism movement, given your knowledge of Sam’s unethical behavior in the past, ties to him going back to 2013, and your history of vouching for Ben Delo, another disgraced crypto billionaire.
The EA community has many excellent people—including many highly capable women—who are uninvolved in this scandal and could step up to serve in this capacity.
I am glad you felt okay to post this—being able to criticise leadership and think critically about the actions of the people we look up to is extremely important.
I personally would give Will the benefit of the doubt regarding his involvement in or knowledge about the specific details of the FTX scandal, but, as you pointed out, the fact remains that he and SBF were friends going back nearly a decade.
I also have questions about Will MacAskill’s ties with Elon Musk, his introduction of SBF to Elon Musk, his willingness to help SBF put up to 5 billion dollars towards the acquisition of Twitter alongside Musk, and the lack of engagement with the EA community about these actions. We talk a lot about being effective with our dollars, and there are so many debates around how to spend even small amounts of money (e.g., at EA events or on small EA projects), but it appears that helping SBF put up to 5 billion towards Twitter, to buy in with a billionaire who recently advocated voting for the Republican party in the midterms, didn’t require that same level of discussion/evaluation/scrutiny. (I understand that it wasn’t Will’s money, and that SBF possibly couldn’t have been talked into putting it towards other causes instead, but Will still made the introduction nonetheless.)
Of course. This reads as almost bizarre: it would be a baby-eater-type conspiracy theory to think that Will (or anyone else in EA leadership) knew about this. That’s just not how things work in the world. The vast majority of people at Alameda/FTX didn’t know (inner circle may have been as small as four). I mean, maybe there’s a tiny chance that Sam phoned up someone a week ago and wanted a billion in secret, but you can see how ridiculous that sounds. I mean picture the conversation: “Hey [EA leader], turns out I fucked up. Can you wire me a billion? You’re down with me secretly trading with customer funds, right?”
In any case, I think that isn’t Kerry’s point. The point isn’t “Did Will know about concrete fraud,” but rather, “Was there reason to think fraud is unusually likely? And if so, did people take the types of precautions that you’d want to take if you thought you were dealing with someone capable of doing shady things?”
For comparison, think of Zuckerberg. If the Social Network movie is roughly accurate, then Zuckerberg acted in utterly despicable and unethical ways, lying to the Winklevoss twins repeatedly and stealing their idea (and then screwing over the guy played by Justin Timberlake). So anyone aware of this would have been faced with the choice to (1) not trust him at all and not invest in Facebook or associate with him; (2) trust him like you’d trust any nice-seeming person without a suspicious history; or (3) try to talk to him (e.g., give feedback, try to lecture or mentor him) about how that was unacceptable, ask for concrete ways for him to change, maybe use whatever leverage you have to set up good governance structures and accountability procedures, or at least nudge things that way. I’d say (2) is clearly stupid and immoral, (3) might be defensible depending on one’s judgment call (and subsequent execution), and (1) can certainly be appropriate. In the case of Facebook, I don’t know if Zuckerberg got any feedback from anyone or how Facebook set up its board or accountability procedures, but it looks like, so far, Zuckerberg never ended up doing anything crazily illegal that destroyed billions in value. (Though maybe some would argue that social media isn’t good for the world and that its inventors could maybe have taken more precautions to make it less bad.)
I don’t think leadership needed to know how the sausage was made to be culpable to some degree. Many people are claiming that they warned leadership that SBF was not doing things above board, and if that’s true, it has serious implications, even if they didn’t know exactly what SBF was up to.
Note: I am not claiming that anyone, specific or otherwise, knew anything.
Thanks for your response. On reflection, I don’t think I said what I was trying to say very well in the paragraph you quoted, and I agree with what you’ve said.
My intent was not to suggest that Will or other FTX future fund advisors were directly involved (or that it’s reasonable to think so), but rather that there may have been things the advisors chose to ignore, such as Kerry’s mention of Sam’s unethical behaviour in the past. Thus, we might think that either Sam was incredibly charismatic and good at hiding things, or we might think there actually were some warning signs and those involved with him showed poor judgement of his character (or maybe some mix of both).
Actually...
Facebook parent company fined in largest campaign finance penalty in U.S. history
Oct. 27th, 2022
A Washington state judge on Wednesday fined Facebook parent company Meta nearly $25 million for repeatedly and intentionally violating campaign finance disclosure law, in what is believed to be the largest campaign finance penalty in U.S. history.
The penalty issued by King County Superior Court Judge Douglass North was the maximum allowed for more than 800 violations of Washington’s Fair Campaign Practices Act, passed by voters in 1972 and later strengthened by the Legislature. Washington Attorney General Bob Ferguson argued that the maximum was appropriate considering his office previously sued Facebook in 2018 for violating the same law.
Is Ben Delo a “disgraced crypto billionaire”? From Jess Riedel’s description, it wasn’t obvious to me whether the thing BitMEX got fined for was something seriously evil, versus something closer to “overlooked a garden-variety regulation and had to go pay a fine, as large businesses often do”.
(Conflict-of-interest notice: I work for MIRI, which received money from Open Phil in 2020 that came in part from Ben Delo.)
I’d prefer that the discussion focus on more concrete, less PR-ish things than questions of who is “fit to be the public face of the effective altruism movement”. The latter feels close to saying that Will isn’t allowed to voice his personal opinions in public if EAs think he fucked up.
I’d like to see EA do less PR-based distancing itself from people who have good ideas, and also less signal-boosting of people for reasons orthogonal to their idea quality (and ability to crisply, accurately, and honestly express those ideas). Think less “activist movement”, more “field of academic discourse”.
I’d be interested to know why you thought it relevant to mention “women” specifically?
It seems like there’s this expectation of public figures to always have acted in a way that is correct given the information we have now—basically hindsight bias again.
One of the many wildly charismatic people you hang out with later turns out to be No Good? Well, of course you shouldn’t have associated with them. One of the many rumours you might have heard turns out to be true and a big deal? Of course you should have acted on it.
I don’t think this is very fair or useful. I guess we might worry that the rest of the world will think like that but I don’t see why we should.
I, in general, share your sentiments, but I wanted to pick up on one thing (which I also said on Twitter originally).
While it might sound good to say people should be honest, have integrity, and reject ‘ends justify the means’ reasoning, I do not see how you can expect people to do all three simultaneously: many people, including many EAs and almost certainly you, given your views on moral uncertainty, do accept that the ends sometimes justify the means. Hence, to go around saying “the ends don’t justify the means” when you think that sometimes, perhaps often, they do, smacks of dishonesty and a lack of integrity. So, I hope you do write something further to your statements above.
It seems like the better response is to accept that, in theory, the ends can sometimes justify the means—it would be right to harm one person to save *some* number more—but then say that, in practice, defrauding people of their money is really not a time when this is true.
I agree… I was very bothered by the categorical proscriptions against “ends justifying the means”, as well as the seeming statements that some kinds of ethical epistemology are outside the bounds of discourse. That seemed very contrary to the EA norm that open discourse on morality is essential to our project.
This has indeed always been the case, but I’m glad it is so explicitly pointed out now. The overgeneralization from “FTX/SBF did unethical stuff” to “EA people think the ends always justify the means” is very easy to make for people who are less familiar with EA; perhaps even SBF fell for this kind of reasoning, though his motivations are speculation for now.
It would probably be for the better to make the faulty nature of “ends justify the means” reasoning (or the distinction between naive and prudent utilitarianism) a core EA cultural norm that people can’t miss.
Please, someone explain to me how the information publicly available years ago did not clearly indicate this was a fraud risk: before starting FTX, SBF engaged in the Kimchi Premium, i.e., arbitrage on South Korean and perhaps Japanese exchanges. South Korean authorities may have a word to say about that, considering that such arbitrage was as illegal then as it is now, and SBF used an EA cutout to carry it out.
How did the goodwill MacAskill says existed with SBF originate, and what breaking point and limits were defined? There is much more explaining to do here than pointing at passages of your recent book, I am afraid.
The second Google hit for it is Investopedia (no idea how reliable a source it is), which claims that it was widely perceived as legally unproblematic until recently, but might actually not have been: https://www.investopedia.com/terms/k/kimchi-premium.asp
‘Was the Kimchi Premium Associated with Illegal Money Transfers?
While it was usually assumed that the Kimchi premium was innocuous, caused by technical limitations of the Korean banking system and the popularity of crypto, a new investigation in the summer of 2022 suspects that more than $3.4 billion of illegal foreign transactions in the country stemmed from cryptocurrencies.’
No further source is cited. I find it hard to tell whether this was something that CEA should have reasonably known was dodgy when SBF exploited it, and how dodgy they should have thought it was, but it certainly seems very worth investigating in any postmortem. In general, I think the decision that actually caused harm to the general public was helping persuade SBF to set up Alameda in the first place, as from that point on he was probably perfectly capable of making a lot of money, then losing it, and stealing from his depositors whether or not he had the goodwill of EAs. (Even if getting a good reputation was somehow necessary, he could have just given to other charities and not mentioned EA, and we’d probably have shut up about him even if we’d officially decided he was bad. And it’s not clear a reputation for public charity was particularly important to FTX’s success, let alone Alameda’s, anyway.) But that decision might not actually have been a bad one based on information available to the people who persuaded/influenced him at the time: “set up a hedge fund” is not obviously immoral, unless you have specific evidence that the person you’re getting to do it is really dodgy, or the business model of the hedge fund is morally dubious. So my guess is that, to know whether the really crucial harm-causing decision was actually bad on the information available at the time, or just unlucky, we need to know about this.
You are my first reply in this forum, friendly and thorough as one would desire, and I have come at a bad time, or a time of need and suffering, for a group that seems to know and care about these things, doctor.
I address you by your title as I would like to offer a reflection on someone who also carried titles: Vice Admiral Sir Francis Drake. The difference a title makes! And the role titles play in disguising the truth of piracy, of slavery, of pillage and plunder! But perhaps more interesting is what some call ‘different perspectives on history’ but what may in reality be the whitewashing and rationalization that was fundamental to the creation of the British Empire, which passed through much charity and was justified on the grounds that regulations had not been defined at the time.
Any crypto enterprise is closer to privateering than to a Ponzi scheme, which in any case is only consensually defined as such post facto. Privateering, which is less discussed, is a break-things-to-go-fast approach: fast, before regulations catch up, as anticipated from the very beginning; a quasi-anarchic, ambitious, career-making project that ultimately wants to be in good society and pretend to the good, but will never shed its links to piracy.
I hadn’t heard that the arbitrage was illegal. Here’s a little more info for others: https://www.pymnts.com/cryptocurrency/2022/south-koreas-crypto-kimchi-premium-suspected-in-3-4b-fx-investigation/
Thank you for writing this.
What do you think you/we could have done better? What should you/we have done differently? My sense is that you were one of the key people whose job it was to ensure what Sam was doing was above board (I could be mistaken here). If that’s the case, I’m surprised by the lack of discussion about what should have happened differently.
Likewise, if there were no safeguards here, that seems startlingly naïve of us.
Will,
Like others, I appreciate your openness about your role in the FTX debacle, no matter how limited that role was, and your readiness to consider seriously how this impacts your philosophy from the ground up. I am a long-time lurker on this forum who has a lot of admiration for the commitment shown to their cause by EA advocates, but who also has a lot of disregard for the politics of EA, which I consider to be frankly naive and inadequate to the actual nature of the problems EA seeks to address.
With regard to the collapse of FTX, it seems that only a handful of people in the EA movement had a critique of wealth that would lead them to question whether the means by which Bankman-Fried acquired his wealth were entirely ethical, and (assuming one agreed that those means were ethically questionable) whether the willingness of Bankman-Fried and his colleagues to use those means to create their wealth might lead them to make further questionable decisions in future.
I would be interested to hear from you or other EA advocates on this forum whether recent events have caused you to question not just whether Sam Bankman-Fried’s behaviour in this instance was ethical, but more broadly whether the same events have caused any of you to reconsider your views about the ethical status of the super-wealthy in general, and what EA’s collective view of such individuals should be. I recognise that my politics are probably not shared by most (if any!) contributors to this forum, but now might be a good time to have that conversation.
Hello Will,
I commented this on Nathan’s post earlier; it is a best practice that the EA governance team may consider adopting...
All the best,
Miguel
I mentioned this in a response to another post, but has MacAskill ever discussed his visits to colleges looking for recruits?
And trying to point them in certain directions?
Copying my reply from above:
They’re probably talking about the Giving What We Can pledge here, which MacAskill co-founded. I don’t see why anyone would consider that controversial.
I see. So it would be the equivalent of MacAskill signing people up for a Giving What We Can pledge as part of High Impact Careers (which became 80,000 Hours).
That in itself isn’t controversial, but I’d be interested in knowing more about exactly how it works. The Sequoia article says that MacAskill himself suggested Jane Street. (I’ll note that the 80,000 Hours website mentions Jane Street but not MacAskill’s involvement).
What does that actually mean? Did he have a hand in helping SBF secure an interview or the gig? Did SBF pledge to donate to one of the organizations MacAskill was associated with?
It feels similar to a headhunting firm. I’m sure there are no finder’s fees for placing these positions, but I’d be interested in knowing if there’s any kind of arrangement, even if it’s mostly implicit, i.e., “We found you a good candidate, so please donate to our organization.”
And then the other big question: what is the pledge-pitching actually like? And does the person making the pledge get different treatment depending on what their pledge looks like? The amount? The organization? If someone pledges to one of MacAskill’s affiliated organizations, is that more likely to result in a placement?
80000hours.org dances around all of this a bit. They certainly don’t mention getting anything in return. But on a page where they list their “mistakes” there’s an entry saying that they didn’t focus enough on “high-value plan changes.” https://80000hours.org/about/credibility/evaluations/mistakes/?int_source=job-board#growth-of-high-value-plan-changes-slower-than-medium-value
Okay, well, what is that? It seems to be a phrase used internally, and barely shows up on the rest of the site. The closest we get to a definition is a post from 2016: https://forum.effectivealtruism.org/posts/WKkF36bJsH8FmYZkw/why-donate-to-80-000-hours
The phrase crops up in the “Funding target” section, which mentions that if they’re going to raise £1.7m, they’ll need to seek “the easiest, scalable way to get more high-value plan changes.” So it’s clear that they’re counting on some sort of money flowing in from the job-seeker, from the company in which they’re placed, or from both.
Now...
None of that is wrong, and it seems like an inventive (if slightly hidden?) system.
With that in mind, I do still think it would be good to know more about that original meeting and what happened after. How much of SBF’s Jane Street income was going to MacAskill’s organization, if any? I know that after he left, he went straight to the Centre for Effective Altruism, and that’s where he was during the successful arbitrage (although in some articles this fact is elided, and it’s described more as SBF bumming around trying to think of some good idea). Then he used the proceeds to found Alameda with Tara Mac Aulay, who was with CEA at the time and was later deeply involved in the Celsius fiasco.
Don’t use phrases like “self-hatred”. This is a massive opportunity to improve EA organisations, and you should have the self-confidence and enthusiasm to contribute to these improvements with alacrity.
Will, I highly recommend reading the books The Master and His Emissary and The Matter with Things, both written by the psychiatrist Iain McGilchrist. The books are rigorous, and have been praised by many of the world’s leading psychiatrists and experts in hemispheric specialization. These books could have told you that EA was destined to fail from the beginning. Here’s the blurb for the first of these books:
Reading these books makes it clear that the downfall of EA was predictable, has historical precedent, and has a clear neurological basis. The tools of EA (standardized epistemic rules, forecasting, and so on) cannot save us, because the EA community has made a point of consistently relying on the least reliable and least insightful hemisphere of our brains.
I know that this may sound wacky, but I really, really, really strongly recommend reading these books. They will bring everything into focus and make what went wrong look painfully obvious. Here is a short YouTube video summarizing the ideas in the first book, if you are interested (though the video is no substitute for actually reading the book).
And here is a talk by V.S. Ramachandran that discusses hemispheric specialization. Ramachandran is one of the world’s leading neuroscientists, if not the foremost (and, by the way, he speaks highly of McGilchrist’s work). Again, watching Ramachandran’s talk is no substitute for reading McGilchrist’s books, but it will be enough to show you just how wacky an over-reliance on the left hemisphere can get, which should be enough to at least pique your interest.
Best of luck, Will.