FTX FAQ
What is this?
I thought it would be useful to assemble an FAQ on the FTX situation.
This is not an official FAQ. I’m not writing this in any professional capacity.
This is definitely not legal or financial advice or anything like that.
Please let me know if anything is wrong/unclear/misleading.
Please suggest questions and/or answers in the comments.
What is FTX?
FTX is a Cryptocurrency Derivatives Exchange. It is now bankrupt.
Who is Sam Bankman-Fried (SBF)?
The founder of FTX. He was recently a billionaire and the richest person under 30.
How is FTX connected to effective altruism?
In the last couple of years, effective altruism received millions of dollars of funding from SBF and FTX via the Future Fund.
SBF was following a strategy of “make tons of money to give it to charity.” This is called “earning to give”, an idea that EA spread in the early-to-mid 2010s. SBF was definitely encouraged onto his current path by engaging with EA.
SBF was something of a “golden boy” to EA. For example, this.
How did FTX go bankrupt?
FTX gambled with user deposits rather than keeping them in reserve.
Binance, a competitor, triggered a bank run in which depositors attempted to get their money out.
It looked like Binance was going to acquire FTX at one point, but they pulled out after due diligence.
How bad is this?
“It is therefore very likely to lead to the loss of deposits which will hurt the lives of 10,000s of people eg here” (Source)
Also:
Does EA still have funding?
Yes.
Before FTX there was Open Philanthropy (OP), which has billions in funding from Dustin Moskovitz and Cari Tuna. None of this is connected to FTX.
Is Open Philanthropy funding affected?
Global health and wellbeing funding will continue as normal.
Because the total pool of funding to longtermism has shrunk, Open Philanthropy will have to raise the bar on longtermist grant making.
Thus, Open Philanthropy is “pausing most new longtermist funding commitments” (longtermism includes AI, Biosecurity, and Community Growth) for a couple of months to recalibrate.
How much of EA’s money came from FTX Future Fund?
As per the post Historical EA funding data from August 2022, the estimate for 2022 was:
Total EA funds: 741 M$
FTX Future Fund contribution: 262 M$ (35%)
(This Q&A contributed by Miguel Lima Medín.)
If you got money from FTX, do you have to give it back?
“If you received money from an FTX entity in the debtor group anytime on or after approximately August 11, 2022, the bankruptcy process will probably ask you, at some point, to pay all or part of that money back.”
“you will receive formal notice and have an opportunity to make your own case”
“this process is likely to take months to unfold” and is “going to be a multi-year legal process”
If this affects you, please read this post from Open Philanthropy. They also made an explainer document on clawbacks.
What if you’ve already spent money from FTX?
It’s still possible that you may have to give it back.
If you got money from FTX, should you give it back?
You probably shouldn’t, at least for the moment. If you return the money outside the proper legal channels, you may end up having to pay it back twice.
If you got money from FTX, should you spend it?
Probably not. At least for the next few days. You may have to give it back.
I feel bad about having FTX money.
Reading this may help.
“It’s fine to be somebody who sells utilons for money, just like utilities sell electricity for money.”
“You are not obligated to return funding that got to you ultimately by way of FTX; especially if it’s been given for a service you already rendered, any more than the electrical utility ought to return FTX’s money that’s already been spent on electricity”
What if I’m still expecting FTX money?
The FTX Future Fund team has all resigned, but “grantees may email grantee-reachout@googlegroups.com.”
“To the extent you have a binding agreement entitling you to receive funds, you may qualify as an unsecured creditor in the bankruptcy action. This means that you could have a claim for payment against the debtor’s estate” (source)
I needed my FTX money to pay the rent!
Nonlinear has stepped in to help: “If you are a Future Fund grantee and <$10,000 of bridge funding would be of substantial help to you, fill out this short form (<10 mins) and we’ll get back to you ASAP.”
I have money and would like to help rescue EA projects that have lost funding.
You can contribute to the Nonlinear emergency fund mentioned above.
“If you’re a funder and would like to help, please reach out: katwoods@nonlinear.org”
How can I get support/help (for mental health, advice, etc)?
“Here’s the best place to reach us if you’d like to talk. I know a form isn’t the warmest, but a real person will get back to you soon.” (source)
Some mental health advice here.
Someone set up a support network for the FTX situation.
This table lists people you can contact for free help. It includes experienced mental health supporters and EA-informed coaches and therapists.
This Slack channel is for discussing your issues, and getting support from the trained helpers as well as peers.
How are people reacting?
Will MacAskill: “If there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.”
Rob Wiblin: “I am ******* appalled. [...] FTX leaders also betrayed investors, staff, collaborators, and the groups working to reduce suffering and the risk of future tragedies that they committed to help.”
Holden Karnofsky: “I dislike “end justify the means”-type reasoning. The version of effective altruism I subscribe to is about being a good citizen, while ambitiously working toward a better world. As I wrote previously, I think effective altruism works best with a strong dose of pluralism and moderation.”
Evan Hubinger: “We must be very clear: fraud in the service of effective altruism is unacceptable”
Was this avoidable?
Nathan Young notes a few red flags in retrospect.
But on the other hand:
Did leaders in EA know about this?
Will there be an investigation into whether EA leadership knew about this?
Tyrone-Jay Barugh has suggested this, and Max Dalton (leader of CEA) says “this is something we’re already exploring, but we are not in a position to say anything just yet.”
Why didn’t the EA criticism competition reveal this?
(This question was posed by Matthew Yglesias)
Bruce points out that there were a number of relevant criticisms which questioned the role of FTX in EA (eg). However, there was no good system in place to turn this into meaningful change.
Does the end justify the means?
Many people are reiterating that EA values go against doing bad stuff for the greater good.
Will MacAskill compiled a list of times that prominent EAs have emphasised the importance of integrity over the last few years.
Several people have pointed to this post by Eliezer Yudkowsky, in which he makes a compelling case for the rule “Do not cheat to seize power even when it would provide a net benefit.”
Did Qualy the lightbulb break character?
Yes.
Nitpick: the sources for “SBF is bankrupt” and for “SBF will likely be convicted of a felony” lead to Nathan’s post, which I do not believe says either, and which has hundreds of comments, so if it’s one of them maybe link to that?
For the first of those, while I know FTX filed for bankruptcy (and had their assets frozen by the Bahamas), I haven’t heard about SBF personally filing for bankruptcy (which doesn’t rule it out, of course).
For the second, it is probably premature to assume he’ll be convicted? This depends not only on what he did, but also on the laws and the authorities in several countries. On the other hand, I’m not actually knowledgeable about this, and here is a source claiming that he will probably go to jail.
Thanks!
You are right about SBF’s personal bankruptcy. I was confused.
The felony conviction comes from a Manifold market embedded within the Nathan post. I have added a link directly to Manifold to make this clearer.
Thanks for this; it’s a nicely compact summary of a really messy situation that I can quickly share if necessary.
FYI: this post has been linked to by this article on Semafor, with the description:
I don’t think you’re from CEA… Right?
Semafor has corrected the article.
The “correction” is:
Which is of course patently false. What does it mean for a forum to put together an FAQ? Have Semafor ever used a forum before?
If you interpret “the Effective Altruism Forum” as a metonym for “the people who use the forum”, then it is true (like how you can say “Twitter is going nuts over this”).
It’s weird, but I don’t see any reason to make a fuss about it.
If someone says “Twitter is going nuts over this” and I learned the source was one Tweet, I’d consider what they said to be pretty inaccurate. (There is a bit of nuance here since your post is highly upvoted and Twitter has more users than EAF, but I would also think “EA Twitter is going nuts” over one highly liked Tweet by an EA to be a severe exaggeration).
Similarly, this FAQ was neither a) put together by the EAF team, nor b) crowdsourced from a bunch of users.
I expect most people reading this to think of this FAQ as substantially more official than what your own caveats at the top of the page said.
The Centre for Effective Altruism has had to deal with a lot of questions about Bankman-Fried since FTX’s collapse. Here’s an FAQ Hamish Doodles, a user on the Effective Altruism Forum, put together in a personal capacity.
I gather that you think it’s an issue worth correcting? Feel free to suggest a more correct phrasing for Semafor and I’ll pass it on.
Ah, thanks. You are right.
I have tweeted them a correction.
I got funding approved from FTX (but they didn’t transfer any money to me). Will anyone else look at my funding application and consider funding it instead?
My guess is that there are a lot of people in that situation (including people who already made important and irreversible decisions based on that funding existing) and these opportunities are being desperately triaged. We don’t have any official word on this though.
Do you have urgent need for word on the funding? If not, I’d recommend waiting a few weeks and enquiring again once things have calmed down.
Waiting a few weeks would be a big problem for me (I estimate my co-founder might leave in the meantime)
(Still, thank you for your reply)
For you personally, and for some other people, the Long Term Future Fund might be a good idea if you haven’t tried them yet.
Have you tried grantee-reachout@googlegroups.com?
Yes, I didn’t get any reply yet (emailed them yesterday)
Thanks for the suggestion though
There are now people who are trying to help those in your situation. https://forum.effectivealtruism.org/posts/L4S2NCysoJxgCBuB6/announcing-nonlinear-emergency-funding
Thanks,
I feel this is less of an “emergency funding” situation and more of “trying to quickly understand if we can get normal funding”, since the worst case here is going back to “normal” jobs, and not, as the Nonlinear post says, “facing an existential financial crisis”.
Still, thank you for the recommendation.
nitpick: It’s Cari Tuna, not Kari Tuna
Thanks! Now we need to hide the evidence to avert EA having no billionaire funders.
And certainly not Carry Tuna, because carrying tuna goes against our animal welfare ideals.
Nitpick: It is actually appropriate to carry tuna, but only if you are carrying distressed tuna to safety
Other possible question for this FAQ:
How much of EA’s money came from FTX Future Fund?
As per the post Historical EA funding data from August 2022, the estimation for 2022 was:
* Total EA funds: 741 M$
* FTX Future Fund contribution: 262 M$ (35%)
If anyone has more up to date analysis, or better data, please report it.
And many thanks for the very clear and useful summary. Well done.
Thanks! I have added your contribution here.
Expanding on what Nathan Young said about the dangers of wealthy celebrities mentioning Effective Altruism, I am wondering if it’s in EA’s best interest to certify donors and spokespeople before they mention EA. The term “effective altruism” itself is ambiguous, and having figures such as Musk or FTX use their own definitions without going through the rigor of studying the established definition only makes the problem worse. With certification (one that needs to be renewed annually, I must add), it ensures that there’s agreement between well-known figures and the EA community that they are in alignment with what EA really means. It also adds accountability to their pledges and donations.
It seems like this only works for people who want to be aligned with EA but are unsure if they’re understanding the ideas correctly. This does not seem to apply to Elon Musk (I doubt he identifies as EA, and he would almost certainly simply ignore this certification and tweet whatever he likes) or SBF (I am quite confident he could have easily passed such a certification if he wanted to).
Can you identify any high-profile individuals right now who think they understand EA but don’t, who would willingly go through a certification like this and thus make more accurate claims about EA in the future?
Thank you for providing this FAQ. Maybe you want to add this:
A support network has been set up for people going through a rough patch due to the FTX situation.
In this table, you can find the experienced mental health supporters to talk to.
They want to help (for free), and you can just contact them. The community health team, as well as some EA-informed coaches and therapists, are listed already.
You can join the new Support Slack here.
People can share and discuss their issues, and get support from the trained helpers as well as peers.
Thanks! I have added this to the “Where can I get help” section.
Re: do the ends justify the means?
It is wholly unsurprising that public-facing EAs are currently denying that ends justify means. Because they are in damage control mode. They are trying to tame the onslaught of negative PR that EA is now getting. So even if they thought that the ends did justify the means, they would probably lie about it. Because the ends (better PR) would justify the means (lying). So we cannot simply take these people at their word. Because whatever they truly believe, we should expect their answers to be the same.
Let’s think for ourselves, then. Would utilitarianism ever justify making high-stakes, high-reward bets? Yes, of course. Could that be what SBF was doing? Quite possibly. Because a double-or-nothing coin-flip scales; it doesn’t stop having high EV when we start dealing with big bucks. So perhaps SBF was simply being a good utilitarian and did whatever had the highest value in expectation. Only this time he landed on the ‘nothing’ side of the coin. So far, nothing we know rules this out. Because, though what he did was risky, the rewards were also quite high.
So we cannot assume that SBF was being a bad, or ‘naive’, utilitarian. Because it could instead be the case that SBF was a perfect utilitarian, but utilitarianism is wrong and so perfect utilitarians are bad people. Because utility and integrity are wholly independent variables, so there is no reason for us to assume a priori that they will always correlate perfectly. So if we wish to believe that integrity and expected value correlated for SBF, then we must show it. We must actually do the math. Crunch the numbers for yourself. Don’t rely on thought leaders.
By doing this, it becomes clear that SBF’s actions were very possibly if not probably caused by his utilitarian-minded EV reasoning. Anyone who wishes to deny this can convince me by crunching the numbers and proving me wrong mathematically.
Risky bets aren’t themselves objectionable in the way that fraud is, but to just address this point narrowly: realistic estimates put risky bets at much worse EV when you control a large fraction of the altruistic pool of money. I think a decent first approximation is that EA’s impact scales with the logarithm of its wealth. If you’re gambling a small amount of money, that means you should be ~indifferent to a 50/50 double-or-nothing bet (note that even in this case it doesn’t have positive EV). But if you’re gambling with the majority of the wealth that’s predictably committed to EA causes, you should be much more scared about risky bets.
(Also in this case the downside isn’t “nothing” — it’s much worse.)
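To make the log-utility point concrete, here is a minimal sketch in Python (the pool size and the fractions at risk are illustrative numbers of my own choosing, not figures from anyone in this thread):

```python
import math

def expected_log_wealth_change(wealth: float, fraction_at_risk: float) -> float:
    """Expected change in log(wealth) from a 50/50 double-or-nothing bet
    on `fraction_at_risk` of total wealth, under the parent comment's
    assumption that impact scales with the logarithm of wealth."""
    win = wealth * (1 + fraction_at_risk)
    lose = wealth * (1 - fraction_at_risk)
    if lose <= 0:
        return float("-inf")  # losing the whole pool is unboundedly bad under log utility
    return 0.5 * math.log(win) + 0.5 * math.log(lose) - math.log(wealth)

# Hypothetical $10B altruistic pool, purely for illustration.
pool = 10e9
for frac in (0.001, 0.1, 0.5, 1.0):
    change = expected_log_wealth_change(pool, frac)
    print(f"risking {frac:.1%} of the pool: E[change in log-wealth] = {change:.6f}")
```

Under these assumptions the expected log-wealth change is slightly negative even for tiny stakes (hence “~indifferent”), and it falls off sharply, becoming unboundedly negative, as the fraction of the pool at risk approaches 100%.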
I think marginal returns probably don’t diminish nearly as quickly as the logarithm for neartermist cause areas, but maybe that’s true for longtermist ones (where FTX/Alameda and associates were disproportionately donating), although my impression is that there’s no consensus on this, e.g. 80,000 Hours has been arguing for donations still being very valuable.
(I agree that the downside (damage to the EA community and trust in EAs) is worse than nothing relative to the funds being gambled, but that doesn’t really affect the spirit of the argument. It’s very easy to underappreciate the downside in practice, though.)
I’d actually guess that longtermism diminishes faster than logarithmic, given how much funders have historically struggled to find good funding opportunities.
Global poverty probably has slower diminishing marginal returns, yeah. Unsure about animal welfare. I was mostly thinking about longtermist causes.
Re 80,000 Hours: I don’t know exactly what they’ve argued, but I think “very valuable” is compatible with logarithmic returns. There are also diminishing marginal returns to direct workers in any given cause, so logarithmic returns on money doesn’t mean that money becomes unimportant compared to people, or anything like that.
(I didn’t vote on your comment.)
Here’s Ben Todd’s post on the topic from last November:
Despite billions of extra funding, small donors can still have a significant impact
I’d especially recommend this part from section 1:
So he thought the marginal cost-effectiveness hadn’t changed much while funding had dramatically increased within longtermism over these years. I suppose it’s possible marginal returns diminish quickly within each year, even if funding is growing quickly over time, though, as long as the capacity to absorb funds at similar cost-effectiveness grows with it.
Personally, I’d guess funding students’ university programs is much less cost-effective on the margin, because of the distribution of research talent, students should already be fully funded if they have a decent shot of contributing, the best researchers will already be fully funded without many non-research duties (like being a teaching assistant), and other promising researchers can get internships at AI labs both for valuable experience (80,000 Hours recommends this as a career path!) and to cover their expenses.
I also got the impression that the Future Fund’s bar was much lower, but I think this was after Ben Todd’s post.
Caroline Ellison literally says this in a blog post:
“If you abstract away the financial details there’s also a question of like, what your utility function is. Is it infinitely good to do double-or-nothing coin flips forever? Well, sort of, because your upside is unbounded and your downside is bounded at your entire net worth. But most people don’t do this, because their utility is more like a function of their log wealth or something and they really don’t want to lose all of their money. (Of course those people are lame and not EAs; this blog endorses double-or-nothing coin flips and high leverage.)”
So no, I don’t think anyone can deny this.
Link?
https://at.tumblr.com/worldoptimization/slatestarscratchpad-all-right-more-really-stupid/8ob0z57u66zr
EDIT: The tumblr has been taken down.
EDIT #2: Someone archived it: https://web.archive.org/web/20210625103706/https://worldoptimization.tumblr.com/
That link doesn’t work for me. Do you have another one, or has it been taken down?
It looks like the tumblr was actually deleted, unfortunately. I spent quite a bit of time going through it last night because I saw screenshots of it going around.
Hey @Lin BL, someone archived it! I just found this link:
https://web.archive.org/web/20210625103706/https://worldoptimization.tumblr.com/
This feels a bit unfair when people (i) have argued that utility and integrity will correlate strongly in practical cases (why use “perfectly” as your bar?), and (ii) that they will do so in ways that will be easy to underestimate if you just “do the math”.
You might think they’re mistaken, but some of the arguments do specifically talk about why the “assume 0 correlation and do the math”-approach works poorly, so if you disagree it’d be nice if you addressed that directly.
Utility and integrity coming apart, and in particular deception for gain, is one of the central concerns of AI safety. Shouldn’t we similarly be worried at the extremes even in human consequentialists?
It is somewhat disanalogous, though, because
We don’t expect one small group of humans to have so much power without the need to cooperate with others, like might be the case for an AGI taking over. Furthermore, the FTX/Alameda leaders had goals that were fairly aligned with a much larger community (the EA community), whose work they’ve just made harder.
Humans tend to inherently value integrity, including consequentialists. However, this could actually be a bias among consequentialists that consequentialists should seek to abandon, if we think integrity and utility should come apart at the extremes and we should go for the extremes.
(EDIT) Humans are more limited cognitively than AGIs, and are less likely to identify net positive deceptive acts and more likely to identify net negative ones than AGIs.
EDIT: On the other hand, maybe we shouldn’t trust utilitarians with AGIs aligned with their own values, either.
Assuming zero correlation between two variables is standard practice. Because for any given set of two variables, it is very likely that they do not correlate. Anyone that wants to disagree must crunch the numbers and disprove it. That’s just how math works.
And if we want to treat ethics like math, then we need to actually do some math. We can’t have our cake and eat it too.
I’m not sure how literally you mean “disprove”, but at its face, “assume nothing is related to anything until you have proven otherwise” is a reasoning procedure that will never recommend any action in the real world, because we never get that kind of certainty. When humans try to achieve results in the real world, heuristics, informal arguments, and looking at what seems to have worked OK in the past are unavoidable.
I am talking about math. In math, we can at least demonstrate things for certain (and prove things for certain, too, though that is admittedly not what I am talking about).
But the point is that we should at least be able to bust out our calculators and crunch the numbers. We might not know if these numbers apply to the real world. That’s fine. But at least we have the numbers. And that counts for something.
For example, we can know roughly how much wealth SBF was gambling. We can give that a range. We can also estimate how much risk he was taking on. We can give that a range too. Then we can calculate whether the risk he took on had net positive expected value.
It’s possible that it only has positive expected value above a certain level of risk, or whatever. Perhaps we do not know whether he faced this risk. That is fine. But we can still, at any rate, see under what circumstances SBF would have been rational, acting on utilitarian grounds, to do what he did.
If these circumstances sound like they do or could describe the circumstances that SBF was in earlier this week, then that should give us reason to pause.
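For what it’s worth, the kind of back-of-the-envelope calculation being asked for might look like the sketch below (Python; every input is a placeholder for the reader’s own estimates, not an estimate of the actual FTX situation):

```python
def gamble_expected_value(p_blowup: float,
                          upside: float,
                          direct_losses: float,
                          harm_to_movement: float) -> float:
    """Crude expected value of taking a risky bet rather than not taking it.

    All inputs are placeholders to be filled in with one's own estimates:
      p_blowup         -- probability the bet blows up / the fraud is exposed
      upside           -- value created (e.g. donations enabled) if it pays off
      direct_losses    -- direct harm (e.g. customer deposits lost) if it blows up
      harm_to_movement -- indirect damage: lost trust, reputation, future funding
    """
    return (1 - p_blowup) * upside - p_blowup * (direct_losses + harm_to_movement)

# Made-up inputs, purely to show how the arithmetic works.
ev = gamble_expected_value(p_blowup=0.3,
                           upside=20e9,
                           direct_losses=8e9,
                           harm_to_movement=10e9)
print(f"EV under these made-up inputs: ${ev / 1e9:.1f}B")  # 8.6B with these numbers
```

The sign of the answer turns almost entirely on inputs like p_blowup and harm_to_movement that are very hard to estimate, which is part of what other commenters in this thread are arguing.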
Fair.
TBH, this has put me off of utilitarianism somewhat. Those silly textbook counter-examples to utilitarianism don’t look quite so silly now.
Except the textbook literally warns about this sort of thing:
Again, warnings against naive utilitarianism have been central to utilitarian philosophy right from the start. If I could sear just one sentence into the brains of everyone thinking about utilitarianism right now, it would be this: If your conception of utilitarianism renders it *predictably* harmful, then you’re thinking about it wrong.
There’s a case that such distinctions are too complex for a not insignificant proportion of the public, and therefore utilitarianism should not be promoted at all to a larger audience, since all the textbooks filled with nuanced discussion will collapse to a simple heuristic in the minds of some, such as ‘the ends justifying the means’ (which is obviously false).
I don’t think we should be dishonest. Given the strong case for utilitarianism in theory, I think it’s important to be clear that it doesn’t justify criminal or other crazy reckless behaviour in practice. Anyone sophisticated enough to be following these discussions in the first place should be capable of grasping this point.
If you just mean that we shouldn’t promote context-free, easily-misunderstood utilitarian slogans in superbowl ads or the like, then sure, I think that goes without saying.
It’s quite evident people do follow discussions on utilitarianism but fail to understand the importance of integrity in a utilitarian framework, especially if one is unfamiliar with Kant. If the public finds SBF’s system of moral beliefs to blame for his actions, it will most likely be for being too utilitarian rather than not being utilitarian enough – a misunderstanding which will be difficult to correct.
Are you disagreeing with something I’ve said? I’m not seeing the connection. (I obviously agree that many people currently misunderstand utilitarianism, or I wouldn’t spend my time trying to correct those misunderstandings.)
Why should we trust you? You’re a known utilitarian philosopher. You could be lying to us right now to rehabilitate EA’s image. That’s what a utilitarian would do, after all. And you have not provided any arguments for this that are even remotely convincing, neither here nor in your post on the topic.
What are you using to justify these conclusions? EV? Is it an empirical claim? How do you know? What kind of justification are you using? And can you show us your justification? Can you show us the EV calculus? Or, if it’s empirical, then can you show us the evidence? No? So far I am seeing no arguments from you. Just assertions.
Really? SBF seemed pretty sophisticated. But he didn’t get the point. So maybe it’s time to update your “empirical” argument against utilitarianism being self-effacing, then.
Yeah… I don’t think Publius said that. Maybe stop misrepresenting the views of people who disagree with you. You seem to do that a lot.
Do you talk like that to your students?
As a moderator, I think some elements of this and previous comments break Forum norms. Specifically, unsubstantiated accusations of lying or misrepresentation and phrases like “when has a utilitarian ever cared about common sense” are unnecessarily rude and do not reflect a generous and collaborative mindset.
We want to be clear that this comment is in response to the tone and approach, not the stance taken by the commenter. As a moderator team we believe it’s really important to be able to discuss all perspectives on the situation with an open mind and without censoring any perspectives.
We strongly encourage all users to approach discussions in good faith, especially when disagreeing—attacking the character of an author rather than the substance of their arguments is discouraged. This is a warning, please do better in the future.
Was anything I said an “unsubstantiated accusation of lying”?
No. Perhaps it was an accusation. But it was not unsubstantiated. It was substantiated. Because I provided a straightforward argument as to why utilitarians cannot be trusted in this situation.
If you disagree with the conclusion of this argument, that’s fine. But the proper response to that is to explain why you think the argument is unsound. Not to use your mod powers.
So, then, let me ask you: why do you think this argument is unsound (assuming that you do)?
If you cannot answer this question, then you cannot honestly say that my “accusation” was unsubstantiated.
Something similar applies to my other question: “when has a utilitarian ever cared about common sense?” If you care to provide examples, I’d be happy to hear you out. Because that is why I asked the question.
But if you cannot find examples (and so do not like what the answer to my question may be), then I fail to see how that is my fault. Is asking critical questions “rude”? If yes, then quite frankly that reflects poorly on the “Forum norms”.
As does, by the way, the selective enforcement of these norms. I know that some moderators insist that enforcement of Forum norms has nothing to do with the offender’s point-of-view. But it does not take a PhD in critical analysis to see this as plainly false.
Since, as any impartial lurker on the forum could tell you, there are a handful of high-status dogmatists on here that consistently misrepresent the views of those that disagree with them; misrepresent expert consensus; and are rude, condescending, arrogant, and combative.
(Note: I am not naming names, here, so no accusation is being made. But you know who they are. And if you don’t, that speaks to the strength of the in-group bias endemic to EA.)
But I have yet to see any one of these individuals get a “warning” from a moderator. And no one who I’ve discussed this issue with has either. So, it is genuinely hard to believe that these norms are not being enforced selectively.
In fairness, sometimes the rules are necessary. I get that. You want to keep things civil, and fair enough. But it’s plainly obvious that the rules are often abused, too.
This cycle of abuse is as follows.
Someone disagrees with the predominant EA in-group thinking.
Said person voices their concern with said in-group thinking on the Forum.
Said person is met with character assassinations, misrepresentations and strawman arguments, ad hominems, and so on. This violates Forum norms, but these norms are not enforced.
Said person is not a saint. So, they respond to this onslaught of hostility with hostility in turn. This time, Forum norms are conveniently enforced.
Said person is now deemed to be arguing “in bad faith”.
Said person’s concerns (expressed in step 2) are now dismissed out of hand on account of the allegation that they were made in bad faith. So the relevant concerns expressed in step 2 go unaddressed. The echo-chamber intensifies. The Overton window narrows.
No one seems to clue into the fact that accusing someone of bad faith is, ironically enough, itself an ad hominem.
EAs continue to go on not knowing what they don’t know, and so thinking that they know everything.
Rinse and repeat for several years.
Hubris balloons to dangerously high levels.
FTX crashes.
And now we are here.
Note that steps 1-7 describe what happened to Emile Torres. Which is a shame, since many of the criticisms he expressed back in step 2 were, as it happens, correct (as, by now, should be obvious).
So perhaps if Torres hadn’t been banned, then we would have taken his concerns seriously. And perhaps if we took his concerns seriously, then none of this would have happened. Whoops. That’s a bad look, don’t you think?
So it’s worth noting, then, that the concerns I am forwarding here aren’t very different from the concerns that got Torres banned all those years ago. So, given what has since transpired, maybe it’s about time we take these concerns seriously. Because it was one thing to use mod powers to silence Torres when he made these critiques back then (please don’t play dumb, we both know it’s true). But to use mod powers to intimidate people for these same criticisms, even now, despite everything… that’s unconscionable.
I know you don’t like to hear that. But quite frankly, you need to hear it, because it’s true. I doubt that will be much comfort to you, though, so you’ll probably ban me for saying that. But once your power trip has ended, consider digging deep. Do some serious critical reflection. And then do better next time.
And I don’t mean, by the way, that you should do better as a moderator (though that is of course part of it). No. My request goes much deeper than this. I am requesting that you be better as a person. Be a better person than this. Be a better person than this.
Be honest with yourself. Have some integrity. Update your beliefs. And then accept your share of the responsibility for this mess.
But, most importantly: have some fucking shame.
Please.
It’s well overdue. Not just for you, but for all of us. Because we all contributed to this mess, in however minor a way.
Anyway. I think that’s everything I needed to say.
So, closing remarks: please don’t mistake my tough love for hostility. I understand that this is a tough time for everyone, and probably the mods especially. So, for that, I wish you all well. Genuinely. I really do wish you guys well. But, after the dust has settled, you all really need to think this stuff through. Reflect on what I said here. Really chew on it. Then do better going forward.
I referenced work to this effect from my decade-old PhD dissertation, along with published articles and books from prior utilitarians, none of which could possibly have been written with “rehabilitating EA’s image” in mind.
Randomly accusing people of lying is incredibly jerkish behaviour. I’ve been arguing for almost two decades now that utilitarianism calls for honest and straightforward behaviour. (And anyone who knows me IRL can vouch for my personal integrity.) You have zero basis for making these insulting accusations. Please desist.
My post on naive utilitarianism, like other academic literature on the topic (including, e.g., more drastic claims from Bernard Williams et al. that utilitarianism is outright self-effacing, or arguments by rule consequentialists like Brad Hooker), invokes common-sense empirical knowledge, drawing attention to the immense potential downside from reputational risks alongside other grounds for distrusting direct calculations as unreliable when they violate well-established moral rules.
Again, there’s a huge academic literature on this. You don’t have to trust me personally, I’m just trying to summarize some basic points.
What are you talking about? Publius referenced the idea that this may be “too complex for a not insignificant proportion of the public and therefore utilitarianism should not be promoted at all for a larger audience”. This could be interpreted in different (stronger or weaker) ways, depending on what one has in mind by “larger audiences”. My reply argued against a strong interpretation, and then indicated that I agreed with a weaker interpretation.
I’m not talking about your PhD dissertation.
So let’s restrict our scope to SBF’s decision-making within the past few years. It is an open question: were SBF’s decisions consistent with utilitarian-minded EV reasoning?
And we can start to answer this question. We can quantify the money he was dealing with, and his potential earnings. We can quantify the range of risk he was likely dealing with. We can provide a reasonable range as to the negative consequences of him getting caught. We can plug all these numbers into our EV calculus. It is the results of these equations that we are currently discussing.
So some vague and artificial thought experiments written a decade ago are not especially relevant. Not unless you happened to run these specific EV calculations in your PhD dissertation. But given the fact that you are a mere mortal and so cannot predict the future, I doubt that you did.
Your post is hardly “academic literature” (was it peer reviewed? Or just upvoted by many philosophically naive EAs?).
And it is common-sense empirical knowledge that SBF did what he did due to his utilitarianism + EV reasoning. It is currently only on this forum where this incredibly obvious fact is being seriously questioned.
And, besides, when has a utilitarian ever cared about common sense?
Do you think you represented your opponent’s view in the most charitable way possible? Do you think a superbowl commercial is a charitable example to be giving? Do you think that captures the essence of the critique? Or is it merely a cartoonish example, strategically chosen to make the critique look silly?
It’s not you personally. It’s utilitarians in general. Like I said in my original comment: it is wholly unsurprising that public-facing EAs are currently denying that ends justify means. Because they are in damage control mode. They are trying to tame the onslaught of negative PR that EA is now getting. So even if they thought that the ends did justify the means, they would probably lie about it. Because the ends (better PR) would justify the means (lying). So we cannot simply take these people at their word. Because whatever they truly believe, we should expect their answers to be the same.
So why should we have any reason to trust any utilitarian right now? And again, I am referring to this particular situation—pointing to defences of utilitarianism written in the 1970s is not especially relevant, since they did not account for SBF’s particular situation, which is what we are currently discussing.
As I’m sure you’ll find, it’s pretty difficult to provide any reason why we should trust a utilitarian’s views on the SBF debacle. Perhaps that’s a problem for utilitarianism. We can add it to the collection.
People believing utilitarianism could be predictably harmful, even if the theory actually says not to do the relevant harmful things. (Not endorsing this view: I think if you’ve actually spent time socially in academic philosophy, it is hard to believe that people who profess to be utilitarians are systematically more or less trustworthy than anyone else.)
As someone who has doubts about track record arguments for utilitarianism, I want to go on the record as saying I think that cuts both ways – that I don’t think SBF’s actions are a reason to think utilitarianism is false or bad (nor true or good).
Like, in order to evaluate a person’s actions morally we already need a moral theory in place. So the moral theory needs to be grounded in something else (like for example intuitions, human nature and reasoned argument).
Sure, it’s possible that misunderstandings of the theory could prove harmful. I think that’s a good reason to push back against those misunderstandings!
I’m not a fan of the “esoteric” reasoning that says we should hide the truth because people are too apt to misuse it. I grant it’s a conceptual possibility. But, in line with my general wariness of naive utilitarian reasoning, my priors strongly favour norms of openness and truth-seeking as the best way to ward off these problems.
Also note Sam’s own blog
Interesting, thanks. This quote from SBF’s blog is particularly revealing:
Here SBF seems to be going full throttle on his utilitarianism and EV reasoning. It’s worth noting that many prominent leaders in EA also argue for this sort of thing in their academic papers (their public facing work is usually more tame).
For example, here’s a quote from Nick Bostrom (head honcho at the Future of Humanity Institute). He writes:
That sentence is in the third paragraph.
Then you have Will MacAskill and Hilary Greaves saying stuff like:
This seems very different from Will’s recent tweets, where he denied that the ends justified the means (because, surely, if 100 dollars could save a trillion lives, then we’d be justified in stealing 100 dollars?)
Anyway. It seems like SBF took these arguments to heart. And here we are.
Note that from a utilitarian point of view, none of this really matters much. Here’s another quote from Nick Bostrom (section 2, first paragraph):
So if all wars and pandemics in human history are “mere ripples” from a utilitarian standpoint, then what does this FTX scandal amount to?
Probably not much. It is very bad, to be sure, but only because it is very bad PR. The fact that SBF committed massive financial fraud is not, in itself, of much consequence. So the people immediately affected by this are mere rounding errors on spreadsheets, from a utilitarian standpoint. So the expressions of remorse currently being given by EA leaders… are those real?
If these leaders take utilitarianism seriously, then probably not.
And when the leaders in EA claim to care, are they being honest? Is the apology tour genuine, or just an act?
To answer this, we need to think like a utilitarian. Why would a utilitarian care about a mere ripple? That makes no sense. But why would a utilitarian pretend to care about a mere ripple? Well, for good PR, of course. So we cannot take anything that any EA thought-leader says at face value. These people have not earned our trust.
And on that note: if the EA thought-leaders are lying to us, then this has serious implications for the movement. Because our goal here is to do the most good. And so far it seems like the utilitarianism that has infected the minds of EA elites is preventing us from doing that. Since the utilitarian vision of the good seems not so good after all.
So we need to seriously consider the possibility, then, that the biggest obstacle facing the EA movement is the current EA leadership.
And if that’s the case, then waiting on them to fix this mess from the top-down might be hopeless. Change needs to come from us, in spite of the leadership.
I’m not exactly sure how this could be done, but I know there has been some talk about democratizing the CEA and enacting whistleblower protections. I’m not sure how we should implement this, though.
Suggestions are welcome.
I think the quotes from Sam’s blog are very interesting
and are pretty strong evidence for the view that Sam’s thinking and actions were directly influenced by some EA ideas.

I think the thinking around EA leadership is way too premature and presumptive. There are many years (like a decade?) of EA leadership generally being actually good people and not liars. There are also explicit calls in “official” EA sources that specifically say that the ends do not justify the means in practice, honesty and integrity are important EA values, and pluralism and moral humility are important (which leads to not doing things that would transgress other reasonable moral views).
Most of the relevant documentation is linked in Will’s post.
Edit: After reading the full blog post, the quote is actually Sam presenting the argument that one can calculate which cause is highest priority, the rest be damned.
He goes on to say in the very next paragraph:
He concludes the post by stating that the multiplicative model, which he thinks is more likely, indicates that both reducing x-risk and improving the future are important.
There’s another post on that same page where he lists his donations for 2016, and they include donations to x-risk and meta EA orgs, as well as donations to global health and animal welfare orgs.
So nevermind, I don’t think those blog posts are positive evidence for Sam being influenced by EA ideas to think that present people don’t matter or that fraud is justified.
Ya, they aren’t really talking about the numbers, even though a utilitarian should probably accept instrumental harm to innocents for a large enough benefit, at least in theory. Maybe they distrust this logic so much in practice, possibly based on historical precedent like communism, that they endorse a general rule against it. But it would still be good to see some numbers.
I read that the Future Fund has granted something like $200 million already, and FTX/Alameda leadership invested probably something like half a billion dollars in Anthropic. And they were probably expecting to donate more. Presumably they didn’t expect to get caught or have a bank run, at least not this soon. Maybe they even expected that they could eventually make sure they had enough cash to cover all customer investments, so no customer would actually ever be harmed even in the case of a bank run (although they’d still be exposed to risks they were lied to about until then). Plausibly they underestimated the risk of getting caught, but maybe by their own lights, it’ll already have been worth it even with getting caught, as long as the EA community doesn’t pay it all back.
If our integrity, public trust/perception, lost potential EAs and ability to cooperate with others are worth this much, should we* just pay everything we got from FTX and associates back to FTX customers? And maybe more, for deterrence and the cases that don’t get caught?
*possibly our major funders, not individual grantees.
I think that’s part of why Will etc are giving lots of examples of things they said publicly before FTX exploded where they argued against this kind of reasoning.
I think there may be two separate actions to analyze here: the decision to take extreme risks with FTX/Alameda’s own assets to start with, and the decision to convert customer funds in an attempt to prevent Alameda, FTT, FTX, SBF, and the Future Fund from collapsing, in that order.
If that is true, it isn’t an answer to say SBF shouldn’t have been taking extreme risks with a huge fraction of EA-aligned money. At the time the fraud / no fraud decision was to be made, that may no longer have been an option.
So EA needs to be clear on whether SBF should have allowed his wealth / much of the EA Treasury to collapse rather than risk/convert customer funds, because that may have been the choice he was faced with a week ago.
One reaction when reading this is that you might be kind of eliding the difference between utilitarianism per se and expected value decision analysis.
Fair enough. I tried to explain that they were different in the comment section of another post, but was met with downvotes and whole walls of text trying to argue with me. So I’ve largely given up trying to make those distinctions clear on this forum. It’s too tiresome.
I believe the ‘walls of text’ that Adrian is referring to are mine. I’d just like to clarify that I was not trying to collapse the distinction between a decision procedure and the rightness criterion of utilitarianism. I was merely arguing that the concept of expected value can be used both to decide what action should be taken (at least in certain circumstances)[1] and whether an action is / was morally right (arguably in all circumstances) - indeed, this is a popular formulation of utilitarianism. I was also trying to point out that whether an action is good, ex ante, is not necessarily identical to whether the consequences of that action are good, ex post. If anyone wants more detail you can view my comments here.
Although usually other decision procedures, like following general rules, are more advisable, even if one maintains the same rightness criterion.
Hi @Hamish Doodle. My post cited here with regards to “If you got money from FTX, do you have to give it back?” and “If you got money from FTX, should you spend it?” was intended to inform people about not spending FTX money (if possible) until Molly’s announced EA forum post. She has now posted it here.
Could you please add/link that source? I believe the takeaway is similar, but it’s a much more informative post. The key section is:
Thanks!
I have incorporated information from Molly’s post.
https://www.theguardian.com/us-news/2022/dec/12/former-ftx-ceo-sam-bankman-fried-arrested-in-the-bahamas-local-authorities-say
How much taxpayer money was funnelled through Ukraine to FTX and back to American politicians?
Given that an insider just wired hundreds of millions of dollars out of FTX and SBF’s private jet just left for Argentina, I think we can at the very very least say that someone at FTX is acting immorally, and it may very well be SBF: video
UPDATE: While there are reports of it being SBF’s jet, others dispute it. It’s a developing story, but for now I will update my prediction from “probably SBF [is acting immorally]” to “it may very well be SBF [is acting immorally]”.
UPDATE 2: Sam Bankman-Fried and two former FTX associates are currently being detained by Bahamian authorities after allegedly trying to flee. (The story is still developing and Dubai does have an extradition agreement with the US, so take everything with a grain of salt.) If you want to downvote this comment for being down on FTX, fine. But me losing karma on months-old writings, hours after posting this, just seems like retaliation.
UPDATE 3: New FTX tokens worth about $380 million suddenly appear out of thin air. It seems very clear to me that there are immoral actors at FTX.
UPDATE 4: It looks like SBF admitted to fraud.
FWIW, news reports say SBF denies leaving the Bahamas. I guess we have to wait and see.
+1, the only evidence we have is that a private plane left the Bahamas and flew to Argentina yesterday. I don’t know the base rate of private planes leaving the Bahamas, but I see a lot of people jumping to conclusions on this.
Not a private plane, his private plane. And you’re ignoring the bigger piece of evidence, namely the $600 million that’s been wired out of FTX.
UPDATE: See above.
Evidence? This site says the plane is owned by Emes Air, whose parent company has a billionaire shareholder in the Bahamas (someone named Joe Lewis, not SBF) and is registered in Argentina.
Also even granting the money wired out was by SBF, not a hacker or someone else, I don’t see how that’s evidence he flew to Argentina.
A number of different sites are reporting it’s his jet. I guess we don’t know for certain since it’s a developing story but my claim is pretty strong:
I’m not saying someone wired money therefore someone flew to Argentina. I’m saying someone wired money and someone flew to Argentina. Just the wiring would be enough to conclude that someone is acting immorally; the flight is just an extra piece of evidence.
UPDATE: There are sources saying it’s SBF’s jet and others dispute it. I have updated my prediction in the top comment from ‘probably’ to ‘maybe’. I will upvote Brad for the additional source.
Those multiple different sites all point to the same tweet as their source, so I don’t think they provide any new information. In fact, those two sources you linked have the exact same text – the first is just a syndication of the second.