Sam Bankman-Fried, founder of the cryptocurrency exchange FTX, is a major donor to the Effective Altruism ecosystem and has pledged to eventually donate his entire fortune to causes aligned with Effective Altruism. By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, the Effective Altruism community is incentivized to accept political stances and moral judgments based on their alignment with the interests of its wealthy donors, rather than on a careful and rational examination of the quality and merits of these ideas. Yet the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought.
In practice, Sam Bankman-Fried has enjoyed highly favourable coverage from 80,000 Hours, an important actor in the Effective Altruism ecosystem. Given his donations to Effective Altruism, 80,000 Hours is, almost by definition, in a conflict of interest when it comes to communicating about Sam Bankman-Fried and his professional activities. This raises obvious questions regarding the trustworthiness of 80,000 Hours’ coverage of Sam Bankman-Fried and of topics linked to his interests (quantitative trading, cryptocurrency, the FTX firm…).
In this post, I argue that the Effective Altruism movement has failed to identify and publicize its own potential conflicts of interest. This failure reflects poorly on the standards the Effective Altruism movement holds itself to. I therefore invite outsiders and Effective Altruists alike to maintain a healthy level of skepticism when examining areas of the community’s discourse and action that are liable to be affected by incentives conflicting with its stated mission. These incentives are not only financial; they can also involve influence and prestige, or emerge from personal friendships and other social dynamics. The Effective Altruism movement is not above being influenced by such incentives, and it seems urgent that it act to minimize conflicts of interest.
(Note that this issue was commented on here a month ago.) This whole thing is now starting to look like the classic “ends justify the means” criticism of Utilitarianism writ large :(
I wrote that comment from over a month ago. And I actually followed it up with a more scathing comment that got downvoted a lot, and that I deleted out of a bit of cowardice, I suppose. But here’s the text:
Consider this bit from the origin story of FTX:

In 2019, he took some of the profits from Alameda and $8 million raised from a few smaller VC firms and launched FTX. He quickly sold a slice to Binance, the world’s biggest crypto exchange by volume, for about $70 million.
Binance, you say? This Binance?

During this period, Binance processed transactions totalling at least $2.35 billion stemming from hacks, investment frauds and illegal drug sales, Reuters calculated from an examination of court records, statements by law enforcement and blockchain data, compiled for the news agency by two blockchain analysis firms. Two industry experts reviewed the calculation and agreed with the estimate.
Separately, crypto researcher Chainalysis, hired by U.S. government agencies to track illegal flows, concluded in a 2020 report that Binance received criminal funds totalling $770 million in 2019 alone, more than any other crypto exchange. Binance CEO Changpeng Zhao accused Chainalysis on Twitter of “bad business etiquette.”
Or consider FTX’s hiring of Daniel Friedberg as a chief compliance officer. This article claims that he had been involved in previous cheating/fraud at other businesses:
Crypto’s ongoing addiction to the Tether stablecoin is nearly as alarming as the sector’s questionable embrace of lawyers linked to online gambling fraud. . . .
If one ever doubted the insincerity of SBF’s compliance commitment, . . . the company’s former GC, Daniel S. Friedberg, is now FTX’s new chief compliance officer, a role for which Friedberg is almost comically inappropriate. . . .
Friedberg’s presence on FTX’s payroll means Sam Bankman-Fried (SBF) either didn’t do his due diligence before hiring, or he knew of Friedberg’s past sins and didn’t care. Neither of these options paints Sam Bankman-Fried in an overly flattering light.”
Then there are all the recent examples of FTX trying to buy up other crypto players. For example, in July, FTX signed a deal to buy BlockFi for up to $240 million, and to give it $400 million in revolving credit. BlockFi is most famous for having agreed to pay $100 million in penalties for its securities fraud. It’s not clear why FTX would want to spend this amount of money on buying a fraudulent firm.
Just last week, there was a story that FTX is thinking about buying Celsius, another fraudulent firm.
Another story from July had the remarkable claim that SBF is even thinking of putting his own cash into bailing out other crypto firms:
On one or two occasions, Bankman-Fried, who made billions arbitraging cryptocurrency prices in Asia beginning in 2017, said he has used his own cash to backstop failing crypto companies when it didn’t make sense for FTX to do so.
“FTX has shareholders and we have a duty to do reasonable things by them and I certainly feel more comfortable incinerating my own money,” he said.
Why are FTX, and perhaps SBF himself, putting so much money into buying up other people’s scams? I would hope it’s because they intend to reform the crypto industry and put it on more of a moral footing, although that would reduce the market size by an order of magnitude or two.
***
At least, SBF and FTX ought to provide more transparency into where exactly all the wealth came from, and what (if anything) they are actively doing to prevent crypto frauds/scams. And one might argue that FTX Foundation has a particular moral duty to establish a fund to help out all of the people whose lives were ruined by falling for crypto’s many Ponzi schemes and other assorted scams.
Wow, I didn’t see it at the time but this was really well written and documented. I’m sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.
Moreover, Sven Rone is a pseudonym. The author used a pen name as their views were unpopular and underappreciated at the time; they likely feared career repercussions if they went public with them. It’s unfortunate that this was the environment they found themselves in.
I find myself having a mixed opinion of how EA responded. It wasn’t outright terrible epistemics, unlike most of the world reacting to a similar event, but there were real failures of epistemics.
On the other hand, there were also successes in EA epistemics.
I think the post ended up around 0 or 1 karma, is that right? (I mean before people changed their voting based on hindsight!) I think it’s important to distinguish between “got downvoted a lot but ended up at neutral karma” vs. “got downvoted double digits into no longer being visible.” The former reflects somewhat poorly on EA, the latter very poorly.
I think the most informative signal here is not the exact karma that comment ended up with but rather that the author ended up deleting it despite believing that what he was saying was potentially important and not receiving any reasons to think he was wrong. A culture where people feel compelled to silence themselves is worse than one where some comments are wrongly downvoted without much consequence to the author.
I think the most important data points here are any comments that were left, and the net karma of the comment. People have in fact been known to overreact, or react in idiosyncratic ways, in forum discussions; I haven’t seen the thread in question, but if the responses were friendly and the comment got ~0 net karma, then that would be a large update for me.
I definitely took “that got downvoted a lot” to mean that the comment got a lot of net downvotes, not just that people offset its upvotes to keep it around a neutral 0. I think it’s pretty bad to describe vote patterns that misleadingly, if it was hovering around 0.
No, I was talking about Stuart Buck’s initial comment in that same thread, which is still up and now has high upvotes.
But Stuart also mentioned he deleted a second comment after it got downvoted too, so that must be the one you’re linking to. (We also don’t know if some people retroactively upvoted the deleted comment; it’s at +6 now but could’ve been negative at the time of deletion. I think I’m still able to vote on the deleted comment – though maybe that’s just because I had already voted on it before it got deleted [strong upvote and weak agree vote]).
Either way it seems highly unlikely that the deleted comment I linked to had lots of negative votes. It had a few disagree votes but very likely not more than 1-2 karma downvotes.
I like how Hacker News hides comment scores. Seems to me that seeing a comment’s score before reading it makes it harder to form an independent impression.
I fairly frequently find myself thinking something like: “this comment seems fine/interesting and yet it’s got a bunch of downvotes; the downvoters must know something I don’t, so I shouldn’t upvote”. If others also reason this way, the net effect is herd behavior. What if I only saw a comment’s score after voting (or opting not to vote)?
Maybe quadratic voting could help, by encouraging everyone to focus their voting on self-perceived areas of expertise? Commenters should be trying to impress a narrow & sophisticated audience instead of a broad & shallow one?
EDIT: Another thought: if there were a way to see my recent votes, I could go back and reflect on them to ensure I’m voting consistently across threads.
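The quadratic-voting idea floated above can be sketched in a few lines (a toy model, not any forum's actual implementation): each voter gets a fixed budget of voice credits, and casting v votes on a single item costs v² credits, so influence concentrates only where a voter cares enough to pay the quadratic premium.

```python
import math

def qv_cost(votes: int) -> int:
    # Quadratic voting: casting v votes on one item costs v**2 voice credits.
    return votes * votes

def max_votes_on_one_item(budget: int) -> int:
    # The most a voter can pile onto a single item grows only as the
    # square root of their credit budget.
    return math.isqrt(budget)

# With 100 credits a voter can cast 1 vote on each of 100 comments,
# but at most 10 votes on any single comment, nudging them to spend
# heavily only on topics where they think they have real expertise.
```

The design choice this illustrates: each marginal vote gets progressively more expensive (the v-th vote costs 2v−1 extra credits), which is what would discourage cheap pile-ons by voters with only a weak opinion.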
I think that what FTX is accused of in this comment is legitimately way more something where a charitable recipient is not morally obliged to demand this level of careful checking of everything, because our civilization is just not actually able to support this level of competency pornography.
Stealing your customers’ funds is a very different matter from “some of the people who use our services are criminals”. Why, MIRI has in the past accepted matching funds from Google, which I’m sure profits a whole lot off criminals using their services! And some of those criminals may even be bad people!
But you can’t, actually, run a post-agricultural civilization on the principle of everybody who engages in every transaction checking out the full moral character of everybody who transacts with them. If you did try to build clever infrastructure for that, its first use on the margin would be by the right to hunt down sex workers (as already occurs with Visa) and by the left to hunt down people who said bad things on Twitter.
In a hunter-gatherer tribe it maybe makes sense to demand that people not transact with that bad guy over there; it scales as far as it needs to scale. And MIRI would not take money from somebody who we knew had stolen in charity’s name. But to figure it all out—if you want to read about Civilizations that have the basic infrastructure and competence to run those kinds of traces, go read science fiction; here on Earth you’ve got VC firms trying to run six months of due diligence and then investing in FTX.
IMO the amount of diligence someone ought to perform on their counterparties’ character is different in different circumstances. “This person is one of hundreds of people I transact with every week” carries different obligations than “This person is one of the four big donors who fund my organization” carries different obligations than “This person has been my only source of income for the past two years”. Different EAs were at different points along this spectrum.
Quoting from the article you linked about the involvement of Daniel Friedberg, FTX’s Chief Regulatory Officer, in a previous scandal:
In 2008, online poker site Ultimate Bet (UB) publicly confirmed rumors that certain individuals had utilized a little-known feature of the site’s software to view players’ hole cards during hands. This so-called ‘god mode’ allowed a number of ‘super users’ to cheat opponents out of tens of millions in poker winnings. The site’s operators begrudgingly paid out a few million to the loudest complainers and folded the site’s operations into a sister site (which was dealing with its own scandals).
In 2013, an audio recording surfaced that made mincemeat of UB’s original version of events. The recording of an early 2008 meeting with the principal cheater (Russ Hamilton) features Daniel S. Friedberg actively conspiring with the other principals in attendance to (a) publicly obfuscate the source of the cheating, (b) minimize the amount of restitution made to players, and (c) force shareholders to shoulder most of the bill.
On the tape, Daniel S. Friedberg tells Hamilton that he doesn’t want news of the cheating scandal to get out, but if it must, the “ideal thing” would be for the public to be told that a “former consultant to the company, uh, took advantage of a server flaw by hacking into the [software] client.” Friedberg advises Hamilton to publicly claim that he was among the victims of this cheating, “otherwise [the cover story’s] not going to fly.”
Regarding how many millions the site would have to cough up—both in returns to players and regulatory penalties—Friedberg says “if we could get it down to five, I’d be happy.” This is despite Friedberg knowing the real sum owed was many multiples of that number. Friedberg later says that achieving this $5 million target is possible, “depending how creative we get.”
Friedberg also emphasizes the need to shift responsibility for the payout to Excapsa, the holding company that owned UB’s software during the period in which some of the cheating took place. Friedberg discusses naming an Excapsa employee as having prior knowledge of the cheating, because “in order to get to Excapsa’s money legally you almost have to show fraud.”
“They tell you to do your thing but they don’t mean it. They don’t want you to do your own thing, not unless it happens to be their thing, too. It’s a laugh, Goober, a fake. Don’t disturb the universe, Goober, no matter what the posters say.”—Robert Cormier, the Chocolate War
Yes, we should. People hesitate or are averse to bringing issues up with authorities/communities for fear of being punished. As groups collectivize and become increasingly memetically homogeneous, which coincides with the solidification of power/influence/financial structures and hierarchies, dissent of any form becomes less and less tolerated. It becomes safer/easier to criticize EA as an outsider than as a member who simultaneously wants to grow in EA, be well received by potential EA organization employers, and rise up the often unstated hierarchies that developed as EA blossomed.
Until this debacle, SBF was lionized beyond comparison by the major community organizations. And moreover, he was closely associated with EA giants via the foundation/future fund and other projects. He had excellent PR presence due to the constant EA affiliated media attention. He was 80k’s paragon of earning to give.
That’s not to say figures like him were untouchable (nothing in EA is untouchable, fortunately), but criticizing the most popular embodiment of success would result in online backlash at best, or at worst damage to the critic’s career capital. That is precisely why, in a situation similar to Stuart’s, Sven’s essay on conflicts of interest in EA was anonymous. It’s also why it didn’t even get an honorable mention in the essay competition. Even if the criticisms themselves were valid and justified, the PR risks of promoting dissent made sure it wasn’t given a prize. Demands for greater transparency or accountability from EA vanguards in the wake of recent developments may also be viewed instinctively or intuitively as threats to harmony.
Not everyone enjoys having beloved paragons and prophets criticized. Not everyone likes having their faith or trust in institutions shattered, let alone challenged. Not everyone maintains a cynical, skeptical attitude towards those in positions of authority. EA training certainly doesn’t prepare newcomers for such developments, perhaps because such events are never expected to come up in the first place.
It remains a problem the community has faced since day one, although much of it is attributable to hierarchical and tribalistic human psychology rather than to EA itself. While EA has better epistemics and remains more open to criticism than the average ideological movement, harshness or cynical sternness used to be, in EA’s early days, much more commonplace and welcome than it is now. As EA has grown and become more of a community, intra-group harmony and cohesion became increasingly prized and promoted. Those who elicit controversy through intellectual dissent (rather than conforming) are more likely to be downvoted.
Spouting off this stuff isn’t productive on my end. I don’t have a solution, but there need to be better ways to increase receptiveness towards contrarian/unpopular takes and to minimize unjustified repercussions for dissenters. Those who are harshest or most skeptical among EAs should not be dismissed as impediments to progress. I have faith that EA has the capacity to ameliorate this.
By the way, it looks like the comment is now heavily upvoted. I’ve seen this happen quite a few times, so it seems like it might be good to withhold judgment about the net votes for a day or two. But of course it could be that it became highly upvoted because of reactions like this, so I’m not sure what the best course of action is.
I don’t follow crypto or its space, but it seems like a bad habit or norm to downvote pieces that criticize EA’s questionable relationship to crypto.
Cryptocurrency doesn’t actually work, and only is there for scams and fraud. Not surprising that FTX collapsed.
I think you may be getting a lot of disagree-votes because I don’t think crypto was the issue here. People who just have USD sitting in FTX right now lost their money too.
FTX shouldn’t have been risky. It wasn’t a DAO, or based entirely off some token or chain, it was an exchange. It should have just been connecting people who wanted to buy crypto with people who wanted to sell crypto, and taking a fee for doing this. The exchange itself shouldn’t be taking any risk.
How it happened looks at least in part to do with leveraged transactions, which allow customers to buy more crypto by supplementing their purchase with a loan. But we’ve allowed leveraged transactions in stocks for a hundred years. This looks a lot more like garden-variety financial crime than some problem with crypto.
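As a toy illustration of the mechanics mentioned above (all numbers invented), here is why a leveraged purchase creates risk for the lender: the buyer's own margin absorbs only the first 1/leverage of any price drop, and below that point the loss lands on whoever extended the loan, so a prudent exchange liquidates the position well before then.

```python
def liquidation_price(entry_price: float, leverage: float) -> float:
    # With x-times leverage the buyer posts 1/x of the position as margin,
    # so a price drop of 1/x erases that margin entirely.
    return entry_price * (1.0 - 1.0 / leverage)

# Buying 1 BTC at $20,000 with 5x leverage: $4,000 of the buyer's own
# margin plus a $16,000 loan. The position must be closed out before
# the price reaches $16,000, or the remaining loss falls on the lender.
```

Real exchanges liquidate at a buffer above this break-even point (maintenance margin); the sketch ignores fees and that buffer for simplicity.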
Here’s a quote from former US Treasury Secretary Larry Summers in a recent Bloomberg interview that backs up some of the claims in this comment:
A lot of people have compared this to Lehman. I would compare it to Enron—the smartest guys in the room, not just financial error, but certainly from the reports, whiffs of fraud. Stadium namings very early in a company’s history. Vast explosion of wealth that nobody quite understands where it comes from.
[...] I think this is probably less about the complexities of the nuances of the rules of crypto regulation and more about some very basic financial principles that go back to financial scandals that took place in ancient Rome.
The relation to crypto is that the bulk of crypto is poorly regulated. Some of that is solvable—well regulated exchanges should be possible. The extreme volatility also increases the temptation toward fraud. So the fraud risk is higher than in a well-regulated industry.
I’d submit that a well-regulated and managed exchange is going to find it much harder to achieve a stratospheric valuation, and other parts of crypto are harder to regulate well. So some skepticism toward huge crypto-linked donors is warranted.
More crypto regulation is coming, and many crypto protocols have worked hard already to be regulatory-compliant. But regulation won’t be uniform across jurisdictions; there will always be loopholes that allow regulatory arbitrage.
Some exchanges, such as Coinbase and Kraken, are based and regulated within the US and are subject to much stricter oversight than FTX, which seems to have been deliberately based in Hong Kong and then the Bahamas precisely to avoid US regulatory oversight. (Arguably, this should have been a red flag in terms of EA’s relationship with FTX.)
The US, UK, or EU can regulate all they want, but crypto finance is a global business, and there are plenty of less-regulated havens willing to host crypto businesses.
Hopefully crypto investors, traders, and users will become savvier about checking where businesses are operating, and what regulatory scrutiny they’re subject to.
Agreed on that. My point was that it would be a lot harder for an individual to get super-rich quick in a regulated market. No sane regulator is going to allow a regulated party to risk customer assets for the party’s benefit, and few will allow crazy leverage. And the whole thing will require significantly more of a buffer in fiat currency, again limiting any single person’s ability to get megarich.
In short, I think there are few ways for a well-regulated exchange to be stratospherically profitable. So people should not expect the rise of new crypto megadonors who hail from regulated backgrounds.
I would agree with this. Separate from the object-level causes of the current crisis, crypto as an industry has accepted and normalized a lack of accountability that other industries haven’t. And I agree that lack of regulation and high volatility make fraud more likely.
I would want to avoid purely focusing on crypto, because I think the meta-lesson I might take away is less “crypto bad” and more “make sure donors and influential community members are accountable,” whether that be to regulators, independent audits, or otherwise. (And accountable in a real due diligence sense, because it’s easy for that word to just be an applause light.) But yes, skepticism of crypto-linked donors would be justified under this framework.
I have no idea why this comment is no longer endorsed by its author, because it’s entirely correct. Not only is crypto a great way to scam people because transactions can’t be reversed & there’s virtually no regulation for most of the space; the fact that it’s so hard to make money in crypto across an entire cycle also means that entities have a huge incentive to resort to scamming.
False, it works just fine. It’s a token that can’t be duplicated and people can send to each other without any centralized authority.
and only is there for scams and fraud.
There are indeed a lot of those, but scams and fraud were very clearly not the intention of its creators. Realistically they were cryptography nerds who wanted to make something cool, or libertarians with overly-idealistic visions of the future.
Not surprising that FTX collapsed.
Clear hindsight bias. This person should have made some money betting against FTX before it collapsed and then I’d take them more seriously.
Basically, the comment is just your standard “cryptocurrency bad” take, without any attempt at justifying their claims or even saying much of anything other than expressing in an inflammatory way that they don’t like cryptocurrency.
“This person should have made some money betting against FTX before it collapsed and then I’d take them more seriously.”
this is naive EMH fundamentalism
not everything can be shorted, not everything can be shorted easily, not everything should be shorted, markets can be manipulated. Especially the crypto market. It can simultaneously be the case that people 100% think X is a fraud, that X collapses, and that shorting X would have been a losing trade over most timeframes. “Never short” is an oversimplification but honestly not a bad one.
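A toy P&L calculation (all numbers invented) makes the parent comment's point concrete: a short seller can be entirely right that an asset eventually collapses and still lose money, because losses are unbounded on the way up and a margin call during a spike can force the exit at the worst moment.

```python
def short_pnl(entry: float, exit_price: float, size: float = 1.0,
              borrow_fee: float = 0.0) -> float:
    # Profit on a short: sell borrowed units at `entry`, buy them back
    # at `exit_price`, minus whatever it cost to borrow them meanwhile.
    return (entry - exit_price) * size - borrow_fee

# Trader shorts at $100. The asset does eventually collapse to $5,
# but first spikes to $250 and triggers a margin call:
squeezed_out = short_pnl(entry=100, exit_price=250)    # -150: forced out on the spike
held_to_collapse = short_pnl(entry=100, exit_price=5)  # +95: only if you survive the squeeze
```

Borrow fees compound the problem: in thinly traded or heavily shorted markets they can eat most of the eventual payoff even when the thesis is right.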
Most of that isn’t even clearly bad, and I find it hard to see good faith here.
Your criticism of Binance amounts to “it’s cryptocurrency”. Everyone knows crypto can be used to facilitate money laundering; this was, for Bitcoin, basically the whole point. Similarly the criticism of Ponzi schemes; there were literally dozens of ICOs for things that were overtly labeled as Ponzis—Ponzicoin was one of the more successful ones, because it had a good name. Many people walked into this with eyes open; many others didn’t, but they were warned, they just didn’t heed the warnings. Should we also refuse to take money from anyone who bets against r/wallstreetbets and Robinhood? Casinos? Anyone who runs a platform for sports bets? Prediction markets? Your logic would condemn them all.
It’s not clear why FTX would want to spend this amount of money on buying a fraudulent firm.
FTX would prefer that the crypto sector stay healthy, and backstopping companies whose schemes were failing serves that goal. That is an entirely sufficient explanation and one with no clear ethical issues or moral hazard.
Even in retrospect, I think this was bad criticism and it was correct to downvote it.
My criticism of Binance was not “it’s cryptocurrency.” My criticism of Binance was that at the very time that SBF allied with Binance, it was a “hub for hackers, fraudsters and drug traffickers.” Apparently your defense of SBF is that “everyone knows” crypto is good for little else . . . but perhaps if someone enters a field that is mostly or entirely occupied by criminal activity, that isn’t actually an excuse?
As for backstopping other scams and frauds, that isn’t a way to make sure that the “crypto sector stays healthy” (barring very unusual definitions of the word “healthy”), and in actuality, we’re now seeing evidence that FTX was just trying to extract assets from other companies in a desperate attempt to shore up their own malfeasance and fraud. https://twitter.com/AutismCapital/status/1591569275642589184
Yeah, still not seeing much good faith. You’re still ahead of AutismCapital, though, which is 100% bad faith 100% of the time. If you believe a word it says I have a bridge to sell you.
Is this Sam in disguise? You’re literally the only person in existence who seems to think it was somehow unfair to be suspicious (and correctly so!) of SBF for having hired a chief compliance officer with a long history of fraud, and of his pattern of trying to buy up other people’s frauds/scams.
The only flaw in my earlier comment is that I was too charitable towards SBF in suggesting that there might be some plausible excuse for the multiple red flags I noticed.
Thanks for this! I echo Lizka’s comment about linkposting.
In light of the recent events I’m struggling a bit with taking my hindsight-bias shades off, and while I scored it reasonably highly, I don’t think I can fairly engage with whether it should have received a prize over other entries even if I had the capacity to (let alone speak for other panelists). I do remember including it in the comment mainly because I thought it was a risk that didn’t receive enough attention and was worth highlighting (though I have a pretty limited understanding of the crypto space and ~0 clue that things would happen in the way they did).
I think it’s worth noting that there has been at least one other post on the forum that engaged with this specifically, but unfortunately didn’t receive much attention. (Edit: another one here)
Ultimately though, I think it’s more important to think about what actionable and constructive steps the EA community can take going forward. There are a lot of unanswered questions wrt accountability from EA leaders in terms of due diligence, what was known or could have been suspected prior to Nov 9th this year, and what systems or checks and balances were in place, etc. These need to be answered so the community can work out the best next steps to minimise the likelihood of something like this happening again.
I also think there are questions around how these kinds of decisions are made when benefits affect one part of the EA community but the risks are pertinent to all, and how to either diversify these risks, or make decision-making more inclusive of more stakeholders, keeping in mind the best interests of the EA movement as a whole.
This is something I’m considering working on at the moment and will try and push for—do feel free to DM me if you have thoughts, ideas, or information.
I’d regard incentive to discount highly immoral business practices (e.g. what happened with Alameda in 2018) as stemming from a conflict of interest (i.e. interest 1: promote integrity in EA; interest 2: get lots of money from SBF for EA. These were in conflict!)
I wouldn’t say orthogonal, more upstream. If SBF had been shunned from the community in 2018, would we be in this situation now? Sure, he might still have committed massive fraud with the ends of gaining wealth and influence, but the focus would be on the Democrats, or whatever other group became his main affiliation.
No, you’re thinking about it entirely wrong. If everyone who did something analogous to Alameda 2018 was shunned, there probably wouldn’t be any billionaire EA donors at all. It was probably worse than most startups, but not remarkably worse. It was definitely not a reliable indicator that a fraud or scandal was coming down the road.
Dustin Moskovitz and Jaan Tallinn were already EA ~billionaire donors well before 2018. They haven’t done anything analogous to what SBF/FTX/Alameda did. What examples are you thinking of?
Thanks (link to the comment). I think those entries really should’ve been put on the EA Forum as posts to be interacted with (like with the Future Fund AI Worldview Prize[1])
Yeah, I can confirm that we evaluated that submission.
Re: putting them on the Forum — we didn’t have the capacity to do that (and I’m not sure it would have been helpful to do that for all the submissions), but in general, I really encourage people to link-post relevant content to the EA Forum. So, you could link-post this (or similar content in the future).
[I should note that I have low capacity right now and might not reply to this thread. Apologies in advance!]
Sven Rone should’ve won a prize in the Red Teaming contest[1]:
The Effective Altruism movement is not above conflicts of interest
[published Sep 1st 2022]
although it looks like it wasn’t actually entered? Edit: it was, but not posted as a top-level post on the EA Forum (see comments below).
I wrote that comment from over a month ago. And I actually followed it up with a more scathing comment that got downvoted a lot, and that I deleted out of a bit of cowardice, I suppose. But here’s the text:
Consider this bit from the origin story of FTX:
Binance, you say? This Binance?
Or consider FTX’s hiring of Daniel Friedberg as a chief compliance officer. This article claims that he had been involved in previous cheating/fraud at other businesses:
Then there are all the recent examples of FTX trying to buy up other crypto players. For example, in July, FTX signed a deal to buy BlockFi for up to $240 million, and to give it $400 million in revolving credit. BlockFi is most famous for having agreed to pay $100 million in penalties for its securities fraud. It’s not clear why FTX would want to spend this amount of money on buying a fraudulent firm.
Just last week, there was a story that FTX is thinking about buying Celsius, another fraudulent firm.
Another story from July had the remarkable claim that SBF is even thinking of putting his own cash into bailing out other crypto firms:
Why is FTX and perhaps SBF himself putting so much money into buying up other people’s scams? I would hope it’s because they intend to reform the crypto industry and put it on more of a moral footing, although that would reduce the market size by an order of magnitude or two.
***
At least, SBF and FTX ought to provide more transparency into where exactly all the wealth came from, and what (if anything) they are actively doing to prevent crypto frauds/scams. And one might argue that FTX Foundation has a particular moral duty to establish a fund to help out all of the people whose lives were ruined by falling for crypto’s many Ponzi schemes and other assorted scams.
Wow, I didn’t see it at the time but this was really well written and documented. I’m sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.
Moreover, Sven Rone is a pseudonym. The author used a pen name as their views were unpopular and underappreciated at the time; they likely feared career repercussions if they went public with it. It’s unfortunate that this was the environment they found themselves in.
Seconded. This whole saga has really made me sour on some already mixed views on EA epistemics.
I find myself having a mixed opinion of how EA responded. It wasn’t outright terrible epistemics, unlike most of the world reacting to a similar event, but there were real failures of epistemics.
On the other hand, there were also successes in EA epistemics.
I think the post ended up around 0 or 1 karma, is that right? (I mean before people changed their voting based on hindsight!) I think it’s important to distinguish between “got downvoted a lot but ended up at neutral karma” vs. “got downvoted double digits into no longer being visible.” The former reflects somewhat poorly on EA, the latter very poorly.
I think the most informative signal here is not the exact karma that comment ended up with but rather that the author ended up deleting it despite believing that what he was saying was potentially important and not receiving any reasons to think he was wrong. A culture where people feel compelled to silence themselves is worse than one where some comments are wrongly downvoted without much consequence to the author.
I think the most important data points here are any comments that were left, and the net karma of the comment. People have in fact been known to overreact, or react in idiosyncratic ways, in forum discussions; I haven’t seen the thread in question, but if the responses were friendly and the comment got ~0 net karma, then that would be a large update for me.
I definitely took “that got downvoted a lot” to mean that the comment got a lot of net downvotes, not just that people offset its upvotes to keep it around a neutral 0. I think it’s pretty bad to describe vote patterns that misleadingly, if it was hovering around 0.
Good point. :S
Are we talking about this deleted comment? It has 6 overall karma in 9 votes, and −3 agreement in 5 votes.
No, I was talking about Stuart Buck’s initial comment in that same thread, which is still up and now has high upvotes.
But Stuart also mentioned he deleted a second comment after it got downvoted too, so that must be the one you’re linking to. (We also don’t know if some people retroactively upvoted the deleted comment, it’s at +6 now but could’ve been negative at the time of deletion. I think I’m still able to vote on the deleted comment – though maybe that’s just because I had already voted on it before it got deleted [strong upvote and weak agree vote]).
Either way it seems highly unlikely that the deleted comment I linked to had lots of negative votes. It had a few disagree votes but very likely not more than 1-2 karma downvotes.
I like how Hacker News hides comment scores. Seems to me that seeing a comment’s score before reading it makes it harder to form an independent impression.
I fairly frequently find myself thinking something like: “this comment seems fine/interesting and yet it’s got a bunch of downvotes; the downvoters must know something I don’t, so I shouldn’t upvote”. If others also reason this way, the net effect is herd behavior? What if I only saw a comment’s score after voting/opting not to vote?
Maybe quadratic voting could help, by encouraging everyone to focus their voting on self-perceived areas of expertise? Commenters should be trying to impress a narrow & sophisticated audience instead of a broad & shallow one?
EDIT: Another thought: If there was a way I could see my recent votes, I could go back and reflect on them to ensure I’m voting in a consistent manner across threads
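For readers unfamiliar with the mechanism mentioned above: quadratic voting makes each additional vote on the same item cost quadratically more credits, so voters are pushed to concentrate their influence where they care (or know) most. Here is a minimal sketch of how it could work — illustrative only; the function names, credit budget, and reduction rule are my assumptions, not any forum’s actual system:

```python
# Minimal sketch of quadratic voting (illustrative assumptions, not a real forum's rules).
# Casting n votes on a single item costs n^2 credits, so spreading votes thinly
# is cheap while piling votes onto one item gets expensive fast.

def vote_cost(num_votes: int) -> int:
    """Credits required to cast num_votes on a single item (cost = n^2)."""
    return num_votes ** 2

def allocate_votes(budget: int, desired: dict[str, int]) -> dict[str, int]:
    """Trim desired vote counts until the total quadratic cost fits the credit budget."""
    votes = dict(desired)
    while sum(vote_cost(n) for n in votes.values()) > budget:
        # Reduce the item with the most votes, since its marginal vote is priciest.
        item = max(votes, key=lambda k: votes[k])
        votes[item] -= 1
    return votes
```

Under this scheme, 3 votes on one comment cost 9 credits, so a voter with a fixed budget can only go deep on a few items and is nudged toward voting mainly within their areas of expertise.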
I think that what FTX is accused of in this comment is legitimately much more the kind of thing where a charitable recipient is not morally obliged to demand this level of careful checking of everything, because our civilization is just not actually able to support this level of competency pornography.
Stealing your customers’ funds is a very different matter from “some of the people who use our services are criminals”. Why, MIRI has in the past accepted matching funds from Google, which I’m sure profits a whole lot off criminals using their services! And some of those criminals may even be bad people!
But you can’t, actually, run a post-agricultural civilization on the principle of everybody who engages in every transaction checking out the full moral character of everybody who transacts with them. If you did try to build clever infrastructure for that, its first use on the margin would be by the right to hunt down sex workers (as already occurs with Visa) and by the left to hunt down people who said bad things on Twitter.
In a hunter-gatherer tribe it maybe makes sense to demand that people not transact with that bad guy over there; it scales as far as it needs to scale. And MIRI would not take money from somebody who we knew had stolen in charity’s name. But to figure it all out—if you want to read about Civilizations that have the basic infrastructure and competence to run those kinds of traces, go read science fiction; here on Earth you’ve got VC firms trying to run six months of due diligence and then they invest in FTX.
IMO the amount of diligence someone ought to perform on their counterparties’ character is different in different circumstances. “This person is one of hundreds of people I transact with every week” carries different obligations than “This person is one of the four big donors who fund my organization” carries different obligations than “This person has been my only source of income for the past two years”. Different EAs were at different points along this spectrum.
I generally agree with you, but in this case SBF
1) hired a high-level person with a long history of fraud (you don’t see Asana or Stripe doing this); and
2) described his own business as a Ponzi scheme (see https://www.bloomberg.com/news/articles/2022-04-25/sam-bankman-fried-described-yield-farming-and-left-matt-levine-stunned ).
It was obvious that he was up to no good.
“But Sequoia”—I’m not convinced that they did any due diligence, judging by what they published on their own website: https://twitter.com/zebulgar/status/1590394857474109441 It’s not the only occasion when it looks to me like “top” VC firms leapt into investments out of FOMO, with zero effort at due diligence: https://medium.com/swlh/why-are-investors-eager-to-lose-money-on-health-tech-f8c678ccc417
Quoting from the article you linked about the involvement of Daniel Friedberg, FTX’s Chief Regulatory Officer, in a previous scandal:
Sorry, did this text get heavily downvoted? If so, we should be ashamed.
“They tell you to do your thing but they don’t mean it. They don’t want you to do your own thing, not unless it happens to be their thing, too. It’s a laugh, Goober, a fake. Don’t disturb the universe, Goober, no matter what the posters say.”—Robert Cormier, the Chocolate War
Yes, we should. People hesitate, or are averse, to bring issues up with authorities/communities due to fears of being punished. As groups collectivize and become increasingly memetically homogeneous (which coincides with the solidification of power/influence/financial structures and hierarchies), dissent of any form becomes decreasingly tolerated. It becomes safer/easier to criticize EA as an outsider than as a member who simultaneously wants to grow in EA, be well received by potential EA organization employers, and rise up the oft-unstated hierarchies that developed as EA blossomed.
Until this debacle, SBF was lionized beyond comparison by the major community organizations. And moreover, he was closely associated with EA giants via the foundation/future fund and other projects. He had excellent PR presence due to the constant EA affiliated media attention. He was 80k’s paragon of earning to give.
That’s not to say figures like him were untouchable (nothing in EA is untouchable fortunately), but criticizing the most popular embodiment of success would result in online backlash at best or at worst, damage to the critic’s career capital. In a situation similar to Stuart’s, that is precisely why Sven’s essay on conflicts of interest in EA was anonymous. It’s also why it didn’t even get honorable mention in the essay competition. Even if the criticisms themselves were valid and justified, the PR risks of promoting dissent made sure it wasn’t given a prize. Demands for greater transparency or accountability from EA vanguards in the wake of recent developments may also be viewed instinctively or intuitively as threats to harmony.
Not everyone enjoys having beloved paragons and prophets criticized. Not everyone likes having their faith or trust in institutions shattered, let alone challenged. Not everyone maintains a cynical, skeptical attitude towards those in authority positions. During EA training, newcomers certainly aren’t prepared for such developments, perhaps because events like these are not expected to ever come up in the first place.
It remains a problem the community has faced since day one, although much of it is attributable to hierarchical and tribalistic human psychology rather than EA itself. While EA has better epistemics and remains more open to criticism than the average ideological movement, harshness or cynical sternness used to be, in EA’s early days, much more commonplace and welcomed than it is now. As EA has grown and become more of a community, intra-group harmonic cohesion became increasingly prized and promoted. Those who elicit controversy through intellectual dissent (rather than conforming) are more likely to be downvoted.
Spouting off this stuff isn’t productive on my end. I don’t have a solution, but there needs to be better ways to increase reception towards contrarian/unpopular takes, minimizing unjustified repercussions for dissenters. Those who are harshest or most skeptical among EAs should not be dismissed as impediments to progress. I have faith EA has the capacity to ameliorate this.
By the way, it looks like the comment is now heavily upvoted. I’ve seen this happen quite a few times, so it seems like it might be good to withhold judgment about the net votes for a day or two. But of course it could be that it became highly upvoted because of reactions like this, so I’m not sure what the best course of action is.
I don’t follow crypto or its space, but this seems like a bad habit or norm: downvoting pieces that criticize EA’s questionable relationship to crypto.
Cryptocurrency doesn’t actually work, and is only there for scams and fraud. Not surprising that FTX collapsed.
I think you may be getting a lot of disagree-votes because I don’t think crypto was the issue here. People who just have USD sitting in FTX right now lost their money too.
FTX shouldn’t have been risky. It wasn’t a DAO, or based entirely off some token or chain, it was an exchange. It should have just been connecting people who wanted to buy crypto with people who wanted to sell crypto, and taking a fee for doing this. The exchange itself shouldn’t be taking any risk.
The reason appears, at least in part, to involve leveraged transactions, which allow customers to buy more crypto by supplementing their purchase with a loan. But we’ve let leveraged transactions happen with stock for a hundred years. This looks a lot more like garden-variety financial crime than some problem with crypto.
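To illustrate why leverage turns moderate price moves into total losses, here is a simplified sketch — the numbers and the single-step equity formula are hypothetical assumptions for illustration, not FTX’s actual margin rules:

```python
# Simplified illustration of leverage amplifying losses
# (hypothetical numbers; not any exchange's actual margin mechanics).

def equity_after_move(collateral: float, leverage: float, price_change: float) -> float:
    """Remaining equity after the asset moves by price_change (e.g. -0.10 for -10%).

    The position size is collateral * leverage; gains and losses on the whole
    position accrue to the customer's comparatively small collateral.
    """
    position = collateral * leverage
    return collateral + position * price_change
```

With 5x leverage, a mere -20% move in the asset wipes out the collateral entirely, which is why an exchange extending such loans must liquidate positions promptly or eat the shortfall itself.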
Here’s a quote from former US Treasury Secretary Larry Summers in a recent Bloomberg interview that backs up some of the claims in this comment:
Sorry for misfiring here, I’ll retract my comment.
The relation to crypto is that the bulk of crypto is poorly regulated. Some of that is solvable—well regulated exchanges should be possible. The extreme volatility also increases the temptation toward fraud. So the fraud risk is higher than in a well-regulated industry.
I’d submit that a well-regulated and managed exchange is going to find it much harder to achieve a stratospheric valuation, and other parts of crypto are harder to regulate well. So some skepticism toward huge crypto-linked donors is warranted.
More crypto regulation is coming, and many crypto protocols have worked hard already to be regulatory-compliant. But regulation won’t be uniform across jurisdictions; there will always be loopholes that allow regulatory arbitrage.
Some exchanges, such as Coinbase and Kraken, are based and regulated within the US, and are subject to much stricter oversight than FTX—which seems to have been deliberately based in Hong Kong and then the Bahamas precisely in order to avoid US regulatory oversight. (Arguably, this should have been a red flag in terms of EA’s relationship with FTX.)
The US, UK, or EU can regulate all they want, but crypto finance is a global business, and there are plenty of less-regulated havens willing to host crypto businesses.
Hopefully crypto investors, traders, and users will become savvier about checking where businesses are operating, and what regulatory scrutiny they’re subject to.
Agreed on that. My point was that it would be a lot harder for an individual to get super-rich quick in a regulated market. No sane regulator is going to allow a regulated party to risk customer assets for the party’s benefit, and few will allow crazy leverage. And the whole thing will require significantly more of a buffer in fiat currency, again limiting any single person’s ability to get megarich.
In short, I think there are few ways for a well-regulated exchange to be stratospherically profitable. So people should not expect the rise of new crypto megadonors who hail from regulated backgrounds.
I would agree with this. Separate from the object-level causes of the current crisis, crypto as an industry has accepted and normalized a lack of accountability that other industries haven’t. And I agree that lack of regulation and high volatility make fraud more likely.
I would want to avoid purely focusing on crypto, because I think the meta-lesson I might take away is less “crypto bad” and more “make sure donors and influential community members are accountable,” whether that be to regulators, independent audits, or otherwise. (And accountable in a real due diligence sense, because it’s easy for that word to just be an applause light.) But yes, skepticism of crypto-linked donors would be justified under this framework.
I have no idea why this comment is no longer endorsed by its author because it’s entirely correct. Not only is crypto a great way to scam people because transactions can’t be reversed & there’s virtually no regulation for most of the space, the fact that it’s so hard to make money in crypto across an entire cycle means that entities have a huge incentive to resort to scamming.
I can tell you why I downvoted it.
False, it works just fine. It’s a token that can’t be duplicated and people can send to each other without any centralized authority.
There are indeed a lot of those, but scams and fraud were very clearly not the intention of its creators. Realistically they were cryptography nerds who wanted to make something cool, or libertarians with overly-idealistic visions of the future.
Clear hindsight bias. This person should have made some money betting against FTX before it collapsed and then I’d take them more seriously.
Basically, the comment is just your standard “cryptocurrency bad” take, without any attempt at justifying their claims or even saying much of anything other than expressing in an inflammatory way that they don’t like cryptocurrency.
“This person should have made some money betting against FTX before it collapsed and then I’d take them more seriously.”
this is naive EMH fundamentalism
not everything can be shorted, not everything can be shorted easily, not everything should be shorted, markets can be manipulated. Especially the crypto market. It both can be the case that people 100% think X is a fraud, and X collapses, and shorting X would have been a losing trade over most timeframes. “Never short” is an oversimplification but honestly not a bad one.
Most of that isn’t even clearly bad, and I find it hard to see good faith here.
Your criticism of Binance amounts to “it’s cryptocurrency”. Everyone knows crypto can be used to facilitate money laundering; this was, for Bitcoin, basically the whole point. Similarly the criticism of Ponzi schemes; there were literally dozens of ICOs for things that were overtly labeled as Ponzis—Ponzicoin was one of the more successful ones, because it had a good name. Many people walked into this with eyes open; many others didn’t, but they were warned, they just didn’t heed the warnings. Should we also refuse to take money from anyone who bets against r/wallstreetbets and Robinhood? Casinos? Anyone who runs a platform for sports bets? Prediction markets? Your logic would condemn them all.
FTX would prefer that the crypto sector stay healthy, and backstopping companies whose schemes were failing serves that goal. That is an entirely sufficient explanation and one with no clear ethical issues or moral hazard.
Even in retrospect, I think this was bad criticism and it was correct to downvote it.
My criticism of Binance was not “it’s cryptocurrency.” My criticism of Binance was that at the very time that SBF allied with Binance, it was a “hub for hackers, fraudsters and drug traffickers.” Apparently your defense of SBF is that “everyone knows” crypto is good for little else . . . but perhaps if someone enters a field that is mostly or entirely occupied by criminal activity, that isn’t actually an excuse?
As for backstopping other scams and frauds, that isn’t a way to make sure that the “crypto sector stays healthy” (barring very unusual definitions of the word “healthy”), and in actuality, we’re now seeing evidence that FTX was just trying to extract assets from other companies in a desperate attempt to shore up their own malfeasance and fraud. https://twitter.com/AutismCapital/status/1591569275642589184
Yeah, still not seeing much good faith. You’re still ahead of AutismCapital, though, which is 100% bad faith 100% of the time. If you believe a word it says I have a bridge to sell you.
Is this Sam in disguise? You’re literally the only person in existence who seems to think it was somehow unfair to be suspicious (and correctly so!) of SBF for having hired a chief compliance officer with a long history of fraud, and of his pattern of trying to buy up other people’s frauds/scams.
The only flaw in my earlier comment is that I was too charitable towards SBF in suggesting that there might be some plausible excuse for the multiple red flags I noticed.
Thanks for this! I echo Lizka’s comment about linkposting.
In light of the recent events I’m struggling a bit with taking my hindsight-bias shades off, and while I scored it reasonably highly, I don’t think I can fairly engage with whether it should have received a prize over other entries even if I had the capacity to (let alone speak for other panelists). I do remember including it in the comment mainly because I thought it was a risk that didn’t receive enough attention and was worth highlighting (though I have a pretty limited understanding of the crypto space and ~0 clue that things would happen in the way they did).
I think it’s worth noting that there has been at least one other post on the forum that engaged with this specifically, but unfortunately didn’t receive much attention. (Edit: another one here)
Ultimately though, I think it’s more important to think about what actionable and constructive steps the EA community can take going forward. I think there are a lot of unanswered questions wrt accountability from EA leaders in terms of due diligence, what was known or could have been suspected prior to Nov 9th this year, and what systems or checks/balances were in place, etc., that need to be answered, so the community can work out the best next steps in order to minimise the likelihood of something like this happening again.
I also think there are questions around how these kinds of decisions are made when benefits affect one part of the EA community but the risks are pertinent to all, and how to either diversify these risks, or make decision-making more inclusive of more stakeholders, keeping in mind the best interests of the EA movement as a whole.
This is something I’m considering working on at the moment and will try and push for—do feel free to DM me if you have thoughts, ideas, or information.
(Commenting in personal capacity etc)
Strongly disagree. That criticism is mostly orthogonal to the actual problems that surfaced. Conflicts of interest were not the problem here.
I’d regard incentive to discount highly immoral business practices (e.g. what happened with Alameda in 2018) as stemming from a conflict of interest (i.e. interest 1: promote integrity in EA; interest 2: get lots of money from SBF for EA. These were in conflict!)
Again, that’s orthogonal to the actual problems that surfaced.
I wouldn’t say orthogonal, more upstream. If SBF had been shunned from the community in 2018, would we be in this situation now? Sure, he might still have committed massive fraud with the ends of gaining wealth and influence, but the focus would be on the Democrats, or whatever other group became his main affiliation.
No, you’re thinking about it entirely wrong. If everyone who did something analogous to Alameda 2018 was shunned, there probably wouldn’t be any billionaire EA donors at all. It was probably worse than most startups, but not remarkably worse. It was definitely not a reliable indicator that a fraud or scandal was coming down the road.
Dustin Moskovitz and Jaan Tallinn were already EA ~billionaire donors well before 2018. They haven’t done anything analogous to what SBF/FTX/Alameda did. What examples are you thinking of?
Those two are perfectly good examples. They did. Every successful startup does something approximately that bad, on the way to the top.
It seems it was entered, according to the (second) comment from Bruce here: Winners Red Teaming
Thanks (link to the comment). I think those entries really should’ve been put on the EA Forum as posts to be interacted with (like with the Future Fund AI Worldview Prize[1])
Which I imagine is no longer happening :(
Yeah, I can confirm that we evaluated that submission.
Re: putting them on the Forum — we didn’t have the capacity to do that (and I’m not sure it would have been helpful to do that for all the submissions), but in general, I really encourage people to link-post relevant content to the EA Forum. So, you could link-post this (or similar content in the future).
[I should note that I have low capacity right now and might not reply to this thread. Apologies in advance!]