I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
At a minimum, candidates should be invited to seek a waiver of any “complete in one sitting” requirement on an early-round work task for good cause, without any adverse consequences whether the waiver is granted or not. Speaking as an employed individual with a preschooler, three hours of uninterrupted time is a big ask for an early-round job application process!
It’s unclear from that whether the due diligence scaled appropriately with the size of the donation. I doubt ~anyone is batting an eye at charities that took $25K-$50K from SBF, due diligence or no. The process at the tens-of-millions-per-year level needs to be bespoke, though.
Yes, I think we would benefit from having organizations run the pre-app and standard app in parallel for one cycle (while compensating applicants for the additional work on the margin!). We’d be looking for a pre-app “score” for each organization at which very few people whose application would have survived the first round of the old process would be eliminated by the pre-app and/or ~no one who was ultimately accepted was screened out.
I think that’s a good critique, although it can be mitigated somewhat with a narrower interpretation. In the narrower view, motivation (e.g., “effort required to unlock”) is a necessary but not sufficient precursor to various actions.
Being a jerk on X requires only low motivation, but if I’m not prone to being a jerk in the first place then my response to that level of motivation will be [no action], which will not result in any criticism. Conditional on someone posting criticism at that level of motivation, the criticism will be ~in the form of mean tweets, because the motivation level isn’t high enough to unlock higher forms of criticism.
My impression is that high quality work on both sides is done by people with strong inherent dedication to truth-seeking and intellectual inquiry [ . . .]
. . . as well as sufficient motivation and resources to do so. As with the lower levels, I suggest that high motivation unlocks high-level work in the sense that it is a necessary but not sufficient precondition. This means that people with strong inherent dedication to truth-seeking and intellectual inquiry will still not produce high-quality work unless they are motivated enough to do so.
Right—there’s still a correlation between the legible external factor and outcome, even if there is no causal relationship.
Hypothetical example: Prestigious University does not consider test scores in determining admissions at all. However, test scores happen to be strongly correlated to academic ability, and it so happens that most admitted students have scores in the 99th percentile. This would still be useful information for someone with a test score in the 80th percentile even though there is zero direct causal relationship between test scores and admission.
Unclear, although most nonprofits are attracting significantly less risky donors than crypto people. (SBF wasn’t even the first crypto scammer sentenced to a multidecade term in the Southern District of New York in the past twelve months....)
I’d suggest that even to the extent a non-profit is generally outsourcing that kind of work, it can’t just rely on standard third-party practices where significant information with some indicia of reliability is brought directly to it.
At least where the acceptance rate is 3-5 percent, it seems plausible that there could be something like the “AI Safety Common Pre-Application” that would reduce the time burden for many applicants. In many cases it would seem possible to say, on information not customized to a specific program, that an applicant just isn’t going to make that top 3-5%.
(Applicants meeting specified criteria would presumably be invited to skip the pre-app stage, eliminating the risk of those applicants being erroneously screened out on common information.)
By analogy: In some courts, you have to seek permission from the court of appeals prior to appealing. The bar for being granted permission is much lower than for succeeding on appeal, which means that denials at the permission stage save disappointed litigants the resources they’d otherwise use to prepare full appeals.
This is an extremely rich guy who isn’t donating any of his money.
But cf. the “stages of change” in the transtheoretical model of behavior change. A lack of action suggests he has not reached the action stage, but could be in the contemplation or preparation stages.
Moreover, even if a critic has a sufficiently high level of motivation in the abstract, it doesn’t follow that they will be incentivized to produce much (if any) “polite, charitable, good-faith, evidentiarily rigorous” work. (Many) critics want to be effective too—and they may reasonably (maybe even correctly!) think that effort devoted to producing castle memes produces a higher ROI than polishing, simplifying, promoting, and defending their more rigorous critiques.
For example, a committed e/acc’s top priority is arguably the avoidance of government regulation that seriously slows down AI development. Memes are more important for 90%, perhaps 99%, of the electorate—so “make EA / AI safety a topic of public scorn and ridicule” seems like a reasonable theory of change for the e/acc folks. When you’re mainly trying to tear someone else’s work down, you may plausibly see maintaining epistemic rigor in your own camp as relatively less important than if you were actually trying to build something.
I think the fitness/suitability of major leaders (at least to the extent we are talking about a time when SBF was on the board) and major donor acceptability evaluation are inherently in scope for any charitable organization or movement.
Do you recall what your conception of a possible customer loss resulting “from bankruptcy” was, and in particular whether it was (at least largely) limited to “monies lent out for margin trading”? Although I haven’t done any research, if user accounts had been appropriately segregated and safeguarded, FTX’s creditors (in a hypothetical “normal” bankruptcy scenario) shouldn’t have been able to make claims against them. There might have been an exception for those involved in margin trading.
This is pretty much the opposite of the EA Forum’s approach, which favours bans.
If you remove ones for site-integrity reasons (spamming DMs, ban evasion, vote manipulation), bans are fairly uncommon. In contrast, it sounds like LW does do some bans of early-stage users (cf. the disclaimer on this list), which could be cutting off users with a high risk of problematic behavior before it fully blossoms. Reading further, it seems like the stuff that triggers a rate limit at LW usually triggers no action, private counseling, or downvoting here.
As for more general moderation philosophy, I think the EA Forum has an unusual relationship to the broader EA community that makes the moderation approach outlined above a significantly worse fit for the Forum than for LW. As a practical matter, the Forum is the ~semi-official forum for the effective altruism movement. Organizations post official announcements here as a primary means of publishing them, but rarely on (say) the effectivealtruism subreddit. Posting certain content here is seen as a way of whistleblowing to the broader community as a whole. Major decisionmakers are known to read and even participate in the Forum.
In contrast (although I am not an LW user or a member of the broader rationality community), it seems to me that the LW forum doesn’t have this particular relationship to a real-world community. One could say that the LW forum is the official online instantiation of the LessWrong community (which is not limited to being an online community, but that’s a major part of it). In that case, we have something somewhat like the (made-up) Roman Catholic Forum (RCF) that is moderated by designees of the Pope. Since the Pope is the authoritative source on what makes something legitimately Roman Catholic, it’s appropriate for his designees to employ a heavier hand in deciding what posts and posters are in or out of bounds at the RCF. But CEA/EVF have—rightfully—mostly disowned any idea that they (or any other specific entity) decide what is or isn’t a valid or correct way to practice effective altruism.
One could also say that the LW forum is an online instantiation of the broader rationality community. That would be somewhat akin to John and Jane’s (made up) Baptist Forum (JJBF) that is moderated by John and Jane. One of the core tenets of Baptist polity is that there are no centralized, authoritative arbiters of faith and practice. So JJBF is just one of many places that Baptists and their critics can go to discuss Baptist topics. It’s appropriate for John and Jane to employ a heavier hand in deciding what posts and posters are in or out of bounds at the JJBF because there are plenty of other, similar places for them to go. JJBF isn’t anything special. But as noted above, that isn’t really true of the EA Forum because of its ~semi-official status in a real-world social movement.
It’s ironic that—in my mind—either a broader or narrower conception of what LW is would justify tighter content-based moderation practices, while those are harder to justify in the in-between place that the EA Forum occupies. I think the mods here do a good job handling this awkward place for the most part by enforcing viewpoint-neutral rules like civility and letting the community manage most things through the semi-democratic karma method (although I would be somewhat more willing to remove certain content than they are).
Ben said “any of the resultant harms,” so I went with something I saw as having a fairly high probability. Also, I mostly limit this to harms caused by “the affiliation with SBF”—I think expecting EA to thwart schemes cooked up by people who happen to be EAs (without more) is about as realistic as expecting (e.g.) churches to thwart schemes cooked up by people who happen to be members (without more).
To be clear, I do not think the “best case scenario” story in the following three paragraphs would be likely. However, I think it is plausible, and is thus responsive to a view that SBF-related harms were largely inevitable.
In this scenario, leaders recognized after the 2018 Alameda situation that SBF was just too untrustworthy and possibly fraudulent (albeit against investors) to deal with—at least absent some safeguards (a competent CFO, no lawyers who were implicated in past shady poker-site scandals, first-rate and comprehensive auditors). Maybe SBF wasn’t too far gone at this point (he hadn’t even created FTX in mid-2018), and a costly signal from EA leaders (we won’t take your money) would have turned him—or at least some of his key lieutenants—away from the path he went down? Let’s assume not, though.
If SBF declined those safeguards, most orgs decline to take his money and certainly don’t put him on podcasts. (Remember that, at least as of 2018, it sounds like people thought Alameda was going nowhere—so the motivation to go against consensus and take SBF money is much weaker at first.) Word gets down to the rank-and-file that SBF is not aligned, likely depriving him of some of his FTX workforce. Major EA orgs take legible action to document that he is not in good standing with them, or adopt a public donor-acceptability policy that contains conditions they know he can’t/won’t meet. Major EA leaders do not work for or advise the FTXFF when/if it forms.
When FTX explodes, the comment from major EA orgs is that they were not fully convinced he was trustworthy and cut ties with him when that came to light. There’s no statutory inquiry into EVF, and no real media story here. SBF is retrospectively seen as an ~apostate who was largely rejected by the community when he showed his true colors, despite the big $$ he had to offer, who continued to claim affiliation with EA for reputational cover. (Or maybe he would have gotten his feelings hurt and started the FTX Children’s Hospital Fund to launder his reputation? Not very likely.)
A more modest mitigation possibility focuses more on EVF, Will, and Nick. In this scenario, at least EVF doesn’t take SBF’s money. He isn’t mentioned on podcasts. Hopefully, Will and Nick don’t work with FTXFF, or if they do they clearly disaffiliate from EVF first. I’d characterize this scenario as limiting the affiliation with SBF by not having what is (rightly or wrongly) seen as EA’s flagship organization and its board members risk lending credibility to him. In this scenario, the media narrative is significantly milder—it’s much harder to write a juicy narrative about FTXFF funding various smaller organizations, and without the ability to use Will’s involvement with SBF as a unifying theme. Moreover, when FTX explodes in this scenario, EVF is not paralyzed in the same way it was in the actual scenario. It doesn’t have a CC investigation, ~$30MM clawback exposure, multiple recused board members, or other fires of its own to put out. It is able to effectively lead/coordinate the movement through a crisis in a way that it wasn’t (and arguably still isn’t) able to due to its own entanglement. That’s hardly avoiding all the harms involved in affiliation with SBF . . . but I’d argue it is a meaningful reduction.
The broader idea there is that it is particularly important to isolate certain parts of the EA ecosystem from the influence of low-trustworthiness donors, crypto influence, etc. This runs broader than the specific examples above. For instance, it was not good to have an organization with community-health responsibilities like EVF funded in significant part by a donor who was seen as low-trustworthiness, or one who was significantly more likely to be the subject of whistleblowing than the median donor.
Is the better reference class “two-year old startups” or “companies supposedly worth over $10B” or “startups with over a billion invested”? I assume a 100 percent investor loss would be rare, on an annualized basis, in the latter two—but was included in the original claim. Most two-year startups don’t have nearly the amount of investor money on board that FTX did.
Optics would be great on that one—an EA has insight that there’s a good chance of FTX collapse (based on not generally-known info / rumors?), goes out and shorts SamCoins to profit on the collapse! Recall that any FTX collapse would gut the FTT token at least, so there would still be big customer losses.
much more media reporting on the EA-FTX association resulting in significantly greater brand damage?
Most likely concern in my eyes.
The media tends to report on lawsuits when they are filed, at which time they merely contain unsubstantiated allegations and the defendant is less likely to comment. It’s unlikely that the media would report on the dismissal of a suit, especially if it was for reasons seen as somewhat technical rather than as a clear vindication of the EA individual/organization.
Moreover, it seems pretty likely to me that EVF or other EA-affiliated entities have information they would be embarrassed to see come out in discovery. This is not based on any belief about misconduct, but on the base rate of organizations that had a bad miss/messup having related information they would be embarrassed about (and I’d characterize this as a bad miss/messup, whether or not a liability-creating one).
If a sufficiently motivated plaintiff sued, and came up with a legal theory that survived a motion to dismiss, I think it fairly likely that embarrassing information would need to be disclosed in discovery. The plaintiff could require various persons and organizations to answer questions under oath that they would rather not answer, posed by a hostile examiner motivated to uncover damaging information rather than a sympathetic podcaster. While “I don’t remember” is usually an acceptable answer, it can also leave the other side’s evidence uncontested if they have anything on point.
For purposes of the next two sentences, “a sufficient basis to believe” means enough that a court would likely allow a good deal of digging if the matter was related or even adjacent to something that was material for purposes of the specific litigation. There’s a sufficient basis to believe that EA leadership may have had good reasons to believe SBF had committed fraud against Alameda investors.[1] There is a sufficient basis to believe that EA PR people were aware of SBF-related risk and were actively working on the topic.[2] The plaintiff could also expand the scope of discovery as previously-discovered information warranted.
If the case didn’t settle before summary-judgment motions, the juicy bits would all be laid out in the plaintiff’s motion, open to public view.
Prompting the legal system into investigating potential EA involvement in the FTX fraud, costing enormous further staff time despite not finding anything?
This seems rather unlikely. The FTX debtor entity is cooperating with the feds. DOJ has several ex-insiders who are singing like canaries, who have good lawyers, and who know that the more people they help the feds convict, the better things will be for their sentences. If there were reasons for the feds to be looking at potential EA involvement in the FTX fraud, it is almost certain the feds would know that at this point without any help from EA sources. Moreover, the FTX or ex-insider information would likely be enough to get the necessary search warrants, wiretaps, etc.
There is of course also, as Will’s note implies, the distraction/expense/angst/etc. of dealing with litigation, whether or not it ultimately has any merit. That would justify giving some weight to whether a disclosure increases the risk of any lawsuit, independent of any merit or concerns about external adverse effects like publicity. However, in my mind that goes both ways! I’d affirmatively want to disclose most information that makes would-be plaintiffs less likely to sue me. If one’s prior is that conditioned on X being not-true, there’s a 75% chance I would specifically deny X for litigation-avoidance reasons, then one can update on the fact that X hasn’t been denied.
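That last point is a standard Bayesian update on silence. A minimal sketch, with all numbers purely illustrative (a 50% prior on X, the comment’s 75% chance of a specific denial if X were untrue, and an assumed zero chance of falsely denying a true X):

```python
# Bayesian update on the *absence* of a denial of X.
# All probabilities are illustrative assumptions, not claims about any real case.
prior_x = 0.5              # assumed prior probability that X is true
p_deny_given_not_x = 0.75  # per the comment: would specifically deny X if untrue
p_deny_given_x = 0.0       # assume no one falsely denies a true X

# Probability of observing silence (no denial) under each hypothesis.
p_silent_given_x = 1.0 - p_deny_given_x          # = 1.0
p_silent_given_not_x = 1.0 - p_deny_given_not_x  # = 0.25

# Bayes' rule: P(X | silence).
posterior_x = (prior_x * p_silent_given_x) / (
    prior_x * p_silent_given_x + (1 - prior_x) * p_silent_given_not_x
)
print(round(posterior_x, 2))  # 0.8 -- silence moves a 50% prior up to 80%
```

So under these assumed numbers, the mere fact that X hasn’t been denied moves a 50% prior up to 80%, which is why non-denial can be informative even without any affirmative evidence.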
[1] Although the Time article doesn’t specify exactly what information was shared with EA leadership, it does indicate that an Alameda exile told Time that SBF “didn’t have a distinction between firm capital and trading capital. It was all one pool.” That’s at least a badge of fraud (commingling). The exiles accused SBF of various things, including “‘willful and knowing violations of agreements or obligations, particularly with regards to creditors’—all language that echoes the U.S. criminal code.” The document alleges that SBF was “misreporting numbers” and “failing to update investors on poor performance.” Continuing: “The team ‘didn’t trust Sam to be in investor meetings alone,’ colleagues wrote. ‘Sam will lie, and distort the truth for his own gain,’ the document says.” Lying to investors is pretty much diagnostic of fraud.
[2] The New Yorker, quoting an unnamed participant on a leadership slack channel: “I guess my point in sharing this is to raise awareness that a) in some circles SBF’s reputation is very bad b) in some circles SBF’s reputation is closely tied to EA, and c) there’s some chance SBF’s reputation gets much, much worse. But I don’t have any data on these (particularly c, I have no idea what types of scenarios are likely), though it seems like a major PR vulnerability. I imagine people working full-time on PR are aware of this and actively working to mitigate it, but it seemed worth passing on if not since many people may not be having these types of interactions.”
Could you say more about that? I suggest that “substantial fraction” may mean something quite different in the context of a bank than here. In the scenario I described, the hypothetical exchange would need to see 80-90% of deposits demanded back in a world where the stocks/bonds had to be sold at a 25-50% loss. It could be higher if the exchange had come up with an opt-in lending program that provided adequate cover for not returning (say) 10-15% of the customers’ funds on demand.
I’d also suggest that the “simple loss of confidence snowballing” in modern bank runs is often justified based on publicly known (or discernible) information. I don’t think it was a secret that SVB had bought a bunch of long-term Treasuries that sank in value as interest rates increased, and thus that it did not have the asset value to honor 100% of withdrawals. It wasn’t a secret in ~2008 that banks’ ability to honor 100% withdrawals was based on highly overstated values for mortgage-backed securities.
In contrast, as long as the secret stock/bond purchases remained unknown to outsiders, a massive demand for deposits back would have to occur in the absence of that kind of information. Unlike the traditional banking sector, other places to hold crypto carry risks as well—even self-custody, which poses risks from hacking, hardware failure, forgetting information, etc. So people aren’t going to withdraw unless, at a minimum, convinced that they had a safer place to hold their assets.
Finally, in conducting the cost/benefit analysis, the hypothetical SBF would consider that the potential failure mode only existed in scenarios where 80-90%+ of deposits had been demanded back. Conditional on that having happened, the exchange’s value would likely be largely lost anyway. So the difference in those scenarios would be between ~0 and the negative effects of a smaller-scale fraud. If the hypothetical SBF thought the 80-90%+ scenario was pretty unlikely . . . .
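The arithmetic behind that threshold can be sketched. Assuming (purely hypothetically) that a fraction of customer deposits was secretly invested and that those assets would take a fire-sale haircut if liquidated, the exchange can honor withdrawals up to 1 − (invested fraction × haircut) of deposits:

```python
# Hypothetical solvency threshold for the secret stocks/bonds scenario.
# f       = fraction of deposits secretly invested (assumed, not from the source)
# haircut = fire-sale loss rate on those assets when sold under duress
def failure_threshold(f: float, haircut: float) -> float:
    liquid = 1.0 - f                 # deposits still held as cash/crypto
    recovered = f * (1.0 - haircut)  # proceeds from selling the investments
    return liquid + recovered        # equivalently: 1 - f * haircut

# With an assumed 40% of deposits invested, the comment's 25-50% haircut
# range puts the failure point at 80-90% of deposits being demanded back.
print(round(failure_threshold(0.4, 0.50), 2))  # 0.8
print(round(failure_threshold(0.4, 0.25), 2))  # 0.9
```

The 40% invested fraction is my own illustrative pick; the point is just that, for plausible parameters, failure requires the 80-90%+ withdrawal scenario described above, conditional on which the exchange’s value is largely lost anyway.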
(Again, all of this does not include the risk of the fraud leaking out or being discovered.)
I have very little doubt that any advice given to an individual with significant potential exposure to keep their mouths shut was correct advice as to that individual’s personal interests. I also have very little doubt that anyone who worked for or formally advised FTXFF fits in that category.
To the extent that Nathan is asking about legal advice given to EVF, I don’t think the principle would necessarily hold. Legal advice is going to focus relatively more on the client’s legal risks, and less so (if at all) on the traditionally-conceived public interest, what is in the interest of the long-term future, etc. I’d say “charitable organizations should act in their own legal self-interest” probably defaults to true, but that it’s a fairly weak presumption. With the possible and partial exception of lawyers who are also insiders, I think lawyers will significantly underweight considerations like the epistemic health of the broader EA community and also be seriously limited at estimating the effect of various scenarios on that consideration.
That being said, I doubt Will is in a particularly good position to evaluate the legal advice given to EVF because he was recused from FTX-related matters due to serious conflicts of interest. If he were a lawyer, he might be in a good position to estimate—then he’d have both enough knowledge of the facts and the right professional background to infer things from that knowledge. But he isn’t.
When I looked at past CC actions, I didn’t get the impression that they were in the habit of blowing things out of proportion. But of course I didn’t have the full facts of each investigation.
One reason I don’t put much stock in the possibility that the CC may not “necessarily [be a] trustworthy or fair arbiter” is that it has to act with reasoning transparency because it is accountable to a public process. Its actions with substance (as opposed to issuing warnings) are reviewable in the UK courts, in proceedings where the charity—a party with the right knowledge and incentives—can call them out on dubious findings. The CC may not fear litigation in the same sense that a private entity might, but an agency’s budget/resources don’t generally go up because it is sued, and agencies tend not to seek to create extra work for themselves for the thrill of it.
Moreover, the rationale of non-disclosure due to CC concerns operates at the margin. “There are particular things we shouldn’t disclose in public because the CC might badly misinterpret those statements” is one thing; “there is nothing else useful we can disclose because all such statements pose an unacceptable risk of the CC badly misinterpreting any further detail” is another.
I think that what the voting dynamics may suggest would be a bigger problem than the frequency of posts like Mr. Parr’s per se. His lead post got to +24 at one point (and stayed there for a while), while the post on which we are commenting sits at −12 (despite my +9 strong upvote). If I were in a group for which people were advocating for sterilization, and had good reason to think a significant fraction of the community supported that view, it would be cold comfort that the posts advocating for my sterilization only came by every few months!