I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
Quote: much more media reporting on the EA-FTX association resulting in significantly greater brand damage?
Most likely concern in my eyes.
The media tends to report on lawsuits when they are filed, at which time they merely contain unsubstantiated allegations and the defendant is less likely to comment. It’s unlikely that the media would report on the dismissal of a suit, especially if it was for reasons seen as somewhat technical rather than as a clear vindication of the EA individual/organization.
Moreover, it seems pretty likely to me that EVF or other EA-affiliated entities have information they would be embarrassed to have come out in discovery. This is not based on any belief about misconduct, but on the base rate: organizations that have had a bad miss or messup tend to have related information they would be embarrassed about (and I would characterize FTX as a bad miss or messup here, whether or not a liability-creating one).
If a sufficiently motivated plaintiff sued, and came up with a legal theory that survived a motion to dismiss, I think it is fairly likely that embarrassing information would need to be disclosed in discovery. The plaintiff could require various persons and organizations to answer questions, under oath, that they would rather not answer: questions from a hostile examiner motivated to uncover damaging information, not a sympathetic podcaster. While “I don’t remember” is usually an acceptable answer, it can also leave the other side’s evidence uncontested if they have anything on point.
For purposes of the next two sentences, “a sufficient basis to believe” means enough that a court would likely allow a good deal of digging if the matter was related or even adjacent to something that was material for purposes of the specific litigation. There’s a sufficient basis to believe that EA leadership may have had good reasons to believe SBF had committed fraud against Alameda investors.[1] There is a sufficient basis to believe that EA PR people were aware of SBF-related risk and were actively working on the topic.[2] The plaintiff could also expand the scope of discovery as previously-discovered information warranted.
If the case didn’t settle before summary-judgment motions, the juicy bits would be all laid out in the plaintiff’s motion, open to public view.
Prompting the legal system into investigating potential EA involvement in the FTX fraud, costing enormous further staff time despite not finding anything?
This seems rather unlikely. The FTX debtor entity is cooperating with the feds. DOJ has several ex-insiders who are singing like canaries, who have good lawyers, and who know that the more people they help the feds convict, the better things will be for their sentences. If there were reasons for the feds to be looking at potential EA involvement in the FTX fraud, it is almost certain the feds would know that at this point without any help from EA sources. Moreover, the FTX or ex-insider information would likely be enough to get the necessary search warrants, wiretaps, etc.
There is of course also, as Will’s note implies, the distraction/expense/angst/etc. of dealing with litigation, whether or not it ultimately has any merit. That would justify giving some weight to whether a disclosure increases the risk of any lawsuit, independent of any merit or concerns about external adverse effects like publicity. However, in my mind that cuts both ways! I’d affirmatively want to disclose most information that makes would-be plaintiffs less likely to sue me. And if one’s prior is that, conditional on X being untrue, there is a 75% chance I would specifically deny X for litigation-avoidance reasons, then one can update on the fact that X hasn’t been denied.
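To make that update concrete, here is a minimal worked example with illustrative numbers of my own (a 20% prior on X, and the simplifying assumption that a true X would never be specifically denied); the 0.25 is simply the complement of the stipulated 75% denial chance when X is untrue:

$$P(X \mid \text{no denial}) = \frac{P(\text{no denial} \mid X)\,P(X)}{P(\text{no denial} \mid X)\,P(X) + P(\text{no denial} \mid \neg X)\,P(\neg X)} = \frac{1 \times 0.2}{1 \times 0.2 + 0.25 \times 0.8} = 0.5$$

Under those assumptions, silence alone moves a 20% prior on X up to 50%.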
[1]
Although the Time article doesn’t specify exactly what information was shared with EA leadership, it does indicate that an Alameda exile told Time that SBF “didn’t have a distinction between firm capital and trading capital. It was all one pool.” That’s at least a badge of fraud (commingling). The exiles accused SBF of various things, including “‘willful and knowing violations of agreements or obligations, particularly with regards to creditors’—all language that echoes the U.S. criminal code.” The document alleges that SBF was “misreporting numbers” and “failing to update investors on poor performance.” Continuing: “The team ‘didn’t trust Sam to be in investor meetings alone,’ colleagues wrote. ‘Sam will lie, and distort the truth for his own gain,’ the document says.” Lying to investors is pretty much diagnostic of fraud.
[2]
The New Yorker, quoting an unnamed participant on a leadership slack channel: “I guess my point in sharing this is to raise awareness that a) in some circles SBF’s reputation is very bad b) in some circles SBF’s reputation is closely tied to EA, and c) there’s some chance SBF’s reputation gets much, much worse. But I don’t have any data on these (particularly c, I have no idea what types of scenarios are likely), though it seems like a major PR vulnerability. I imagine people working full-time on PR are aware of this and actively working to mitigate it, but it seemed worth passing on if not since many people may not be having these types of interactions.”
Could you say more about that? I suggest that “substantial fraction” may mean something quite different in the context of a bank than here. In the scenario I described, the hypothetical exchange would need to see 80-90% of deposits demanded back in a world where the stocks/bonds had to be sold at a 25-50% loss. It could be higher if the exchange had come up with an opt-in lending program that provided adequate cover for not returning (say) 10-15% of the customers’ funds on demand.
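To spell out the arithmetic behind those figures (a sketch assuming, per the scenario described elsewhere in this thread, that 40% of customer assets were diverted into stocks/bonds, and setting aside the opt-in lending caveat; $\ell$ is the fire-sale loss fraction):

$$\text{coverage} = 0.60 + 0.40\,(1 - \ell), \qquad \ell \in [0.25,\ 0.50] \;\Rightarrow\; \text{coverage} \in [0.80,\ 0.90]$$

So the exchange could still honor roughly 80-90% of deposits after selling at a 25-50% loss, and the failure mode arises only if withdrawal demands exceed that range.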
I’d also suggest that the “simple loss of confidence snowballing” in modern bank runs is often justified based on publicly known (or discernible) information. I don’t think it was a secret that SVB had bought a bunch of long-term Treasuries that sank in value as interest rates increased, and thus that it did not have the asset value to honor 100% of withdrawals. It wasn’t a secret in ~2008 that banks’ ability to honor 100% of withdrawals was based on highly overstated values for mortgage-backed securities.
In contrast, as long as the secret stock/bond purchases remained unknown to outsiders, a massive demand for deposits back would have to occur in the absence of that kind of information. Unlike the traditional banking sector, other places to hold crypto carry risks as well, even self-custody (hacking, hardware failure, forgetting information, etc.). So people aren’t going to withdraw unless, at a minimum, they are convinced that they have a safer place to hold their assets.
Finally, in conducting the cost/benefit analysis, the hypothetical SBF would consider that the potential failure mode only existed in scenarios where 80-90%+ of deposits had been demanded back. Conditional on that having happened, the exchange’s value would likely be largely lost anyway. So the difference in those scenarios would be between ~0 and the negative effects of a smaller-scale fraud. If the hypothetical SBF thought the 80-90%+ scenario was pretty unlikely . . . .
(Again, all of this does not include the risk of the fraud leaking out or being discovered.)
I have very little doubt that any advice given to an individual with significant potential exposure to keep their mouths shut was correct advice as to that individual’s personal interests. I also have very little doubt that anyone who worked for or formally advised FTXFF fits in that category.
To the extent that Nathan is asking about legal advice given to EVF, I don’t think the principle would necessarily hold. Legal advice is going to focus relatively more on the client’s legal risks, and less (if at all) on the traditionally conceived public interest, what is in the interest of the long-term future, etc. I’d say “charitable organizations should act in their own legal self-interest” probably defaults to true, but that it’s a fairly weak presumption. With the possible and partial exception of lawyers who are also insiders, I think lawyers will significantly underweight considerations like the epistemic health of the broader EA community and also be seriously limited in estimating the effect of various scenarios on that consideration.
That being said, I doubt Will is in a particularly good position to evaluate the legal advice given to EVF because he was recused from FTX-related matters due to serious conflicts of interest. If he were a lawyer, he might be in a good position to estimate: he’d have both enough knowledge of the facts and the right professional background to infer things from that knowledge. But he isn’t.
When I looked at past Charity Commission (CC) actions, I didn’t get the impression that they were in the habit of blowing things out of proportion. But of course I didn’t have the full facts of each investigation.
One reason I don’t put much stock in the possibility that the CC may not be a “necessarily trustworthy or fair arbiter” is that it has to act with reasoning transparency because it is accountable to a public process. Its substantive actions (as opposed to issuing warnings) are reviewable in the UK courts, in proceedings where the charity, a party with the right knowledge and incentives, can call it out on dubious findings. The CC may not fear litigation in the same sense that a private entity might, but an agency’s budget/resources don’t generally go up because it is sued, and agencies tend not to create extra work for themselves for the thrill of it.
Moreover, the rationale of non-disclosure due to CC concerns operates at the margin. “There are particular things we shouldn’t disclose in public because the CC might badly misinterpret them” is one thing. “There is nothing else useful we can disclose because any further detail poses an unacceptable risk of the CC badly misinterpreting it” is another.
While this is not expressing an opinion on your broader question, I think the distinction between individual legal exposure and organizational exposure is relevant here. It would be problematic to avoid certain collective costs of FTX by unfairly foisting them off on unconsenting individuals and organizations. As Will alluded to, it is possible that the costs would be borne by other EAs, not the speaker.
That being said, people could be indemnified. So I think it’s plausible to update somewhat toward there being some valid reason to fear severe to massive legal exposure. Or toward information coming out in litigation that is more damaging than the inferences to be drawn from silence. (Without inside knowledge, I find the latter more likely than actual severe liability exposure.)
This would be a good post on which to disallow voting by very young accounts. That’s not a complete solution, but it’s something. I’d also consider disallowing voting on older posts by young accounts for similar reasons.
I expect an increase in malicious actors as AI develops, both because of greater acute conflict with people who have a vested interest in weakening EA, and because AI assistance will lower the barrier to producing plausible malicious content. I think it would take time and effort to develop consensus on community rules related to this kind of content, so I would rather not wait until the problem is acutely upon us.
Quote: (and clearly they calculated incorrectly if they did)
I am less confident that, if an amoral person applied cost-benefit analysis properly here, it would lead to “no fraud” as opposed to “safer amounts of fraud.” The risk of getting busted from less extreme or less risky fraud would seem considerably lower.
Hypothetically, say SBF misused customer funds to buy stocks and bonds, and limited the amount he misused to 40 percent of customer assets. He’d need a catastrophic stock/bond market crash, plus almost all depositors wanting out, to be unable to honor withdrawals. I guess there is still the risk of a leak.
I don’t think we disagree much, if at all, here. I think pointing out that cost-benefit analysis doesn’t necessarily lead to the “no fraud” result underscores the critical importance of side constraints!
What does “involved in” mean? The most potentially plausible version of this compares people peripherally involved in FTX (under a broad definition) to the main players in Nonlinear.
For both of these comments, I want a more explicit sense of what the alternative was.
Not a complete answer, but I would have expected communication and advice for FTXFF grantees to have been different. Given that many well-connected EAs had a low opinion of him, we can imagine that grantees might have been urged to properly set up corporations, not count their chickens before they hatched, properly document everything, and assume a lower-trust environment more generally. Had the base rate of scamminess in crypto not been ignored, you’d expect to have seen stronger and more developed contingency planning (remembering that crypto firms can and do collapse in the wake of scams not of their own doing!), more decisions to build organizational reserves rather than immediately ramping up spending, etc.
Note: This comment is considerably sharper than most of my comments on the Forum. I find that unavoidable given Mr. Parr’s apparent belief that he is being downvoted because his ideas are unpopular and/or optically undesirable, rather than for the merits of his posts.
The evidence available to me does not reasonably support a conclusion that your posts meet the standards I think signify good-faith participation here.
Starting Out with Some Strikes
Your first post on the Forum was, in my mind, rather dismissive of objections to the infamous Bostrom listserv, and suggested we instead criticize whoever brought this information to light (even though there is zero reason to believe they are a member of this community or an adjacent community). That’s not a good way to start signaling good faith.
Much of your prior engagement in comments on the Forum has related to race, genetics, eugenics, and intelligence, although it has started to broaden as of late. That’s not a good way to show that you are not seeking to “inject a discussion about race, genetics, eugenics, and intelligence in EA circles” either.
Single-focus posters are not going to get the same presumption of good faith on topics like this that a more balanced poster might. Maybe you are a balanced EA in other areas, but I can only go by what you have posted here, on your Substack, and (presumably) elsewhere as Ives Parr. I understand why you might prefer a pseudonym, but some of us have a consistent pseudonym under which we post on a variety of topics. So I’m not going to count the pseudonym against you, but I’m going to base my starting point on “Ives Parr” as known to me without assuming more well-rounded contributions elsewhere.
A Surprising Conclusion
As far as the environmental/iodine issues, let me set forth a metaphor to explain one problem in a less ideologically charged context. Let’s suppose I was writing an article on improving life expectancy in developing countries. Someone with a passing knowledge of public health in developing countries, and of the principles of EA, might expect that the proposed solution would be bednets or other anti-infectious-disease technologies. Some might assign a decent probability to better funding for primary care, a pitch for anti-alcohol campaigns, or sodium-reduction work. Almost no one would have standing up quaternary-care cancer facilities in developing countries, using yet-to-be-developed drugs, on their radar. If someone wrote a long post suggesting that was the way, I would suspect they might have recently lost a loved one to cancer or might have some other external reason for reaching that conclusion.
I think that’s a fair analogy of your recommendation here—you’re proposing technology that doesn’t exist and wouldn’t be affordable to the majority of people in the most developed countries in the world if it did. The fact that your chosen conclusion is an at least somewhat speculative, very expensive technology should have struck you as pretty anomalous and thrown up some caution flags. Yours could be the first EA cause area that would justify massive per-person individual expenditures of this sort, but the base rate of that being true seems rather low. And in light of your prior comments, it is a bit suspicious that your chosen intervention is one that is rather adjacent to the confluence of “race, genetics, eugenics, and intelligence in EA circles.”
A Really Concerning Miss in Your Post
Turning to your post itself, the coverage of possible environmental interventions in developing countries in the text (in the latter portions of Part III) strikes me as rather skimpy. You acknowledge that environmental and nutritional factors could play a role, but despite spending 100+ hours on the post, and despite food fortification being at least a second-tier candidate intervention in EA global health for a long time, you don’t seem to have caught the massive effect of cheap iodine supplementation in the original article. None of the citations for the four paragraphs after “The extent to which the failure of interventions in wealthy nations is applicable to developing nations is unclear” seem to be about environmental or nutritional effects or interventions in developing countries.
While I can’t tell whether you didn’t know about iodine or merely chose not to cite any study about nutritional or environmental interventions in developing countries, either way Bob’s reference to a 13-point drop in IQ from iodine deficiency should have significantly updated you that your original analysis had either overlooked or seriously undersold the potential of these interventions. Indeed, much relevant information was in a Wikipedia article you linked on the Flynn effect, which notes possible explanations such as a stimulating environment, nutrition, infectious diseases, and the removal of lead from gasoline [also a moderately well-known EA initiative]. Given that you are someone who has obviously studied intelligence a great deal, I am pretty confident you would know all of this, so it seems implausible that this was a research miss.
On a single Google search (“effects of malnutrition in children on iq”), one of the top results was a study in JAMA Pediatrics describing a 15.3-point drop in IQ from malnutrition that was stable over an eight-year period. This was in Mauritius in the 1970s, which had much lower GDP per capita at the time than now but I believe was still better off in adjusted terms than many places are in 2024. The percentage deemed malnourished was about 22%, so this was not a study of statistically extreme malnutrition. And none of the four measures were described as reflecting iodine deficiency. That was the first result I pulled, as it was in a JAMA journal. A Wikipedia article on “Impact of Health on Intelligence” was also on the front page, which would have clued you into a variety of relevant findings.
This is a really bad miss in my mind, and is really hard for me to square with the post being written by a curious investigator who is following the data and arguments where they lead toward the stated goal of effectively ending poverty through improving intelligence. If readily available data suggest a significant increase in intelligence from extremely to fairly cheap, well-studied environmental interventions like vitamin/mineral supplementation, lead exposure prevention, etc., then I would expect an author on this Forum pitching a much more speculative, controversial, and expensive proposal to openly acknowledge and cite that. As far as I can see, there is not even a nod toward achieving the low-hanging environmental/nutritional fruit in your conclusion and recommendations. This certainly gives the impression that you were pre-committed to “genetic enhancement” rather than a search for effective, achievable solutions to increase intelligence in developing countries and end poverty. Although I do not expect posts to be perfectly balanced, I don’t think the dismissal of environmental interventions here supports a conclusion of good-faith participation in the Forum.
Conclusion
That is not intended as an exhaustive list of reasons I find your posts to be concerning and below the standards I would expect for good-faith participation in the Forum. The heavy reliance on certain sources and authors described in the original post above is not exactly a plus, for instance. The sheer practical implausibility of offering widespread, very expensive medical services in impoverished countries—both from a financial and a cultural standpoint—makes the post come across as a thought experiment (again: one that focuses on certain topics that certain groups would like to discuss for various reasons despite tenuous connections to EA).
Also, this is the EA Forum, not a criminal trial. We tend to think probabilistically here, which is why I said things like it being “difficult to believe that any suggestion . . . is both informed and offered in good faith” (emphasis added). The flipside of that is that posters are not entitled to a trial prior to Forum users choosing to dismiss their posts as not reflecting good-faith participation in the Forum, nor are they entitled to have their entire 42-minute article read before people downvote those posts (cf. your concern about an average read time of five minutes).
For the disagree-voters (I didn’t agreevote either way), perhaps a more neutral way to phrase this might be:
Oxford and/or its philosophy department apparently decided that continuing to be affiliated with FHI wasn’t in its best interests. It seems this may have developed well before the Bostrom situation. Given that, and assuming EA may want to have orgs affiliated with other top universities, what lessons might be learned from this story? To the extent that keeping the university happy might limit the org’s activities, when is accepting that compromise worth it?
Quote: I think it’s kind of weird that the bar is no longer “<0 karma” but “quick and thorough rejection”.
This doesn’t strike me as weird. It is reasonable that people would react strongly to information suggesting that a position enjoys moderate-to-considerable support in the community.
Let’s suppose someone posted content equivalent to the infamous Bostrom listserv message today. I doubt (m)any people of color would walk away feeling comfortable being in this community merely because the post ended up with <0 karma. Information suggesting moderate-to-considerable support in the community would be very alarming to them, and for good reason! They would want to see quick and thorough rejection, at a bare minimum, in order to feel safe here.
I’m not expressing a view that Mr. Parr’s posts were of the same nature as the listserv message containing the slur. Where they are on the continuum from appropriate content to listserv-equivalent is likely a crux for many in this conversation, so my point here is to illustrate that whether you think “<0 karma” is enough likely depends on where you place Mr. Parr’s posts on that continuum.
Among other things, I don’t think that solution scales well.
As the voting history for this post shows, people with these kinds of views may have some voting power at their disposal (whether from allies or from brigaders). So we’d need a significant amount of voting power to quickly downvote this kind of content out of sight. As someone with a powerful strong downvote, I try to keep the standards for deploying it pretty high. To use a legal metaphor, I tend to give a poster a lot of “due process” before strong-downvoting, because a −9 can often contribute to the effect of squelching someone’s voice.
If we rely on voters to downvote content like this, that feels like either asking them to devote their time to careful reading of distasteful stuff they have no interest in, or asking them to actively and reflexively downvote stuff that looks off-base based on a quick scan. As to the first, few if any of us get paid for this. I think the latter is actually worse than an appropriate content ban—it risks burying content that should have been allowed to show on the frontpage for a while.
If we don’t deploy strongvotes on fairly short notice, the content is going to be on the front page for a while, and the problems that @titotal brought up strongly apply.
Finally, I am very skeptical that there would be any actionable, plausibly cost-effective actions for EAs to take even if we accepted much of the argument here (or on other eugenics-and-race topics). That does further reassure me that there is no great loss in expecting those who wish to have those discussions to do so in their own space. The Forum software is open-source; they can run their own server.
It’s likely that no single answer is “the” sole answer. For instance, it’s likely that people believed they could assume that trusted insiders were significantly more ethical than the average person. The insider-trusting bias has bitten any number of organizations and movements (e.g., churches, the Boy Scouts). However, it seems clear from Will’s recent podcast that the downsides of being linked to crypto were appreciated at some level. It would take a lot for me to be convinced that all that $$ wasn’t a major factor.
Which (if any) of titotal’s six numbered points only apply and/or have force if the post’s net karma is positive, as Mr. Parr’s have been at certain points in time?
As of October 30, the post was a week old and solidly negative in karma (-14). I don’t think people were finding the post at this point through the frontpage at that age and karma. There was a big change from that date to November 5 (+23), cause unknown. The other big change was March 13 to 29 (+18 to −16), probably motivated by David’s tweet. My guess is that the 37-point positive jump was also motivated by some sort of off-Forum mention. It’s unclear whether this net change represents authentic evidence of the broader community’s views vs. people inclined to be favorably disposed seeing it off-Forum vs. possible brigading.
But even after going up to +24, I doubt the post re-emerged on the frontpage given that it was about two weeks old at that point. In other words, it’s likely that relatively few people saw it after this point unless they were specifically looking for it or found it incidentally when searching for something else. Therefore, I would not infer much of anything from its having “remained positive for 4 months.”
I do concur that “[t]he voting pattern does not suggest the EA community quickly and thoroughly rejected the post.”
To be fair to size per se, I think the big ones tend to pay more—but that’s not an inherent consequence of size.
Ah yes, I think I changed a setting because I didn’t like that the mod team was flagging stuff as personal blog (during the Bostrom affair, was it?) that I didn’t think met the standard for that treatment. So I guess seeing Devin’s appendices on my frontpage is the price I pay for opting out of mod-directed personal blog designations!
Optics would be great on that one: an EA has insight that there’s a good chance of FTX collapse (based on not-generally-known info/rumors?), and goes out and shorts SamCoins to profit on the collapse! Recall that any FTX collapse would gut the FTT token at least, so there would still be big customer losses.