I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
Thanks to GiveWell for sharing this!
It’s worth emphasizing that this analysis estimates StrongMinds at about 2.3X as effective as GiveDirectly-type programs, which is itself a pretty high bar, and plausibly up to ~8X as effective (or as low as ~0.5X). If we take GD as the bar for a program being one of the most effective in the Global Health space, this conclusion suggests that StrongMinds is very likely to be a strong program (no pun intended), even if it isn’t the single best use of marginal funding. I know that’s obvious from reading the full post, but I think it bears some emphasis that we’re talking about donor choice among a variety of programs that we have reason to believe are rather effective.
I think it would be problematic if a society heaped full adoration on risk-takers when their risks worked out, but doled out negative social consequences (which I’ll call “shame” to track your comment) based only on ex ante expected-value analysis when things went awry. That would over-incentivize risk-taking.
To maintain proper incentives, one could argue that society should map the amount of public shame/adoration to the expected value of the decision(s) made in cases like this, whether the risk works out or not. However, it would be both difficult and burdensome to figure out all the decisions someone made, assign an EV to each, and then sum to determine how much public shame or adoration the person should get.
By assigning shame or adoration primarily based on the observed outcome, society administers the shame/adoration incentives in a way that makes the EV of public shame/adoration at least somewhat related to the EV of the decision(s) made. Unfortunately, that approach means that people whose risks don’t pan out often end up with shame that may not be morally justified.
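To make that intuition concrete (a stylized sketch I’m adding for illustration, not a rigorous model): suppose a decision turns out well with probability p, and society assigns adoration A on a good outcome and shame S on a bad one, based purely on the outcome. The expected social consequence is then

$$\mathbb{E}[\text{consequence}] = pA - (1-p)S,$$

which improves as p rises. So outcome-based judgment at least roughly tracks how good the gamble was ex ante, even though any particular unlucky risk-taker may receive more shame than they morally deserve.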
I characterized the lawsuit as a fishing expedition because I saw no specific evidence in the complaint about what the VC firms actually knew—only assumptions based on rather general public statements from the VCs. And the complaints allege—and I think probably have to allege—actual knowledge of the fraudulent scheme against the depositors. The reason is that, as a general rule, the plaintiff has to establish that the defendant owed them a duty to do or refrain from doing something before negligence liability will attach.
Of course, you have to file the lawsuit in order to potentially get to discovery and start subpoenaing documents and deposing witnesses. It’s not an unreasonable fishing expedition to undertake, but I think the narrative that the VCs were sloppy, rushed, or underinvested on their due diligence is much more likely than the complaint’s theory that they knew about the depositor fraud and actively worked to conceal it until FTX did an IPO and they unloaded their shares.
(I certainly do not think anyone in EA knew about the fraudulent scheme against depositors either.)
In general, standard corporate audits aren’t intended to be intelligible to consumers; they are aimed at investors and regulators. It’s shocking that FTX’s regulator in the Bahamas apparently did not require a clean audit opinion addressing internal controls, and maybe no US regulator required it for FTX US either.
At present, my #2 on who to blame (after FTX insiders in the know) is the regulators. It’s plausible the auditors did what they were hired to do and issued opinion letters making it clear what their scope of work was in ways that were legible to their intended audience. I can’t find any plausible excuse for the regulators.
In a universe where EA leaders had a sufficiently high index of suspicion, they could have at least started publicly distancing themselves from SBF and done one or both of two things: (1) stop working with FTXFF or encouraging people to apply, and/or (2) obtain “insurance” against fraudulent collapse by enlisting some megadonors who privately agreed in advance to immediately commit to repay all monies paid out to EA-aligned grantees if fraud ended up being discovered that inflicted relevant losses.
Public whistleblowing would likely have been terrible . . . if the evidence were strong enough (which I really doubt it was), then it should have been communicated to the US Department of Justice or another appropriate government agency.
The assumption that this 1⁄3 would come from outside the community seems to rest on a premise that there are no lawyers/accountants/governance experts/etc. in the community. It would be more accurate, I think, to say that the 1⁄3 would come from outside what Jack called “high status core EAs.”
Thanks for sharing this. I skimmed the relevant portions of the underlying lawsuit referenced in the press release, and my overall impression is “fishing expedition.” (Maybe more than that against the banks . . . but those banks just went bust and I doubt they will have any money to pay a judgment, so I didn’t bother skimming that part.) Not that there aren’t reasonable grounds for a class-action law firm to engage in a fishing expedition, but they won’t have any real evidence until they (possibly) survive motions to dismiss and get to discovery.
Any competent outside firm would gather input from stakeholders before releasing a survey. But I hear the broader concern, and note that some sort of internal-external hybrid is possible. The minimal level of outside involvement, to me, would have the outside organization serve as a data guardian, data pre-processor, and auditor of sorts. This is related to the two reasons I think outside involvement is important: external credibility and respondent assurance.
As far as external credibility goes, I think media reports like this have the capacity to do significant harm to EA’s objectives. Longtermist EA remains, on the whole, more talent-constrained and influence-constrained than funding-constrained. The adverse effect on talent joining EA could be considerable. Social influence is underrated; for example, technically solving AI safety might not actually accomplish much without the ability to socially pressure corporations to adopt effective (but profit-reducing) safety methods or convince governments to compel them to do so.
When the next article comes out down the road, here’s what I think EA would be best served by being able to say if possible:
(A) According to a study overseen by a respected independent investigator, the EA community’s rate of sexual misconduct is at most no greater than the base rate.
(B) We have best-in-class systems in place for preventing sexual misconduct and supporting survivors, designed in connection with outside experts. We recognize that sexual misconduct does occur, and we have robust systems for responding to reports and taking the steps we can to protect the community. There is independent oversight over the response system.
(C) Unfortunately, there isn’t that much we can do about problematic individuals who run in EA-adjacent circles but are unaffiliated with institutional EA.
(A) isn’t externally credible without some independent organization vouching for the analysis in some fashion. In my view, (B) requires at least some degree of external oversight to be externally credible after the Owen situation, but that’s another story. Interestingly, I think a lot of the potential responses are appropriate either as defensive measures under the “this is overblown reporting by hostile media outlets” hypothesis or under the “there is a significant problem here” hypothesis. I’d like to see at least funding and policy commitments on some of those initiatives in the near term, which would reduce the time pressure on other initiatives for which there is a good chance that further data gathering would substantially change the desirability, scope, layout, etc.
I think one has to balance the goal of external credibility against other goals. But moving the research to (say) RP as opposed to CEA wouldn’t move the external-credibility needle in any appreciable fashion.
The other element here is respondent assurance. Some respondents, especially those no longer associated with EA, may be more comfortable giving responses if the initial data collection and any necessary de-identification are done by an outside organization. (It’s plausible to me that the combination of responses in a raw survey response could be uniquely identifying.)
Ideally, you would want to maximize the number of survivors who would be willing to confidentially name the person who committed misconduct. This would allow the outside organization to do a few things that would address methodological concerns with the Time article. First, it could identify perpetrators who had committed misconduct against multiple survivors, avoiding the incorrect impression that perpetrators were more numerous than they were. Second, it could use pre-defined criteria to determine whether the perpetrator was actually an EA, again addressing one of the issues with the Time article. Otherwise, you end up with a numerator covering all instances in which someone reports misconduct by someone they identified as an EA . . . but a narrower set of criteria to develop the denominator, leading to an inflated figure. It would likely be legally safer for CEA to turn over its event-ban list to the outside organization under an NDA for very limited purposes than it would be to turn it over to RP. That would help address another criticism of the Time article: that it failed to address CEA’s response to various incidents.
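As a toy illustration of the numerator/denominator mismatch (all numbers below are hypothetical, purely to show the arithmetic):

```python
# Hypothetical numbers, purely to illustrate the mismatch described above.
incidents_broad = 20     # reports naming a perpetrator who is "an EA" in any loose sense
incidents_narrow = 12    # subset where the perpetrator meets the pre-defined EA criteria
community_size = 5000    # denominator built from the narrow membership definition

inflated_rate = incidents_broad / community_size     # 0.40%: broad numerator, narrow denominator
consistent_rate = incidents_narrow / community_size  # 0.24%: numerator and denominator match
print(f"inflated: {inflated_rate:.2%}, consistent: {consistent_rate:.2%}")
```

The point is simply that the numerator and denominator need to be built from the same definition of who counts as an EA; otherwise the resulting rate is biased upward.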
Contingent on budget and maybe early data gathering, I would consider polling men too about things like attitudes associated with rape culture. Surveying or focus-grouping people about deviant beliefs and behaviors (I’m using “deviant” here as sociologists do), not to mention their own harassment or misconduct, is extremely challenging to start with. You need an independent investigator with ironclad promises of confidentiality to have a chance at that kind of research. But then again, it’s been almost 20 years since my somewhat limited graduate training in social science research methods, so I could be wrong on this.
I think Jack’s point was that having some technical expertise reduces the odds of a Bad Situation happening at a general level, not that it would have prevented exposure to the FTX bankruptcy specifically.
If one really does not want technical expertise on the board, a possible alternative is hiring someone with the right background to serve as in-house counsel, corporate secretary, or a similar role—and then listening to that person. Of course, that costs money.
Although most of us display extreme partiality with a large portion of our spending—e.g., I think of what I end up spending to keep my dog happy and well in an urban environment!
I don’t know the acceptable risk level either. I think it is clearly below 49%, and includes at least fraud against bondholders and investors that could reasonably be expected to cause them to lose money from what they paid in.
It’s not so much the status of the company as a fraud-committer that is relevant, but the risk that you are taking and distributing money under circumstances that are too close to conversion (e.g., that the monies were procured by fraud and that the investors ultimately suffer a loss). I can think of two possible safe harbors under which other actors’ acceptance of a certain level of risk makes it OK for a charity to move forward:
In many cases, you could infer a maximum risk of fraud that the bondholders or other lenders were willing to accept from the interest rate minus inflation minus other risk of loss—that will usually reveal that bondholders at least were not factoring in more than a few percent fraud risk (see the illustrative arithmetic below). The risk accepted by equity holders may be greater, but usually bondholders take a haircut in these types of situations—and the marginal dollars you’re spending would counterfactually have gone to them in preference to the equity holders. However, my understanding is that FTX didn’t have traditional bondholders.
If the investors were sophisticated, I think the percentage of fraud risk they accepted at the time of their investment is generally a safe harbor. For FTX, I don’t have any reason to believe this was higher than the single digits; as you said, the base rate is pretty low and I’d expect the public discourse pre-collapse to have been different if it were believed to be significantly higher.
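To illustrate the first safe harbor’s back-of-envelope arithmetic with purely hypothetical numbers (not FTX’s actual figures):

$$\text{implied max fraud risk} \approx \text{yield} - \text{inflation} - \text{other expected loss} = 9\% - 3\% - 4\% = 2\%$$

In other words, if lenders were only being compensated a couple of points above inflation and ordinary business/default risk, they could not have been pricing in more than roughly a 2% annual chance of losing their money to fraud.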
However, those safe harbors don’t work if the charity has access to inside information (that bondholders and equity holders wouldn’t have) and that inside information updates the risk of fraud over the base rates adjusted for information known to the bond/equity holders. In that instance, I don’t think you can ride off of the investor/bondholder acceptance of the risk as low enough.
There is a final wrinkle here—for an entity as unregulated as FTX was, I don’t believe it is plausible to have a relatively high risk of investor fraud and a sufficiently low risk of depositor fraud. I don’t think people at high risk of cheating their investors can be deemed safe enough to take care of depositors. So in this case there is a risk of investor fraud that is per se unacceptable, and a risk of investor fraud that implies an unacceptable risk of depositor fraud. The acceptable risk of investor fraud is the lower of the two.
Exception: If you can buy insurance to ensure that no one is worse off because of your activity, there may be no maximum acceptable risk. Maybe that was the appropriate response under these circumstances—EA buys insurance against the risk of fraud in the amount of the donations, and returns that to the injured parties if there was fraud at the time of any donation which is discovered within a six-year period (the maximum statute of limitations for fraudulent conveyance in any U.S. state to my knowledge). If you can’t find someone to insure you against those losses at an acceptable rate . . . you may have just found your answer as to whether the risk is acceptable.
Agree that it wouldn’t work for every event. I could see it working for someone with a pattern of coming to shorter events—asking someone who has become a regular attendee at events for a certificate would be appropriate. Although I suggested an hour-long class because I like the idea of everyone regularly in the community receiving training, the training for less-involved people could be 10-15 minutes.
I think the increased visibility of the process (compared to CH-event organizer checks) could be a feature. If you hand over a green cert, you are subtly reminded of the advantages of being able to produce one. If you hand over a yellow one, you are made aware that the organizers are aware of your yellow status and will likely be keeping a closer eye on you . . . which is a good thing, I think. Asking to see a certificate before dating or having sex with another EA shouldn’t be an affirmatively encouraged use case, but some people might choose to ask—and that would be 100% up to the person. But that might be an additional incentive for some people to keep to green-cert behavior.
Although no one should take this as legal advice, one of the possible merits of a certificate-based approach is that the lack of merit in a defamation suit should be clear very early in the litigation. The plaintiff will realize quickly that they aren’t going to be able to come up with any evidence on a foundational element of the claim (a communication from the defendant to a third party about the plaintiff). With a more active check-in, you’re going to have to concede that element and go into discovery on whether there was communication that included (or implied) a false statement of fact. Discovery is generally the most expensive and painful part of litigation—and even better, a would-be plaintiff who can figure out that there was no communication will probably decide never to sue at all.
Yes, the Corolla comment looks less innocent if the speaker has significant reasons to believe Sam was ethically shady. If you know someone is ethically shady but decide to work with them anyway, you need to be extra careful not to make statements that a reasonable person could read as expressing a belief in that person’s good ethics.
Yes. The definition of “unauthorized practice of law” is murkier and depends more on context than one might think. For instance, I personally used—and recommend for most people without complex needs—the Nolo/Quicken WillMaker will-writing software.
On a more serious note, if there were 25 types of small legal harm commonly caused by AI chatbots, writing 25 books on “How to Sue a Chatbot Company For Harm X, Including Sample Pleadings” is probably not going to constitute unauthorized practice.
(not legal advice, not researched)
It seems that there would be partial workarounds here, at least in theory. Suppose that CEA or another organization offered a one-hour class called Sexual Misconduct Training for EAs that generated a green, digitally signed certificate of attendance “valid” for a year. The organization does not allow individuals who it has determined to have committed moderate-severity misconduct within the past few years to attend the one-hour class. They may, however, attend a four-hour Intensive Training class, which generates a yellow digitally signed certificate with a validity of six months. Those known to have committed serious misconduct may only attend a class that does not generate a certificate at all.
A community organizer, party host, etc. could ask people for their certificates and take whatever action they deem appropriate if a person submits a yellow certificate or does not submit one at all. At a minimum, they would know to keep a close eye on the person, ask for references from prior EA involvement, etc. In this scenario, Organization hasn’t spoken about anyone to a third party at all! (Classically, defamation at least in the US requires a false statement purporting to be fact that is published or communicated to a third person.) It has, at most, exercised its right not to speak about the person, which is generally rather protected in the US. And if the person voluntarily shows a third party the certificate, that’s consent on their part.
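For concreteness, here is a minimal sketch (my own illustration, not anything CEA has built or proposed) of what issuing and checking such a certificate could look like. It uses a shared-secret HMAC for brevity; a real deployment would presumably use asymmetric signatures so organizers only need a public key and the issuer’s signing key never leaves the issuer:

```python
# Illustrative sketch of a hypothetical certificate system (not an existing CEA tool).
import hashlib
import hmac
import json
import time

ISSUER_SECRET = b"issuer-signing-key"  # hypothetical; a real system would use asymmetric keys


def issue_certificate(person_id: str, tier: str, valid_days: int) -> dict:
    """Issuer side: sign a small record ('green' = standard class, 'yellow' = intensive class)."""
    record = {
        "person_id": person_id,
        "tier": tier,
        "expires": int(time.time()) + valid_days * 86400,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return record


def verify_certificate(cert: dict) -> bool:
    """Organizer side: confirm the signature is genuine and the certificate hasn't expired."""
    unsigned = {k: v for k, v in cert.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"]) and cert["expires"] > time.time()


# A green certificate valid for one year, checked later by a party host.
cert = issue_certificate("attendee-123", tier="green", valid_days=365)
print(verify_certificate(cert))  # True
```

The key property for the defamation point is that the issuer only ever hands the signed record to the attendee; anything a third party learns comes from the attendee choosing to present it.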
The greater legal risk might be someone suing if a green-certificate holder commits misconduct . . . but I think that would be a tough sell. First, no one could plausibly claim reliance on the certificate for more than the proposition that Organization had not determined the individual ineligible to take the relevant class at the time the decision was made. To have a case, a plaintiff would have to show that Organization had received a report about the certificate holder, was at least negligent in issuing the certificate in light of that report, and owed them a legal duty not to issue a certificate under those circumstances. As long as Organization is clear about the limits of the certificate process, I think most courts and juries would be hesitant to issue a decision that strongly disincentivizes risk-reduction techniques deployed in good faith and at least moderate effort.
Saw this morning that Eugene Volokh, a well-respected libertarian-leaning law professor who specializes in U.S. free-speech law, and others are working on a law review article about libel lawsuits against developers of LLMs. The post below explains how he asked GPT-4 about someone, got false information claiming that the person had pled guilty to a crime, and got fake quotes attributed to major media outlets:
The mods can’t realistically call different strike zones based on whether or not “expected value of the stuff [a poster says] remains high.” Not only does that make them look non-impartial, it actually is non-impartial.
Plus, warnings and bans are the primary methods by which the mods give substance to the floor of what forum norms require. That educative function requires a fairly consistent floor. If a comment doesn’t draw a warning, it’s at least a weak signal that the comment doesn’t cross the line.
I do think a history of positive contributions is relevant to the sanction.
The linked article says—persuasively, in my view—that Section 230 generally doesn’t shield companies like OpenAI for what their chatbots say. But that merely takes away a shield; you still need a sword (a theory of liability) on top of that.
My guess is that most US courts will rely significantly on analogies in the absence of legislative action. Some of those are not super-friendly to litigation. Arguably the broadest analogy is to buggy software with security holes that can be exploited and cause damage; I don’t think plaintiffs have had much success with those sorts of lawsuits. If there is an intervening human actor, that also can make causation more difficult to establish. Obviously that is all at the 100,000-foot level and off the cuff! To the extent the harmed person is a user of the AI, they may have signed an agreement that limits their ability to sue (by waiving certain claims, limiting potential damages, or imposing onerous procedural requirements that mandate private arbitration and preclude class actions).
There are some activities at common law that are seen as superhazardous and which impose strict liability on the entity conducting them—using explosives is the usual example. But I don’t understand there to be a plausible case that using AI in an application right now is similarly superhazardous in a way that would justify extending those precedents to AI harm.
I think your last paragraph hits on a real risk here: litigation response is driven by fear of damages, and will drive the AI companies’ interest in what they call “safety” in the direction of wherever their damages exposure is greatest in the aggregate and/or poses the largest existential litigation risk to their company.
Fair points. I’m not planning to move my giving from GiveWell All Grants to either SM or GD, and don’t mean to suggest anyone else do so either. Nor do I want to suggest we should promote all organizations over an arbitrary bar without giving potential donors any idea of how we would rank organizations within the class that clears that bar, despite meaningful differences among them.
I mainly wrote the comment because I think the temperature in other threads about SM has occasionally gotten a few degrees warmer than is optimally conducive to what we’re trying to do here. So it was an attempt at a small preventive ice cube.
I think you’re right that we probably mean different things by “one of.” 5-10X differences are big and meaningful, but I don’t think that insight is inconsistent with the idea that a point estimate somewhere around “above GiveDirectly” is roughly the threshold at which an organization should be on our radar as potentially worth recommending given the right circumstances.
One potential definition for the top class would be whether a person could reasonably conclude on the evidence that it was the most effective based on moral weights or assumptions that seem plausible. Here, it’s totally plausible to me that a donor’s own moral weights might value reducing suffering from depression relatively more than GiveWell’s analysis implies, and saving lives relatively less. GiveWell’s model here makes some untestable philosophical assumptions that seem relatively favorable to AMF: “deprivationist framework and assuming a ‘neutral point’ of 0.5 life satisfaction points.” As HLI’s analysis suggests at Section 3.4 of this study, the effectiveness of AMF under a WELLBY/subjective well-being model is significantly dependent on these assumptions.
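To give a stylized sense of why the neutral point matters (my own illustrative arithmetic with made-up but plausible-order numbers, not GiveWell’s or HLI’s figures): under a deprivationist framework, the wellbeing value of averting a death is roughly the beneficiary’s life-satisfaction level above the neutral point, multiplied by the life-years gained:

$$(4.5 - 0.5) \times 37 \approx 148 \text{ WELLBYs} \quad \text{vs.} \quad (4.5 - 2.0) \times 37 \approx 92 \text{ WELLBYs}$$

So simply moving the assumed neutral point from 0.5 to 2 on a 0-10 life-satisfaction scale cuts the modeled value of a life saved by over a third, while leaving the value of relieving depression untouched.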
For a donor with significantly different assumptions and/or moral weights, adjusting for those could put SM over AMF even accepting the rest of GiveWell’s analysis. More moderate philosophical differences could put one in a place where more optimistic empirical assumptions plus an expectation that SM will continue reducing cost-per-participant and/or effectively refine its approach as it scales up could lead to the same conclusion.
Another potential definition for the top class would be whether one would feel more than comfortable recommending it to a potential donor for whom there are specific reasons to choose an approach similar to the organization’s. I think GiveWell’s analysis suggests the answer is yes for reasons similar to the above. If you’ve got a potential donor who just isn’t that enthusiastic about saving lives (perhaps due to emphasizing a more epicurean moral weighting) but is motivated to give to reducing human suffering, SM is a valuable organization to have in one’s talking points (and may well be a better pitch than any of the GiveWell top charities under those circumstances).