You touched on something here that I am coming to see as the key issue: whether there should be a justice system within the EA/Rationality community and whether Lightcone can self-appoint into the role of community police. In conversations with people from Lightcone about the NL posts, I was told that it is wrong to try to guard your reputation, because that information belongs to the community to evaluate. US law on reputation holds that you do have a right to protect yourself from lies and misrepresentation. Emerson talking about suing for libel (his right) was seen as defection from the norms this Lightcone employee thinks should apply to the whole EA/rationality community. When did Emerson opt into following these norms, or into being judged by them? Did any of us? The Lightcone employees also did not like that Kat made a veiled threat to either Chloe or Alice (I can't remember which) that NL could ruin her reputation in EA if she kept saying bad things about them. They saw that as bad not just because it was a threat but because it served to hide information from the community. From what I understood, that Lightcone employee thought it would have been okay for Kat to talk shit about the former employee; it was trying to keep the employee from talking that was wrong. I see private bargains not to talk shit about each other as a matter for the people involved, not necessarily a wrong done to the rest of the community. Again, who else knew that these were the rules for being in this community?
In general, I am not in favor of community justice just like I'm not in favor of campus justice. If someone commits a crime, it should be reported to the police. If individuals want to give a company a bad review, they can do so publicly online or privately to whomever they want. If EAs/rationalists want to belong to a community that polices itself, there should be an explicit agreement to join that community and accept its expectations, and the enforcers should somehow be accountable to the enforced.
You touched on something here that I am coming to see as the key issue: whether there should be a justice system within the EA/Rationality community and whether Lightcone can self-appoint into the role of community police.
Pretty much every community has norms and means of enforcing those norms (“social control” to the sociologists). Those means may be more or less formal, but I don’t think communities are very viable without some means of norm enforcement. I think “justice system” implies something significantly different than what has happened here: e.g., the US justice system can throw me in a dungeon and take away all my money. To use a private example, if I were Catholic, the Catholic justice system could excommunicate me, defrock me as a priest, etc. A campus justice system can expel or fire me.
What happened here feels more like gossip on steroids. Lightcone said bad things about Nonlinear, which had the effect of decreasing community opinion of Nonlinear. That might in turn have concrete adverse effects on Nonlinear. But as far as I know: Lightcone did not, and could not, directly impose consequences on Nonlinear unmediated by the actions of the community.
Likewise, I don't think "community police" is right, at least in the frame of modern Western societies. The police and prosecutors (collectively "police") have the exclusive ability to charge people with crimes. They have the exclusive ability to use certain investigative tools, like search warrants. Although everyone has the ability to investigate possible wrongdoing in some capacity (e.g., journalists), the police are generally recognized as having a preeminent role. For example, if there is a police investigation into certain conduct, other processes, such as investigations by private organizations or civil litigants, are usually expected to step aside to avoid disrupting the police investigation.
Rather, I think Lightcone’s role here is more akin to the prosecutorial role in classical Athens. Any citizen could bring a criminal prosecution. In fact, there were no public prosecutors. There was a preliminary stage, but then the matter was tried before a large jury of citizens (e.g., 500 for the most famous trial, that of Socrates). It’s not clear to me that Lightcone has claimed any role in norm enforcement that is superior to the role they would believe appropriate for any community member. True, it championed the cause of A and C, but that was a result of a voluntary decision among A, C, and Lightcone. Similar things happened in classical Athens, and Lightcone hasn’t appointed itself sole champion of people with grievances. What we are seeing on the Forum is vaguely like the deliberations of the citizen jury.
I'm not seeing a clear alternative to "private" prosecution of norm violations in such a decentralized community. Who is in a place to be the public prosecutor? One could argue for CHSP, but it is merely another private actor with no real democratic legitimacy and some serious conflicts of interest (e.g., due to being part of CEA, due to so much of CEA's funding coming from Open Phil). I do not like the alternative of there being no norm enforcement except what is available through the legal system.
Although I can envision alternatives to adjudication by the whole community, I don’t think we can criticize the abstract idea of bringing disputes to the whole community for adjudication at this time. In such a decentralized structure, sanctions for norm violations are imposed by community actors in their individual capacities. In other words, everyone who hears of Lightcone’s charges has to decide for themselves whether they will change the way they interact with Nonlinear (and/or Lightcone) as a result. Likewise, there is no adjudicatory authority who would publicly warn everyone else of the risks of associating with an organization that has committed serious norm violations. Whatever its flaws, public adjudication seems to be the only real option at present for a certain class of matters.
Emerson talking about suing for libel (his right) was seen as defection from the norms this Lightcone employee thinks should apply to the whole EA/rationality community. When did Emerson opt into following these norms, or into being judged by them? Did any of us?
Fair enough, but the flipside is that Lightcone didn’t opt in to following your proposed norm of not criticizing people for threatening to file defamation suits. Lightcone criticizing Emerson for his speech is Lightcone’s right. It is your right (and mine, and everyone else’s) to decide not to associate with Lightcone, Nonlinear, both, or neither based on your assessment of their various actions.
What you're missing is that Lightcone is not just another citizen. They control a lot of money and influence. If Ben and Oli were just regular citizens, these criticisms wouldn't carry undue weight. If Alice and Chloe had published their experiences themselves, I think people would have interpreted them more in proportion (and they would have been exposed to way more risk), which would have been a lot closer to the system you're talking about.
It is your right (and mine, and everyone else’s) to decide not to associate with Lightcone, Nonlinear, both, or neither based on your assessment of their various actions.
I don't know you, but it sounds like you don't live on EA/Rat grants. If you did, you would know it's way more advantageous to side with Lightcone. Many would feel they could not afford not to. (Full disclosure: I have a Lightspeed grant, and obviously I feel okay criticizing Lightcone, but I might hesitate more if they were my only funding source.)
This is an excellent comment.
Concretely, I sometimes hear organization leaders say that they choose to have their organization not be “EA” because doing so opens them to criticism from random people on the EA Forum, and this doesn’t occur if they just describe themselves as working on “alternative proteins” or whatever.
(Although in this particular case, it's not clear to me that Ben Pace wouldn't have chosen to investigate Nonlinear if they didn't self-describe as "EA". It seems like his investigation was triggered by them using Lightcone.)
I feel like we should also be discussing FTX here. My model of the Lightcone folks is something like:
They kinda knew SBF was sketchy.
They didn’t do anything because of diffusion of responsibility (and maybe also fear of reputation warring).
FTX fraud was uncovered.
They resolved to not let diffusion of responsibility/fear of reputation warring stop them from sharing sketchiness info in the future.
If you grant that the Community Health Team is too weak to police the community (they didn’t catch SBF), and also that a stronger institution may never emerge (the FTX incident was insufficient to trigger the creation of a stronger institution, so it’s hard to imagine what event would be sufficient), there’s the question of what “stopgap norms” to have in place until a stronger institution hypothetically emerges.
Even if you think Lightcone misfired here: if you add FTX to your dataset too, then the "see something? say something!" norm starts looking better overall.
With regard to explicit agreements: One could also argue from the other direction. No one in EA explicitly agreed to safeguard the reputation of other EAs. You say: “If individuals want to give a company a bad review, they can do so publicly online or privately to whomever they want.” Do the ethics of “giving Nonlinear a bad review” change depending on whether the person writing the bad review is a person in the EA community or outside of it? Depending on whether the bad review is written on the EA Forum vs some other website?
Suppose someone raised their hand and offered to work as an investigative journalist funded by and for the EA community. It seems fairly absurd to tell e.g. an investigative journalist from ProPublica that they’re only allowed to cover subjects who explicitly agreed to be covered. Why would such a hypothetical EA-funded investigative journalist be any different?
The best argument I can think of against such an EA investigative journalist is that it seems unfair to pick on people who are putting so much time and money towards doing good. However, insofar as EAs involve themselves in public issues, public scrutiny will often be warranted. I think the best policy would be: the journalist’s job is to cover people both inside and outside the EA community, who are working in areas of public and EA interest. They aspire to neutrality in their coverage, so the valence of their stories isn’t affected by a person’s EA affiliation.
We should also discuss what "stopgap norms" to have in place until something actually happens, because if FTX is any guide, nothing will ever happen. (Perhaps the simplest stopgap norm is: if Ben Pace is concerned about Nonlinear, he should hire a pro investigative journalist on the spot to look into it. This looks like a straightforward arbitrage anyway, since Ben says he values his time at $800K/year.)
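To spell out that arbitrage arithmetic, here is a rough back-of-the-envelope sketch. Every number besides Ben's stated $800K/year figure is a made-up placeholder (the hours worked, the investigation effort, and the journalist's fee are all assumptions, not quotes; and as a reply below points out, $800K/year is what Ben said he'd charge for this kind of work, not necessarily his general time value):

```python
# Back-of-the-envelope: is hiring a journalist cheaper than doing it in-house?
# Every number except Ben's stated $800K/year figure is a made-up placeholder.
ben_rate_per_year = 800_000
hours_per_year = 2_000                      # ~40 hrs/week * 50 weeks
ben_rate_per_hour = ben_rate_per_year / hours_per_year   # $400/hour

investigation_hours = 300                   # hypothetical effort for an investigation
in_house_cost = investigation_hours * ben_rate_per_hour  # $120,000

journalist_fee = 30_000                     # hypothetical freelance project fee
print(f"In-house: ${in_house_cost:,.0f} vs. journalist: ${journalist_fee:,.0f}")
# With these placeholders, outsourcing wins by a factor of four.
```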
I have very little inside perspective on SBF, but my general take on FTX is that there was not enough shady info known outside of the org to stop the fraud. (What's the mechanism? Unless you knew about the fraud, I don't know how just saying what you knew could have caused him to change his ways or lose control of his company.) It's possible EA/rationality might have relied less on SBF if more were known, but you have to consider the harm of a norm of sharing morally-loaded rumors as well.
The risk of a witch hunt environment seems worse to me than the value of giving people tidbits of info that a perfect Bayesian could update on in the correct proportion, but which will have negative higher-order effects on any real community that hears them.
Habryka seems to think there was significant underreaction to shady info: https://forum.effectivealtruism.org/posts/b83Zkz4amoaQC5Hpd/time-article-discussion-effective-altruist-leaders-were?commentId=nGxkHbrikGeTxrLjZ
I think you have to balance the cost of false negatives against the cost of false positives.
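To make that balance concrete, here is a minimal toy model of the trade-off; the prior, the likelihood ratio, and both costs are made-up illustrative numbers, not estimates of anything:

```python
# Toy model of the "see something? say something!" trade-off.
# All probabilities and costs below are illustrative assumptions, not estimates.
prior = 0.01              # assumed base rate of serious wrongdoing among orgs
likelihood_ratio = 3.0    # assumed: rumors are 3x as likely if wrongdoing is real

# Bayesian update on hearing a rumor (odds form):
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)   # ~0.029 with these numbers

cost_false_positive = 100    # assumed harm of wrongly tarring an innocent org
cost_false_negative = 1000   # assumed harm of letting real wrongdoing continue

# Expected cost of publicly acting on the rumor vs. staying silent:
ec_act = (1 - posterior) * cost_false_positive    # ~97
ec_silent = posterior * cost_false_negative       # ~29
print(f"posterior={posterior:.3f}, act={ec_act:.0f}, silent={ec_silent:.0f}")
# With these numbers staying silent is cheaper (the witch-hunt worry); a
# sufficiently higher likelihood ratio or false-negative cost flips the result.
```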
Asking out of ignorance here, as I was only exposed to the general news version and not EA perspectives about FTX. What difference would it have made if FTX fraud was uncovered before things crashed? Is it really that straightforward to conclude that most of the harm done would have been preventable?
I think the claim is not that fraud would have been uncovered, but rather that rumors about SBF acting deceptively would have been shared. (See e.g. this post as an example of what might have been shared.)
Even if you think Lightcone misfired here: if you add FTX to your dataset too, then the "see something? say something!" norm starts looking better overall.
No, I don’t think it does. You also need to assume that a “see something? say something!” rumor mill would have actually had any benefit for the FTX situation. I’m pretty sure that’s false, and I think it’s pretty plausible it would be harmful.
(1) The fraud wouldn’t have become publicly known under this norm, so I don’t think this actually helps.
(2) I don't think it would be correct for EA to react strongly in response to the rumors about SBF; there are similar rumors or conflicts around a very substantial number of famous people, e.g. Zuckerberg vs. the Winklevoss Twins.
(3) Most importantly, how we get from "see something? say something!" to "the billionaire who is sending money to everybody and has a professional PR firm somehow ends up losing out" is just a gigantic question mark here. To me, the outcome here is that SBF now has a mandate to drive anybody he can dig up or manufacture dirt on out of EA. (I seem to recall that the sources of the rumors about him went to another failed crypto hedge fund that got sued; I can't find a source, but even if that didn't actually happen it would be easy for him to make that happen to Lantern Ventures.) (I expect that the proposed "EA investigative journalist" would have probably been directly paid by SBF in this scenario.)
(1) The fraud wouldn’t have become publicly known under this norm, so I don’t think this actually helps.
If EA disavowed SBF, he wouldn’t have been able to use EA to launder his reputation.
(2) I don't think it would be correct for EA to react strongly in response to the rumors about SBF; there are similar rumors or conflicts around a very substantial number of famous people, e.g. Zuckerberg vs. the Winklevoss Twins.
In this case it would’ve been correct, because the rumors were pointing at something real. We know that with the benefit of hindsight. One has to weigh false positives against false negatives.
I’m not saying rumors alone are enough for a disavowal, I’m saying rumors can be enough to trigger investigation.
(3) Most importantly, how we get from "see something? say something!" to "the billionaire who is sending money to everybody and has a professional PR firm somehow ends up losing out" is just a gigantic question mark here. To me, the outcome here is that SBF now has a mandate to drive anybody he can dig up or manufacture dirt on out of EA. (I seem to recall that the sources of the rumors about him went to another failed crypto hedge fund that got sued; I can't find a source, but even if that didn't actually happen it would be easy for him to make that happen to Lantern Ventures.) (Similarly, I expect that such an "EA investigative journalist" would have probably been directly paid by SBF, had one existed.)
I think a war between SBF and EA would have been good for FTX users—the sooner things come to a head, the fewer depositors lose all their assets. It also would’ve been good for EA in the long run, since it would be more clear to the public that fraud isn’t what we’re about.
Your point about conflict of interest for investigative journalists is a good one. Maybe we should fund them anonymously so they don’t know which side their bread is buttered on. Maybe the ideal person is a freelancer who’s confident they can find other gigs if their relationship with EA breaks down.
I think a war between SBF and EA would have been good for FTX users
To be clear, what I'm saying is that SBF would just flat out win, and really easily too; I wouldn't expect a war. The people who had criticized him would be driven out of EA; I wouldn't expect EA as a whole to end up fighting SBF; I would expect SBF would probably end up with more control over EA than he had in real life, because he'd be able to purge his critics on various grounds.
Your point about conflict of interest for investigative journalists is a good one. Maybe we should fund them anonymously so they don’t know which side their bread is buttered on.
I don’t think that’s enough; you’d need to not only fund some investigators anonymously, you’d also need to (a) have good control over selecting the investigators, and (b) ban anybody from paying or influencing investigators non-anonymously, which seems unenforceable. (Also, in real life, I think the investigators would eventually have just assumed that they were being paid by SBF or by Dustin Moskovitz.)
To be clear, what I'm saying is that SBF would just flat out win, and really easily too; I wouldn't expect a war. The people who had criticized him would be driven out of EA; I wouldn't expect EA as a whole to end up fighting SBF; I would expect SBF would probably end up with more control over EA than he had in real life, because he'd be able to purge his critics on various grounds.
What would it take for EA to become the kind of movement where SBF would’ve lost?
I don’t think that’s enough; you’d need to not only fund some investigators anonymously, you’d also need to (a) have good control over selecting the investigators, and (b) ban anybody from paying or influencing investigators non-anonymously, which seems unenforceable. (Also, in real life, I think the investigators would eventually have just assumed that they were being paid by SBF or by Dustin Moskovitz.)
I agree that the ideal proposal would have answers here. However, this is also starting to sound like a proof that there’s no such thing as a clean judicial system, quality investigative journalism, honest scientific research into commercial products like drugs, etc. Remember, it’s looking like SBF is going to rot in jail despite all of the money he gave to politicians. The US judicial system is far from perfect, but let’s not let the perfect be the enemy of the good.
If EA just isn’t capable of trustworthy institutions for some reason, maybe there’s some clever way to outsource to an entity with a good track record? Denmark, Finland, and Norway seem to do quite well in international rankings based on a quick Google: 1, 2. Perhaps OpenAI should’ve incorporated in Denmark?
What would it take for EA to become the kind of movement where SBF would’ve lost?
I sorta feel like this is barking up the wrong tree, because: (a) the information that SBF was committing fraud was private and I cannot think of a realistic scenario where it would have become public, and (b) even if widely spread, the public information wouldn’t have been sufficient.
Before FTX’s fall, I’d remarked to several people that EA’s association with crypto (compare e.g. Ben Delo) was almost certainly bad for us, as it’s overrun with scams and fraud. At the time, I’d been thinking non-FTX scams affecting FTX or its customers, not FTX itself being fraudulent; but I do feel like the right way to prevent all this would have been to refuse any association between EA and crypto.
However, this is also starting to sound like a proof that there’s no such thing as a clean judicial system, quality investigative journalism, honest scientific research into commercial products like drugs, etc.
Good point! I’m probably being overly skeptical here, on reflection.
I think @chinscratch may have meant: What would it take for EA to become the kind of movement where SBF would’ve lost in his hypothetical efforts to squelch discussion of his general shadiness, and run those folks out of EA?
EA couldn’t have detected or stopped the fraud in my opinion, but more awareness of shady behavior could have caused people to distance themselves from SBF, not make major decisions in reliance on FTX cash, etc.
This looks like a straightforward arbitrage anyway, since Ben says he values his time at $800K/year
nit: I don't know what Ben values his time at, though my guess is it's generally not $800k/yr. Ben just said that he would consider doing this kind of work for $800k/year. This kind of work is really quite stressful, so it likely comes at a premium compared to other kinds of work, and might be more expensive than how much Ben otherwise values his time; I am confident it would be for the great majority of people.
My guess is most people would charge substantially more than they usually would to take on a job that is dangerous, that they really don't enjoy, or that involves a lot of pain and stress.
(Separately, I don't currently know of investigative journalists you can hire this way. Hiring an investigative journalist for a bunch of EA stuff was one of the primary things I was arguing for at the most recent EA Coordination Forum. I think it's a great idea, but it's not a great stopgap norm because it's genuinely quite hard to hire for, or at least I don't feel super capable of doing it. Financially, I would be willing to contribute quite a lot of money from a mixture of Lightcone, grantmaking, and personal funds.)
Great journalists are getting laid off all the time these days. You could find any number of professional and highly accomplished journalists for a tiny fraction of $800k per year.
If you have any references for good ones, please send them to me! I think this kind of job is quite hard and many (my guess is most) journalists would not live up to a standard that I think would be acceptable in this kind of job, but I do think there are some, and I would love to talk to them about this.
I don't have personal references, but a ton of great journalists have gotten laid off in 2023, and they never were paid that much in the first place (not being TV or celebrity journalists).
https://www.poynter.org/business-work/2023/buzzfeed-news-closed-180-staffers-laid-off/
https://www.sfgate.com/tech/article/wired-layoffs-conde-nast-magazine-18550381.php
https://www.washingtonpost.com/style/media/2023/10/10/washington-post-staff-buyouts/
whether there should be a justice system within the EA/Rationality community and whether Lightcone can self-appoint into the role of community police
These are two different questions!
EA already has a justice system of sorts: the CEA Community Health Team. Ben chose to run his own investigation because he thought the CHT was ineffective. The second question should instead be whether Lightcone can self-appoint as a replacement for the CHT.
The fact that somebody thought the CHT was ineffective and tried to replace it, then immediately faceplanted, makes me more confident in the actually existing CHT. (In particular, if Alice did indeed lie to Ben, it’s then pretty likely that she said she didn’t trust the CHT/didn’t want info shared with them because they would fact-check her claims.)
But the CEA CHT doesn't cancel people. They just answer questions about people in the most general way possible if you ask, and maybe ban them from CEA-sponsored programs and events like EAGs. It's no coincidence that Julia Wise, a social worker by training, set it up.
(Anyone who knows more, please feel free to correct/elaborate on what the CHT does.)