Jeff—this is a useful perspective, and I agree with some of it, but I think it’s still loading a bit too much guilt onto EA people and organizations for being duped and betrayed by a major donor.
EAs might have put a little bit too much epistemic trust in subject matter experts regarding SBF and FTX—but how can we do otherwise, practically speaking?
In this case, I think there was a tacit, probably largely unconscious trust that if major VCs, investors, politicians, and journalists trusted SBF, then we can probably trust him too. This was not just a matter of large VC firms vetting SBF and giving him their seal of approval through massive investments (flawed and rushed though their vetting may have been).
It’s also a matter of ordinary crypto investors, influencers, and journalists largely (though not uniformly) thinking FTX was OK, and trusting him with billions of dollars of their money, in an industry that is actually quite skeptical a lot of the time. And major politicians, political parties, and PACs who accepted millions in donations trusting that SBF’s reputation would not suffer such a colossal downturn that they would be implicated. And journalists from leading national publications doing their own forms of due diligence and investigative journalism on their interview subjects.
So, we have a collective failure of at least four industries outside EA—venture capital, crypto, political fund-raising, and mainstream journalism—missing most of the alleged post-hoc red flags about SBF. The main difference between EA and those other four industries is that I see us doing a lot of healthy, open-minded, constructive, critical dialogue about what we could have done differently, and I don’t see the other four industries doing much—or any—of that.
Let’s consider an analogous situation in cause-area science rather than donor finance. Suppose EAs read some expert scientific literature about a potential cause area—whether global catastrophic biological risks, nuclear containment, deworming efficacy, direct cash transfers, geoengineering, or any other domain. Suppose we convince each other, and donors, to spend billions on a particular cause area based on expert consensus about what will work to reduce suffering or risk. And then suppose that some of the key research that we used to recommend that cause area turns out to have been based on false data fabricated by a powerful sociopathic scientist and their lab—but the data were published in major journals, peer-reviewed by leading scientists, cited by hundreds of other experts, informed public policy, etc.
How much culpability would EA have in that situation? Should we have done our own peer review of the key evidence in the cause area? Should we have asked the key science labs for their original data? Should we have hired subject matter experts to do some forensic analysis of the peer-reviewed papers? That seems impractical. At a certain point, we just have to trust the peer-review process—whether in science, or in finance, politics, and journalism—with the grim understanding that we will sometimes be fooled and betrayed.
The major disanalogy here would be if the key sociopathic scientist who faked the data was personally known to the leaders of a movement for many years, and was directly involved in the community. But even there, I don’t think we should be too self-castigating. Over the years, I have known, more or less well, several behavioral scientists who turned out to be very bad actors who faked data, but who were widely trusted in their fields, who didn’t raise any big red flags, and who left all of their colleagues scratching their heads afterwards, asking ‘How on Earth did I miss the fact that this was a really shady researcher?’ The answer usually turns out to be that the disgraced researcher put the time other researchers would have spent collecting real data into covering their tracks and duping their colleagues, and they were simply very good at being deceptive and manipulative.
Science relies on trust, so it’s relatively vulnerable to intentionally bad, deceptive actors. EA also relies on trust in subject matter experts, so we’re also relatively vulnerable to bad actors. But unless we want to replicate every due diligence process, every vetting process, every political ‘opposition research’ process, every peer review process, and every investigative journalism process, we will remain vulnerable to the occasional error—and sometimes those errors will be very big and very harmful.
That might just be the price of admission when trying to do evidence-based good with donor money.
Of course, there are lots of ways we could do better in the future, especially in doing somewhat deeper dives into key donors, the integrity of key organizations and leaders, and the epistemics around key cause areas. I’m just cautioning against over-correcting in the direction of distrust and paranoia.
Epistemic status of this comment: I’m slightly steel-manning a potential counter-argument against Jeff’s original post, and I think I’m mostly right, but I could easily be persuaded otherwise.
What’s the evidence that people actually went through the virtuous process described here of thinking about whether to trust SBF and checking all these independent sources? (The science analogy is an interesting one, though, I agree.)
I don’t know. Others know much more than I do.
I wasn’t claiming there was a systematic, formalized process of checking all these independent sources in an exhaustive, detailed, skeptical way.
I was only suggesting that from the viewpoint of most EAs, ‘there was a tacit, probably largely unconscious trust that if major VCs, investors, politicians, and journalists trusted SBF, then we can probably trust him too’....
“At a certain point, we just have to trust the peer-review process”
Coming here late; I found it an interesting comment overall, but I thought I’d say something, as an academic, about interpreting the peer-reviewed literature, since people often misunderstand what peer review does. It’s a pretty weak filter, and you don’t just trust whatever comes out! Instead, look for consistent results produced by at least a few independent groups, without contradictory research. (Researchers will rarely publish replications of results, but if a set of results doesn’t corroborate a single plausible theoretical picture, then something is iffy. Note that whole communities of researchers can go down the wrong path, though; it’s just less likely than for an individual study.) Also, talk to people in the field about it! So there are fairly low-cost ways to make better judgements than believing what one researcher tells you. The scientific fraud cases that I know of involved results from just one researcher or group, and sensible people would have had a fair degree of scepticism without further corroboration. Just in case anyone reading this is ever in the position of deciding whether to allocate significant funding based on published research.
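To put rough numbers on that heuristic, here is a minimal Bayesian sketch (my own illustration, not anything from the comments above; the prior and the per-study likelihood ratio are arbitrary assumptions) of how much more credible a claim becomes with each genuinely independent positive study:

```python
# Toy Bayesian sketch: how belief in a claim should grow with k
# INDEPENDENT positive studies. All numbers are illustrative
# assumptions, not estimates drawn from the discussion above.

def posterior(prior, likelihood_ratio, k):
    """Posterior P(claim is real) after k independent positive
    studies, each carrying the same likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio ** k
    return posterior_odds / (1.0 + posterior_odds)

PRIOR = 0.5  # assumed prior probability that the effect is real
LR = 4.0     # assumed strength of one positive study:
             # P(positive | real) / P(positive | not real)

for k in (1, 2, 3):
    print(f"{k} independent studies -> P(real) = {posterior(PRIOR, LR, k):.2f}")
# 1 -> 0.80, 2 -> 0.94, 3 -> 0.98
```

The multiplication only holds if the studies really are independent: three papers from one fraudulent lab are, at best, one study’s worth of evidence, which is exactly why corroboration across separate groups is the thing to look for.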
“Science relies on trust, so it’s relatively vulnerable to intentionally bad, deceptive actors”
I don’t think science relies particularly heavily on trust, since research groups can corroborate or cast doubt on others’ research. “Relatively” compared to what? I don’t see why it would be more vulnerable to bad actors than most other things humans do.