I have very little inside perspective on SBF, but my general take on FTX is that there was not enough shady info known outside of the org to stop the fraud. (What's the mechanism? Unless you knew about the fraud itself, I don't see how just saying what you knew could have caused him to change his ways or lose control of his company.) It's possible EA/rationality might have relied less on SBF if more were known, but you have to weigh that against the harm of a norm of sharing morally-loaded rumors.
The risk of a witch-hunt environment seems worse to me than the value of giving people tidbits of info that a perfect Bayesian could update on in the correct proportion, but which will have negative higher-order effects on any real community that hears them.
Habryka seems to think there was significant underreaction to shady info: https://forum.effectivealtruism.org/posts/b83Zkz4amoaQC5Hpd/time-article-discussion-effective-altruist-leaders-were?commentId=nGxkHbrikGeTxrLjZ
I think you have to balance the cost of false negatives against the cost of false positives.