I expect the best output we could reasonably hope for from any improved detection system would be relatively modest. For example: “several community members have come forward with specific allegations of past serious misbehavior by megadonor X, and so we estimate that there is a 20% chance that X’s company will be revealed as (or end up committing) massive fraud in the next ten years.” If someone has strongly probative evidence of fraud, that person should not be going to an outfit set up by the EA community with that information; they should be going to the appropriate authorities.
Let’s say a detection system had discerned a 20% chance of significant fraud by SBF. That would be at least several times better than the performance of organizations with better access to FTX’s internal accounting and far more resources and motivation. What then? Does the community turn down any FTX-related money, even though there is an 80% chance there is nothing scandalous about FTX? How does that get communicated in a decentralized community where everyone makes their own decisions about whose funding to accept?
And how is that communicated, especially from a PR/optics standpoint, in a way that doesn’t create a serious risk of defamation liability? “We think Megadonor X poses an unacceptable risk of causing grave reputational harm to the community” sure sounds like an opinion based on undisclosed facts, which is a potentially defamatory form of opinion even in the free-speech-friendly USA.
It was widely known that crypto-linked assets are inherently volatile and can disappear in a flash. So while better intel on SBF would have sharpened the estimated odds of a catastrophic funding loss, it was not necessary for understanding that the risk existed.
All that is to say that the better approach might be to focus on what healthcare workers would call universal precautions rather than on attempting to identify the higher-risk individuals. Wear gloves with all patients. Always “hedge on reputation,” as Nathan put it below.