the choice is like "should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?"
I'm not the person quoted, but I agree with this part, and some of the reasons why I expect the results of an investigation like this to be boring aren't based on any private or confidential information, so they are perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
(This seems almost definitionally true: if something were clear evidence of fraud then it would just be described as "clear evidence of fraud"; describing something as a "rumor" almost definitionally implies a substantial probability that the rumor is false, or at least unclear or hard to update on.[1])
E.g., if I imagine a bank whose primary fraud detection mechanism was "hope the executives hear rumors of malfeasance," I would not feel very satisfied with their risk management. If fraud did occur, I wouldn't expect their primary process improvement to be "see if the executives could have updated from rumors better." I am therefore somewhat confused by how much interest there seems to be in investigating how well the rumor mill worked for FTX.[2]
To be clear: I assume that the rumor mill could function more efficiently, and that there's probably someone who heard "SBF is often overconfident" or whatever and could have updated from that information more accurately than they did. (If you're interested in my experience, you can read my comments here.) I'm just very skeptical that a new and improved rumor mill is substantial protection against fraud, and I don't understand what an investigation could show me that would change my mind.[3] Moreover, even if I somehow became convinced that rumors could have been effective in the specific case of FTX, I would still likely be skeptical of their efficacy in the future.
Relatedly, I've heard people suggest that 80k shouldn't have put SBF on their website given some rumors that were floating around. My take is that the base rate of criminality among large donors is high, and having a rumor mill does not do very much to lower that rate, so I already expect the risk to be relatively high for any high-net-worth person 80k puts on the front page in the future; I don't need an investigation to tell me that.
To make some positive suggestions about things I could imagine learning from/finding useful:
I have played around with the idea of some voluntary pledge for earning-to-give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.[4]
At least, it seems like this should be our first port of call. Maybe we can't actually implement industry best practices around risk management, but it feels like we should at least try before giving up and doing the rumor mill thing.
My understanding is that a lot of work has gone into regulations that make publicly traded companies less likely to commit fraud, and these regulations are somewhat effective, but they are so onerous that many companies are willing to stay private and forgo billions of dollars in investment just to avoid dealing with them. I suspect that EA might find itself in a similarly unfortunate situation, where reducing risks from "prominent individuals" requires the individuals in question to do something so onerous that no one is willing to become "prominent." I would be excited about research into a) whether this is in fact the case, and b) what to do about it, if so.
Some people probably disagree with my claim that rumor mills are ineffective. If so, research into this would be useful. E.g. it's been on my backlog for a while to write up a summary of Why They Do It, or of a fraud management textbook.
Why They Do It is perhaps particularly useful, given that one of its key claims is that, unlike with blue-collar crime, character traits don't correlate well with propensity to commit white-collar crime, and I think this may be a crux between me and people who disagree with me.
All that being said, I think I'm weakly in favor of someone more famous than me[5] doing some sort of write-up about what rumors they heard, largely because I don't expect the above to convince many people, and I think such a write-up will mostly result in people realizing that the rumors were not very motivating.
Thanks to Chana Messinger for this point
One possible reason for this is that people are aiming for goals other than detecting fraud, e.g. they are hoping that rumors could also be used to identify other types of misconduct. I have opinions about this, but this comment is already too long so I'm not going to address it here.
E.g. I appreciate Nate writing this, but if in the future I learned that a certain person had spoken to Nate, I'm not going to update my beliefs about the likelihood of them committing financial misconduct very much (and I believe that Nate would agree with this assessment)
Part of why I haven't prioritized this is that there aren't a lot of earning-to-give companies anymore, but I think it's still potentially worth someone spending time on this
I have done my own version of this, but my sense is that people (very reasonably) would prefer to hear from someone like Will
I feel like "people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed" is being classed as "rumour" here, which, whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word "rumour" conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if anyone in leadership argued "oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside." That's a signal someone is a bad leader, in my view, which is useful knowledge going forward. (I'm not saying it is instant proof they should never hold leadership positions ever again: I think quite a lot of people might have said something like that in similar circumstances. But it is a bad sign.)
I feel like "people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed" is being classed as "rumour" here, which, whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word "rumour" conjures.
I agree with this.
[...] I feel like we still want to know if anyone in leadership argued "oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside." That's a signal someone is a bad leader, in my view, which is useful knowledge going forward.
I don't really agree with this. Everyone has some probability of turning out to be dodgy; it matters exactly how strong the available evidence was. "This EA leader writes people off immediately when they have even a tiny probability of being untrustworthy" would be a negative update about the person's decision-making too!
I took that second quote to mean "even if Sam is dodgy, it's still good to publicly back him."
I meant something in between "is" and "has a non-zero chance of being": assigning significant probability to it (obviously I didn't have an exact number in mind), and not just for base-rate reasons about believing all rich people to be dodgy.
I'm not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren't based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
Huh, the same reason you cite for why you are not interested in doing an investigation is one of the key reasons why I want an investigation.
It seems to me that current EA leadership is basically planning to continue an "our primary defense against bad actors is the rumor mill" strategy. Having an analysis of how that strategy did not work, and in some sense can't work, for things like this seems like one of the things with the most potential to give rise to something better here.
Interesting! I'm glad I wrote this then.
Do you think "[doing an investigation is] one of the things that would have the most potential to give rise to something better here" because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives not to be adopted, even if found?
My current sense is that there is no motivation to find an alternative, because people mistakenly think the current approach works well enough and so there is no need to try to find something better (and also, in the absence of an investigation and clear arguments about why the rumor thing doesn't work, people probably think they can't really be blamed if the strategy fails again).
Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).
Is that correct? If so, I'm interested in understanding why: usually if you want to do a thing, the best approach is to just do that thing.
It seems to me that a case study of how exactly FTX occurred, and where things failed, would be among the best things to use to figure out what to do instead.
Currently, the majority of people who have an interest in this are blocked by not really knowing what worked and didn't work in the FTX case, and so will probably have trouble arguing compellingly for any alternative; they also lack some of the most crucial data. My guess is you might have the relevant information from informal conversations, but most don't.
I do think also just directly looking for an alternative seems good. I am not saying that doing an FTX investigation is literally the very best thing to do in the world, it just seems better than what I see EA leadership spending their time on instead. If you had the choice between "figure out a mechanism for detecting and propagating information about future adversarial behavior" and "do an FTX investigation," I would feel pretty great about both, and honestly don't really know which one I would prefer. As far as I can tell, neither of these things is seeing much effort invested in it.
Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by "not really knowing what worked and didn't work in the FTX case": even if, e.g., there was some type of rumor which was effective in the FTX case, I still think we shouldn't rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]
I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, getting up to speed with it is time-consuming, and ~all of the practices are based on assumptions like "the risk manager has some amount of formal authority" which aren't true in EA.
(And to be clear: I think these are very big blockers! They just aren't resolved by doing an investigation.)
Or maybe more specifically: I would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud but were just implemented poorly last time, I would love to hear that!
Again, under the assumption that your goal is fraud detection. Investigations may be more or less useful for other goals.
It seems like a goal of ~"fraud detection," not further specified, may be near the nadir of utility for an investigation.
If you go significantly narrower, then how EA managed (or didn't manage) the SBF fraud seems rather important to figuring out how to deal with the risk of similar fraudulent schemes in the future.[1]
If you go significantly broader (cf. Oli's reference to "detecting and propagating information about future adversarial behavior"), the blockers you identify seem significantly less relevant, which may increase the expected value of an investigation.
My tentative guess is that it would be best to analyze potential courses of action in terms of their effects on the "EA immune system" at multiple points of specificity, not just close relations of a specific known pathogen (e.g., SBF-like schemes), a class of pathogens (e.g., "fraud"), or pathogens writ large (e.g., "future adversarial behavior").
Given past EA involvement with crypto, and the base rate of not-too-subtle fraud in crypto, the risk of similar fraudulent schemes seems more than theoretical to me.
I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.
I think that would be worth exploring. I suspect you are correct that full Sarbanes-Oxley treatment would be onerous.
On the other hand, I don't see how a reasonably competent forensic accountant or auditor could have spent more than a few days at FTX (or at Madoff) without having a stroke. Seeing the commingled bank accounts would have set alarm bells ringing in my head, at least. (One of the core rules of legal ethics is that you do not commingle your money with that of your clients, because experience teaches that all sorts of horrible things can and often do happen.)
I certainly don't mean to imply that fraud against sophisticated investors and lenders is okay, but there is something particularly bad about straight-up conversion of client funds like at FTX/Madoff. At least where hedge funds and big banks are concerned, they have the tools and access to protect themselves if they so wish. Moreover, the link between the fraud and the receipt of funds is particularly strong in those cases: Enron was awash in fraud, but it wouldn't be fair to say that a charity that received a grant from Enron at certain points in time was approximately and unknowingly in possession of stolen funds.
Thankfully, procedures meant to ferret out sophisticated Enron-style fraud shouldn't be necessary to rule out most straight-up conversion schemes. Because of the risk that someone will rat the fraudsters out, my understanding is that the conspiracy usually is kept pretty small in these sorts of frauds. That imposes a real limit on how well the scheme will withstand even moderate levels of probing with auditor-level access.
If you want a reference class of similar frauds, here is the prosecution's list of cases (after the Booker decision in 2005) with losses > $100MM and fraud type of Ponzi scheme, misappropriation, or embezzlement:
For example, one might be really skeptical if auditing red flags associated with prior frauds are present. Madoff famously had his audits done by a two-person firm that reported not conducting audits. FTX was better, but apparently still used "questionable" third-tier firms that "do audit a few public companies but none of the size or complexity of FTX." Neither "the Armanino nor the Prager Metis audit reports for 2021 provides an opinion on the FTX US or FTX Trading internal controls over accounting and financial reporting," and the audit reports tell the reader as much (same source). The article, written by an accounting lecturer at Wharton, goes on to describe other weirdness in the audit reports. Of course, that's not foolproof: Enron had one of the then-Big Five accounting firms, for instance.
Catching all fraud is not realistic . . . for anyone, much less a charitable social movement. But some basic checks to make fairly sure that the major or whole basis for the company's / the individual's wealth is not a fraudulent house of cards seem potentially attainable at a reasonable burden level.
I guess the question I have is: if the fraud wasn't noticed by SBF's investors, who had much better access to information and stronger incentives to find fraud, why would anyone expect the recipients of his charitable donations to notice it? If it was a failure of the EA movement not to know that FTX was fraudulent, isn't it many times more of a failure that the fraud went unnoticed by the major sophisticated investment firms that were large FTX shareholders?
I think investing in FTX was genuinely a good idea, if you were a profit maximizer, even if you strongly suspected the fraud. As Jason says, as an investor, losing money due to fraud isn't any worse than losing money because a company fails to otherwise be profitable, so even assigning 20-30% probability to fraud for a high-risk investment like FTX, where you are expecting >2x returns in a small number of years, will not make a huge difference to your bottom line.
In many ways you should expect being the kind of person who is willing to commit fraud to be positively associated with returns, because doing illegal and fraudulent things means that the people who run the organization take on massive risk where you are not exposed to the downside, but you are exposed to the upside. It's not worth it to literally invest in fraud, but it is worth it to invest in the kind of company where the CEO is willing to go to prison, since you don't really have any risk of going to prison yourself, but you get the upside of the legal risk they take on (think of Uber blatantly violating laws until it established a new market, which probably exposed leadership to substantial legal risk, while investors just got to reap the profits).
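The expected-value point above can be made concrete with a back-of-the-envelope sketch. All the numbers below are illustrative assumptions, not figures from any actual FTX deal; the only inputs taken from the discussion are "20-30% probability of fraud" and ">2x returns."

```python
def expected_multiple(p_fraud: float, win_multiple: float,
                      p_win_given_honest: float = 0.5) -> float:
    """Expected return multiple on a speculative investment.

    Simplifying assumptions (all hypothetical): fraud means a total
    loss; an honest company either pays off `win_multiple` (with
    probability `p_win_given_honest`) or fails and returns nothing.
    """
    p_honest = 1.0 - p_fraud
    return p_honest * p_win_given_honest * win_multiple

# A hypothetical 3x payoff with a 50% honest-success rate:
ev_clean = expected_multiple(p_fraud=0.0, win_multiple=3.0)   # 1.5x
ev_risky = expected_multiple(p_fraud=0.25, win_multiple=3.0)  # 1.125x
```

Under these toy numbers, even suspecting fraud at 25% only drops the expected multiple from 1.5x to 1.125x, which is still above break-even, consistent with the claim that a profit maximizer might rationally invest anyway.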
I wasn't suggesting we should expect this fraud to have been found in this case with the access that was available to EA sources. (Perhaps the FTXFF folks might have caught the scent if they had been forensic accountants, but they weren't. And I'm not at all confident of that in any event.) I'm suggesting that, in response to this scandal, EA organizations could insist on certain third-party assurances in the future before taking significant amounts of money from certain sources.
Why the big money was willing to fork over nine figures each to FTX without those assurances is unclear to me. But one observation: as far as a hedge fund or lender is concerned, a loss due to fraud is no worse than a loss due to the invested-in firm being outcompeted, making bad business decisions, experiencing a general crypto collapse, getting shut down for regulatory issues, or any number of scenarios that were probably more likely ex ante than a massive conversion scheme. In fact, such a scheme might even be less bad to the extent that the firm thought it might get more money back in a fraud loss than from some ordinary-business failure modes. Given my understanding that these deals often move very quickly, and the presence of higher-probability failure modes, it is understandable that investors and lenders wouldn't have prioritized fraud detection.
In contrast, charitable grantees are much more focused in their concern about fraud; taking money from a solvent, non-fraudulent business that later collapses doesn't raise remotely the same ethical, legal, operational, and reputational concerns. Their potential exposure in that failure mode is likely several times larger than that of the investors/lenders after all non-financial exposures are considered. They are also not on a tight time schedule.
Iâm not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring arenât based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
(This seems almost definitionally true: if something was clear evidence of fraud then it would just be described as âclear evidence of fraudâ; describing something as a ârumorâ seems to almost definitionally imply a substantial probability that the rumor is false or at least unclear or hard to update on.[1])
E.g. If I imagine a bank whose primary fraud detection mechanism was âhope the executives hear rumors of malfeasance,â I would not feel very satisfied with their risk management. If fraud did occur, I wouldnât expect that their primary process improvement to be âsee if the executives could have updated from rumors better.â I am therefore somewhat confused by how much interest there seems to be in investigating how well the rumor mill worked for FTX.[2]
To be clear: I assume that the rumor mill could function more efficiently, and that thereâs probably someone who heard âSBF is often overconfidentâ or whatever and could have updated from that information more accurately than they did. (If youâre interested in my experience, you can read my comments here.) Iâm just very skeptical that a new and improved rumor mill is substantial protection against fraud, and donât understand what an investigation could show me that would change my mind.[3] Moreover, even if I somehow became convinced that rumors could have been effective in the specific case of FTX, I will still likely be skeptical of their efficacy in the future.
Relatedly, Iâve heard people suggest that 80k shouldnât have put SBF on their website given some rumors that were floating around. My take is that the base rate of criminality among large donors is high, having a rumor mill does not do very much to lower that rate, and so I expect to believe that the risk will be relatively high for high net worth people 80k puts on the front page in the future, and I donât need an investigation to tell me that.
To make some positive suggestions about things I could imagine learning from/âfinding useful:
I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.[4]
At least, it seems like this should be our first port of call. Maybe we canât actually implement industry best practices around risk management, but it feels like we should at least try before giving up and doing the rumor mill thing.
My understanding is that a bunch of work has gone into making regulations so that publicly traded companies are less likely to commit fraud, and these regulations are somewhat effective, but they are so onerous that many companies are willing to stay private and forgo billions of dollars in investment just to not have to deal with them. I suspect that EA might find itself in a similarly unfortunate situation where reducing risks from âprominent individualsâ requires the individuals in question to do something so onerous that no one is willing to become âprominent.â I would be excited about research into a) whether this is in fact the case, and b) what to do about it, if so.
Some people probably disagree with my claim that rumor mills are ineffective. If so, research into this would be useful. E.g. itâs been on my backlog for a while to write up a summary of Why They Do It, or a fraud management textbook.
Why They Do It is perhaps particularly useful, given that one of its key claims is that, unlike with blue-collar crime, character traits donât correlate well with propensity to commit white-collar crimes crimes, and I think this may be a crux between me and people who disagree with me.
All that being said, I think Iâm weakly in favor of someone more famous than me[5] doing some sort of write up about what rumors they heard, largely because I donât expect the above to convince many people, and I think such a write up will mostly result in people realizing that the rumors were not very motivating.
Thanks to Chana Messinger for this point
One possible reason for this is that people are aiming for goals other than detecting fraud, e.g. they are hoping that rumors could also be used to identify other types of misconduct. I have opinions about this, but this comment is already too long so Iâm not going to address it here.
e.g. I appreciate Nate writing this, but if in the future I learned that a certain person has spoken to Nate, Iâm not going to update my beliefs about the likelihood of them committing financial misconduct very much (and I believe that Nate would agree with this assessment)
Part of why I havenât prioritized this is that there arenât a lot of earning to give companies anymore, but I think itâs still potentially worth someone spending time on this
I have done my own version of this, but my sense is that people (very reasonably) would prefer to hear from someone like Will
I feel like âpeople who worked with Sam told people about specific instances of quite serious dishonesty they had personally observedâ is being classed as ârumourâ here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word ârumourâ conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if any one in leadership argued âoh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upsideâ. Thatâs a signal someone is a bad leader in my view, which is useful knowledge going forward. (Iâm not saying it is instant proof they should never hold leadership positions ever again: I think quite a lot of people might have said something like that in similar circumstances. But it is a bad sign.)
I agree with this.
I donât really agree with this. Everyone has some probability of turning out to be dodgy; it matters exactly how strong the available evidence was. âThis EA leader writes people off immediately when they have even a tiny probability of being untrustworthyâ would be a negative update about the personâs decision-making too!
I took that second quote to mean âeven if Sam is dodgy itâs still good to publicly back himâ
I meant something in between âisâ and âhas a non-zero chance of beingâ, like assigning significant probability to it (obviously I didnât have an exact number in mind), and not just for base rate reasons about believing all rich people to be dodgy.
Huh, the same reason you cite for why you are not interested in doing an investigation is one of the key reasons why I want an investigation.
It seems to me that current EA leadership is basically planning to continue a âour primary defense against bad actors is the rumor millâ strategy. Having an analysis of how that strategy did not work, and in some sense canât work for things like this seems like itâs one of the things that would have the most potential to give rise to something better here.
Interesting! Iâm glad I wrote this then.
Do you think â[doing an investigation is] one of the things that would have the most potential to give rise to something better hereâ because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives to not be adopted, even if found?
My current sense is that there is no motivation to find an alternative because people mistakenly think it works fine enough and so there is no need to try to find something better (and also in the absence of an investigation and clear arguments about why the rumor thing doesnât work, people probably think they canât really be blamed if the strategy fails again)
Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).
Is that correct? If so, Iâm interested in understanding why â usually if you want to do a thing, the best approach is to just do that thing.
It seems to me that a case study of how exactly FTX occurred, and where things failed, would be among one of the best things to use to figure out what thing to do instead.
Currently the majority of people who have an interest in this are blocked by not really knowing what worked and didnât work in the FTX case, and so probably will have trouble arguing compellingly for any alternative, and also lack some of the most crucial data. My guess is you might have the relevant information from informal conversations, but most donât.
I do think also just directly looking for an alternative seems good. I am not saying that doing an FTX investigation is literally the very best thing to do in the world, it just seems better than what I see EA leadership spending their time on instead. If you had the choice between âfigure out a mechanism detecting and propagating information about future adversarial behaviorâ and âdo an FTX investigationâ, I would feel pretty great about both, and honestly donât really know which one I would prefer. As far as I can tell neither of these things is seeing much effort invested into it.
Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by ânot really knowing what worked and didnât work in the FTX caseâ â even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldnât rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]
I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, and getting up to speed with it is time-consuming, and also ~all of the practices are based under assumptions like âthe risk manager has some amount of formal authorityâ which arenât true in EA.
(And to be clear: I think these are very big blockers! They just arenât resolved by doing an investigation.)
Or maybe more specifically: would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud but were just implemented poorly last time, I would love to hear that!
Again, under the assumption that your goal is fraud detection. Investigations may be more or less useful for other goals.
It seems like a goal of ~âfraud detectionâ not further specified may be near the nadir of utility for an investigation.
If you go significantly narrower, then how EA managed (or didnât manage) SBF fraud seems rather important to figuring out how to deal with the risk of similar fraudulent schemes in the future.[1]
If you go significantly broader (cf. Oliâs reference to âdetecting and propagating information about future adversarial behaviorâ), the blockers you identify seem significantly less relevant, which may increase the expected value of an investigation.
My tentative guess is that it would be best to analyze potential courses of action in terms of their effects on the âEA immune systemâ at multiple points of specificity, not just close relations of a specific known pathogen (e.g., SBF-like schemes), a class of pathogens (e.g., âfraudâ), or pathogens writ large (e.g., âfuture adversarial behaviorâ).
Given past EA involvement with crypto, and the base rate of not-too-subtle fraud in crypto, the risk of similar fraudulent schemes seems more than theoretical to me.
I think that would be worth exploring. I suspect you are correct that full Sarbanes-Oxley treatment would be onerous.
On the other hand, I donât see how a reasonably competent forensic accountant or auditor could have spent more than a few days at FTX (or at Madoff) without having a stroke. Seeing the commingled bank accounts would have sent alarm bells racing through my head, at least. (One of the core rules of legal ethics is that you do not commingle your money with that of your clients because experience teaches all sorts of horrible things can and often do happen.)
I certainly don't mean to imply that fraud against sophisticated investors and lenders is okay, but there is something particularly bad about the straight-up conversion of client funds at FTX/Madoff. At least where hedge funds and big banks are concerned, they have the tools and access to protect themselves if they so wish. Moreover, the link between the fraud and the receipt of funds is particularly strong in those cases: Enron was awash in fraud, but it wouldn't be fair to say that a charity that received a grant from Enron at certain points in time was approximately and unknowingly in possession of stolen funds.
Thankfully, procedures meant to ferret out sophisticated Enron-style fraud shouldn't be necessary to rule out most straight-up conversion schemes. Because of the risk that someone will rat the fraudsters out, my understanding is that the conspiracy usually is kept pretty small in these sorts of frauds. That imposes a real limit on how well the scheme will withstand even moderate levels of probing with auditor-level access.
If you want a reference class of similar frauds, here is the prosecution's list of cases (after the Booker decision in 2005) with losses > $100MM and fraud type of Ponzi scheme, misappropriation, or embezzlement:
For example, one might be really skeptical if auditing red flags associated with prior frauds are present. Madoff famously had his audits done by a two-person firm that had reported it did not conduct audits. FTX was better, but apparently still used "questionable" third-tier firms that "do audit a few public companies but none of the size or complexity of FTX." Neither "the Armanino nor the Prager Metis audit reports for 2021 provides an opinion on the FTX US or FTX Trading internal controls over accounting and financial reporting," and the audit reports tell the reader as much (same source). The article, written by an accounting lecturer at Wharton, goes on to describe other weirdness in the audit reports. Of course, that's not foolproof: Enron had one of the then-Big Five accounting firms, for instance.
Catching all fraud is not realistic . . . for anyone, much less a charitable social movement. But some basic checks to make fairly sure that the major or whole basis for the company's, or the individual's, wealth is not a fraudulent house of cards seem potentially attainable at a reasonable burden level.
I guess the question I have is, if the fraud wasn't noticed by SBF's investors, who had much better access to information and incentives to find fraud, why would anyone expect the recipients of his charitable donations to notice it? If it was a failure of the EA movement not to know that FTX was fraudulent, isn't it many times more of a failure that the fraud was unnoticed by the major sophisticated investment firms that were large FTX shareholders?
I think investing in FTX was genuinely a good idea, if you were a profit maximizer, even if you strongly suspected the fraud. As Jason says, for an investor, losing money due to fraud isn't any worse than losing money because a company fails to otherwise be profitable, so even assigning a 20%-30% probability to fraud for a high-risk investment like FTX, where you are expecting >2x returns in a short number of years, will not make a huge difference to your bottom line.
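The arithmetic behind this claim can be sketched as a simple expected-value calculation. All numbers below are hypothetical illustrations, not actual FTX figures or anyone's real probability estimates:

```python
# Illustrative sketch: expected return multiple for an investor who assigns
# some probability to the company being a fraud (treated as near-total loss).
# The numbers are made up for illustration only.

def expected_multiple(p_fraud: float, multiple_if_honest: float,
                      recovery_if_fraud: float = 0.0) -> float:
    """Expected return multiple, weighting the fraud and non-fraud outcomes."""
    return p_fraud * recovery_if_fraud + (1 - p_fraud) * multiple_if_honest

# Suppose the investor expects a 3x return if the business is legitimate,
# and assigns a 25% probability to fraud with total loss of the investment.
ev = expected_multiple(p_fraud=0.25, multiple_if_honest=3.0)
print(ev)  # 2.25 -- still clears a 2x hurdle despite substantial fraud risk
```

Under these assumptions, even a 25% fraud probability only drags the expected multiple from 3.0x to 2.25x, which illustrates why a profit-maximizing investor might not prioritize fraud detection.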
In many ways you should expect being the kind of person who is willing to commit fraud to be positively associated with returns, because doing illegal and fraudulent things means that the people who run the organization take on massive risk where you are not exposed to the downside, but you are exposed to the upside. It's not worth it to literally invest in fraud, but it is worth it to invest in the kind of company where the CEO is willing to go to prison, since you don't really have any risk of going to prison, but you get the upside of the legal risk they take on (think of Uber blatantly violating laws until it established a new market, which probably exposed leadership to substantial legal risk, while investors just got to reap the profits).
I wasn't suggesting we should expect this fraud to have been found in this case with the access that was available to EA sources. (Perhaps the FTXFF folks might have caught the scent if they were forensic accountants, but they weren't. And I'm not at all confident of that in any event.) I'm suggesting that, in response to this scandal, EA organizations could insist on certain third-party assurances in the future before taking significant amounts of money from certain sources.
Why the big money was willing to fork over nine figures each to FTX without those assurances is unclear to me. But one observation: as far as a hedge fund or lender is concerned, a loss due to fraud is no worse than a loss due to the invested-in firm being outcompeted, making bad business decisions, experiencing a general crypto collapse, getting shut down for regulatory issues, or any number of scenarios that were probably more likely ex ante than a massive conversion scheme. In fact, such a scheme might even be less bad to the extent that the firm thought it might get more money back from a fraud loss than from some ordinary-business failure modes. Given my understanding that these deals often move very quickly, and the presence of higher-probability failure modes, it is understandable that investors and lenders wouldn't have prioritized fraud detection.
In contrast, charitable grantees are much more focused in their concern about fraud; taking money from a solvent, non-fraudulent business that later collapses doesn't raise remotely the same ethical, legal, operational, and reputational concerns. Their potential exposure in that failure mode is likely several times larger than that of the investors/lenders after all non-financial exposures are considered. They are also not on a tight time schedule.
Re your footnote 4, CE/AIM are starting an earning-to-give incubation program, so that is likely to change pretty soon.
Oh, good point! That does seem to increase the urgency of this. I'd be interested to hear if CE/AIM had any thoughts on the subject.