I’m saying that if your clearance process is unable to tell whether or not two firms are arms-length, when they have a great deal to potentially gain from illegally interoperating, without the further piece of info about whether the CEOs are dating, you’re screwed. This is like trying to fix the liar loan problem during the mortgage meltdown by asking whether the loan issuer is dating the loan recipient. The problem is not that, besides the profit motive, two people might also be fond of each other and that’s terrible; the problem is if your screening process isn’t enough to counterbalance the profit motive. A screening process that can make sure two firms aren’t colluding to illegally profit should not then break down if the CEOs go on a date.
Or to put it more compactly and specifically: Given the potential energy between Alameda and FTX as firms, not to mention their other visible degrees of prior entanglement, you’d have to be nuts to rely on an assurance process that made a big deal about whether or not the CEOs were dating.
Maybe even more compactly: Any time two firms could gain a lot of financial free energy by colluding, just pretend you’ve been told their CEOs are dating, okay, and now ask what assurances or tests you want to run past that point.
...I think there must be some basic element of my security mindset that isn’t being shared with voters here (if they’re not just a voting ring, a possibility that somebody else raised in comments), and I’m at somewhat of a loss for what it could be exactly. We’re definitely not operating in the same frame here; the things I’m saying here sure feel like obvious good practices from inside my frame.
Taking prurient interest in other people’s sex lives, trying to regulate them as you deem moral, is a classic easy-mode-to-fall-into of pontificating within your tribe, but it seems like an absurd pillar on which to rest the verification that two finance companies are not intermingling their interests. Being like “Oh gosh SBF and Caroline were dating, how improper” seems like finding this one distracting thing to jump on… which would super not be a key element of any correctly designed corporate assurance process about anything? You’d go audit their books and ask for proofs about crypto cold storage, not demand that somebody’s romance be a dark secret that nobody got to hear about?
We sure are working in different frames here, and I don’t understand the voters’ (if they’re not just a voting ring).
I work (indirectly) in financial risk management. Paying special attention to special categories of risk—like romantic relationships—is fundamental to risk management. It is not that institutions face a binary choice between ‘manage risk’ and ‘don’t manage risk’, where people in romantic relationships are ‘managed’ and everyone else is ‘not’. Risk management is a spectrum, and there are good reasons to think that people with both romantic and financial entanglements are higher risk than those with financial entanglements only. For example:
Romantic relationships inspire particularly strong feelings, of a kind that does not usually characterise financial relationships. People in romantic relationships will take risks on each other’s behalf that people in financial relationships will not. We should be equally worried about familial relationships, which also inspire very strong feelings.
Romantic relationships inspire different feelings from financial relationships. Whereas with a business partner you might be tempted to act badly to make money, with a romantic partner you might be tempted to act badly for many other reasons—for example, to make your partner feel good, or to spare your romantic partner embarrassment.
Romantic relationships imply a different level of access than financial relationships. People in romantic relationships have levers to make their partner do things they might not otherwise do—for example, abusive dynamics, threatening to end the relationship unless X is done, or watching the partner enter their password into their computer to gain access to systems.
So if I were writing these rules, I might very well rephrase it as “do you have a very strong friendship with this other person” and “do you occasionally spend time at each other’s houses” to avoid both allonormativity and the temptation to prurient sniffing; and I’d work hard to keep any disclosed information of that form private—as in “don’t store it on Internet-connected devices, or preferably on computers at all” private—to minimize incentives against honest disclosure. And even then, I might expect that among the consequences of the regulation would be that CEOs in relationships would occasionally just lie to me about it, now that such incentives had been established against disclosure.
When you optimize against visible correlates of possible malfeasance, you optimize first and above all against visibility; and maybe secondarily against possible malfeasance if the visibility is very reliable and the correlations are strong enough to take causal leaning on them.
But, sure, if you know all that and you understand the consequences, then Sequoia could’ve asked if SBF and Caroline were in a relationship, understanding that a No answer might be a lie given the incentives they’d established, and that a Yes answer indicated unusual honesty.
I don’t really understand why you are describing this as a hypothetical (“If I were writing these rules...”). You are the founder and head of a highly visible EA organisation receiving charitable money from donors, and presumably have some set of policies in place to prevent staff at that organisation from systematically defrauding those donors behind your back. You have written those policies (or delegated someone else to write them for you). You are sufficiently visible in the EA space that your views on financial probity materially affect the state of EA discourse. What you are telling me is that the policies which you wrote don’t include a ‘no undeclared sexual relationships with people who are supposed to act as a check on you defrauding MIRI’ rule, based on your view that it is excessively paternalistic to inquire about people’s sex lives when assessing risk, and that your view is that this is the position that should be adopted in EA spaces generally.
This is—to put it mildly—not the view of the vast majority of organisations which handle money at any significant scale. No sane risk management approach would equate a romantic relationship with ‘a very strong friendship’. Romantic love is qualitatively different from fraternal love. No sane risk management approach would equate “occasionally spend[ing] time at each other’s houses” with living together. My wife is often alone in the house for extended periods of time, but I usually hang out with friends when they come over (to give just one difference from an enormous list of possibilities).
EA leadership—which includes you—has clearly made a catastrophic error of financial risk management with this situation. The extent to which they are ‘responsible’ is a fair debate, but it is unquestionable they failed to protect people who trusted them to steer EA into the future—hundreds of people have been made unemployed overnight and EA is potentially facing its first existential risk as a result. I am genuinely baffled how you can look at this situation and conclude that the approach you are describing—a very intelligent non-expert such as yourself creates their own standards of financial conduct at significant odds with the mainstream accepted approach—could still possibly be appropriate in the face of the magnitude of the error this thinking has led to.
I also think it is extremely unedifying that you make the case elsewhere that the disagreement votes you are receiving for your position are from vote manipulation. A more plausible explanation is that people have independently reached the conclusion that you are wrong that romantic love presents no special financial risks.
Somebody else in that thread was preemptively yelling “vote manipulation!” and “voting ring!”, and as much as it sounds recursively strange, this plus some voting patterns (early upvotes, then suddenly huge amounts of downvoting) did lead me to suspect that the poster in question was running a bunch of fake accounts and voting with them.
We would in fact be concerned if it turned out that two people who were supposed to have independent eyes on the books were in a relationship and didn’t tell us! And we’d try to predictably conduct ourselves in such a mature, adult, understanding, and non-pearl-clutching fashion that it would be completely safe for those two people to tell the MIRI Board, “Hey, we’ve fallen in love, you need to take auditing responsibility off one of us and move it to somebody else” and have us respond to that in a completely routine, nonthreatening, and unexcited way that created no financial or reputational penalties for us being told about it.
That’s what I think is the healthy, beneficial, and actually useful for minimizing actual fraud in real life culture, of which I do think present EA has some, and which I think is being threatened by performative indignation.
I’m struggling to follow your argument here. What you describe as the situation at MIRI is basically the standard risk management approach—if two people create a risk to MIRI’s financial security processes by falling in love, you make sure that neither signs off on risk taken by the other.
But in this thread you are responding with strong disagreement to a comment which says “if this relationship [between SBF and Caroline] were known to be hidden from investors and other stakeholders, should this not have raised red flags?”. You said “who EAs are fucking is none of EA’s business”, amongst other comments of a similar tone.
I don’t understand what exactly you disagree with if you agree SBF and Caroline should have disclosed their relationship so that proper steps could be taken to de-risk their interactions (as would happen at MIRI). It seems that you do agree it matters who EAs are fucking in contexts like this? And therefore that it is relevant to know whether Will MacAskill knew about the undisclosed relationship?
You could plausibly claim it gets disclosed to Sequoia Capital, if SC has shown themselves worthy of being trusted with information like that and responding to it in a sensible fashion, e.g. with more thorough audits. Disclosing to FTX Future Fund seems like a much weirder case, unless FTX Future Fund is auditing FTX’s books well enough that they’d have any hope of detecting fraud—otherwise, what is FTXFF supposed to do with that information?
EA generally thinking that it has a right to know who its celebrity donors are fucking strikes me as incredibly unhealthy.
I think we might be straying from the main point a bit; nobody is proposing a general right to peer into EA sex lives, and I agree that would be unhealthy.
There are some relatively straightforward financial risk management principles which mainstream orgs have been successfully using for decades. You seem to believe one of the pillars of these principles—surfacing risk due to romantic entanglements between parties—shouldn’t apply to EA, and that some sort of ‘commonsense’ approach should prevail instead (inverted commas because I think the standard way is basically common sense too).
But I don’t understand where your confidence that you’re right here is coming from—EA leadership has just materially failed to protect EA membership from bad actor risk stemming at least in part from a hidden conflict of interest due to a romantic entanglement. EA leadership has been given an opportunity to run risk management their way, and the result is that EA is now associated with the biggest crypto fraud in history. Surely the Bayesian update here is that there are strong reasons to believe mainstream finance had it approximately right?
Rereading the above, I think I might just be unproductively repeating myself at this point, so I’ll duck out of the discussion. I appreciated the respectful back-and-forth, especially considering parts of what I was saying were (unavoidably) pretty close to personal attacks on you and the EA leadership more broadly. Hope you had a pleasant evening too!
My (possibly wrong) understanding of what Eliezer is saying:
FTX ought to have responded internally to the conflict of interest, but they had no obligation to disclose it externally (to Future Fund staff or wider EA community).
The failure in FTX was that they did not implement the right internal controls—not that the relationship was “hidden from investors and other stakeholders.”
If EA leadership and FTX investors made a mistake, it was failing to ensure that FTX had implemented the right internal controls—not failing to know about the relationship.
I couldn’t quite bottom out exactly what EY was saying, but I’m pretty sure it wasn’t that. On your interpretation, EY said, “who EAs are fucking is none of [wider] EA’s business [except people who are directly affected by the COI]”. But he goes on to clarify “There are very limited exceptions to this rule like ‘maybe don’t fuck your direct report’ ”. If that’s an exception to the rule of EAs fucking being only of interest to directly affected parties, then it means EY thinks an EA having sex with a subordinate should be broadcast to the entire community. That’s a very strict standard (although I guess not crazy—just odd that EY was presenting it as a more relaxed / less prurient standard than conventional financial risk management).
It also doesn’t address my core objection, which is that EA leadership failed very badly to implement proper financial risk management processes. Generally my point was that EA leadership should be epistemically humble now and just implement the risk management processes that work for banks, rather than tinkering around and introducing their own version of these systems. Regardless of what EY meant, unless he meant ‘We should hire in PwC to implement the same financial controls as every Fortune company’, he is making exactly the same mistake EA leadership made with FTX—assuming that they could create better risk management from first principles than the mainstream system could from actual experience.
By the way, I disagree with the objective position here too. Every FTX investor needed to know about the COI and the management strategy FTX adopted in order to assess their risk exposure. This would be the standard at a conventional company (if a conventional company knew about such a blatant COI involving their CEO and didn’t tell investors, then their risk officers would potentially be liable for the fraud too, iirc).
What’s disappointing is not that Eliezer won’t make even a minor acknowledgement of the relevance of the models or experiences of others, nor that he is probably outright wrong on the substantive issues, but that Eliezer struggles to communicate and hold a thread in this conversation.
His counterpart is a literal domain expert and maybe a very valuable talent to EA. (As a statement considering the totality of the votes and writing) this person is being badgered under what, to any outsider, must look like scary or unclear norms and power structures of the EA community on its own forum, while Eliezer’s de facto community keeps him afloat.
Eliezer’s behavior is unacceptable for a funded, junior community builder, much less a senior leader.
Imagine a newcomer witnessing this, much less experiencing this.
I also think it is extremely unedifying that you make the case elsewhere that the disagreement votes you are receiving for your position are from vote manipulation. A more plausible explanation is that people have independently reached the conclusion that you are wrong that romantic love presents no special financial risks.
I agree that it does not seem likely that there was vote manipulation here (I cast strong-disagreement-votes on multiple comments by Eliezer on this page). But concerns about potential voting manipulation on this forum are reasonable by default, considering that it’s an open platform in which it’s technically possible for someone to vote from multiple anonymous accounts.
Fair enough, as long as that’s the standard that is applied to all commenters and not just EA leadership. I appreciate that EY agreed there was likely no manipulation after I pointed this out.
It is extremely inaccurate to characterise the relationship between Bankman-Fried and Ellison as merely “dating”, or to claim that people are merely saying this was “improper”.
Bankman-Fried and Ellison were living together in a shared apartment. One article makes the unverified suggestion that Ellison may have been in some form of relationship with other residents of the apartment—residents who included the CTO and Director of Engineering of FTX.
If true, this is a genuinely alarming piece of information that would very obviously have caused anybody to question whether they should have placed their funds in the care of this specific group of people. However, this information was not made public, and that lack of transparency is where the problem lies.
Voting ring? That sounds preposterous to me
This comment is, at time of writing, sitting at −7 karma from 5 votes. Can someone who downvoted or strong downvoted this clarify why they did so?