Do you think EA’s self-reflection about this is at all productive, considering most people had even less information than you?
I don’t have terribly organized thoughts about this. (And I am still not paying all that much attention—I have much more patience for picking apart my own reasoning processes looking for ways to improve them, than I have for reading other people’s raw takes :-p)
But here’s some unorganized and half-baked notes:
I appreciated various expressions of emotion. Especially when they came labeled as such.
I think there was also a bunch of other stuff going on in the undertones that I don’t have a good handle on yet, and that I’m not sure about my take on. Stuff like… various people implicitly shopping around proposals about how to readjust various EA-internal political forces, in light of the turmoil? But that’s not a great handle for it, and I’m not terribly articulate about it.
There’s a phenomenon where a gambler places their money on 32, and then the roulette wheel comes up 23, and they say “I’m such a fool; I should have bet 23”.
More useful would be to say “I’m such a fool; I should have noticed that the EV of this gamble is negative.” Now at least you aren’t asking for magic lottery powers.
Even more useful would be to say “I’m such a fool; I had three chances to notice that this bet was bad: when my partner was trying to explain EV to me; when I snuck out of the house and ignored a sense of guilt; and when I suppressed a qualm right before placing the bet. I should have paid attention in at least one of those cases and internalized the arguments about negative EV, before gambling my money.” Now at least you aren’t asking for magic cognitive powers.
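(For concreteness, a quick sanity check using standard casino figures, which are my own addition rather than anything in the analogy itself: a single-number bet on an American wheel pays 35 to 1 and wins with probability 1/38, so the expected value per unit staked is

$$\mathbb{E} = 35 \cdot \tfrac{1}{38} - 1 \cdot \tfrac{37}{38} = -\tfrac{2}{38} \approx -5.3\%,$$

negative regardless of which number you pick.)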
My impression is that various EAs respond to crises in a manner that kinda rhymes with saying “I wish I had bet 23”, or at best “I wish I had noticed this bet was negative EV”, and in particular does not rhyme with saying “my second-to-last chance to do better (as far as I currently recall) was the moment that I suppressed the guilt from sneaking out of the house”.
(I think this is also true of the general population, to be clear. Perhaps even moreso.)
I have a vague impression that various EAs perform self-flagellation, while making no visible attempt to trace down where, in their own mind, they made a misstep. (Not where they made a good step that turned out in this instance to have a bitter consequence, but where they made a wrong step of the general variety that they could realistically avoid in the future.)
(Though I haven’t gone digging up examples, and in lieu of examples, for all I know this impression is twisted by influence from the zeitgeist.)
My guess is that most EAs didn’t make mental missteps of any import.
And, of course, most folk on this forum aren’t rushing to self-flagellate. Lots of people who didn’t make any mistake aren’t saying anything about their non-mistakes, as seems entirely reasonable.
I think the scrupulous might be quick to object that, like, they had some flicker of unease about EA being over-invested in crypto, that they should have expounded upon. And so surely they, too, erred.
And, sure, they’d’ve gotten more coolness points if they’d joined the ranks of people who aired that concern in advance.
And there is, I think, a healthy chain of thought from there to the hypothesis that the community needs better mechanisms for incentivizing and aggregating distributed knowledge.
(For instance: some people did air that particular concern in advance, and it didn’t do much. There’s perhaps something to be said for the power that a thousand voices would have had when ten didn’t suffice, but an easier fix than finding 990 voices is probably finding some other way to successfully heed the 10, which requires distinguishing them from the background noise—and distinguishing them as something actionable—before it’s too late, and then routing the requisite action to the people who can do something about it. etc.)
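(To make the shape of that idea slightly more concrete, here is a deliberately toy sketch. Every name, weight, and threshold below is hypothetical and invented purely for illustration; it is not a proposal for how a real mechanism should score anything. The only point is that many weak, credibility-weighted concerns can jointly clear a bar that none of them clears alone, and can then be routed to whoever owns the topic.)

```python
from dataclasses import dataclass

@dataclass
class Concern:
    topic: str          # e.g. "crypto overexposure"
    reporter: str
    credibility: float  # 0..1: how much weight this reporter's judgment gets
    severity: float     # 0..1: how bad it would be if the concern is right

# Hypothetical routing table: who can actually act on each topic.
OWNERS = {"crypto overexposure": "funds-diversification-group"}

ESCALATION_THRESHOLD = 1.5  # pooled score needed before anyone is pinged

def pooled_score(concerns):
    # Crude pooling: sum of credibility-weighted severities, so ten weak
    # voices can add up to more than one strong one.
    return sum(c.credibility * c.severity for c in concerns)

def triage(concerns):
    # Group concerns by topic, then escalate any topic whose pooled
    # score crosses the threshold to its designated owner.
    by_topic = {}
    for c in concerns:
        by_topic.setdefault(c.topic, []).append(c)
    for topic, cs in by_topic.items():
        score = pooled_score(cs)
        if score >= ESCALATION_THRESHOLD:
            owner = OWNERS.get(topic, "unassigned")
            print(f"escalate {topic!r} (score {score:.2f}, {len(cs)} reports) -> {owner}")
        else:
            print(f"hold {topic!r} (score {score:.2f}, {len(cs)} reports)")

# Ten individually-ignorable reports jointly cross the threshold.
triage([Concern("crypto overexposure", f"person{i}", 0.6, 0.3) for i in range(10)])
```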
I hope that some version of this conversation is happening somewhere, and it seems vaguely plausible that there’s a variant happening behind closed doors at CEA or something.
I think that maybe a healthier form of community reflection would have gotten to a public and collaborative version of that discussion by now. Maybe we’ll still get there.
(I caveat, though, that it seems to me that many good things die from the weight of the policies they adopt in attempts to win the last war, with a particularly egregious example that springs to mind being the TSA. But that’s getting too much into the object-level weeds.)
(I also caveat that I in fact know a pair of modestly-high-net-worth EA friends who agreed, years ago, that the community was overexposed to crypto, and that at most one of them should be exposed to crypto. The timing of this thought is such that the one who took the non-crypto fork is now significantly less comparatively wealthy. This stuff is hard to get right in real life.)
(And I also caveat that I’m not advocating design-by-community-committee when it comes to community coordination mechanisms. I think that design-by-committee often fails. I also think there’s all sorts of reasons why public attempts to discuss such things can go off the rails. Trying to have smaller conversations, or in-person conversations, seems eminently reasonable to me.)
I think that another thing that’s been going on is that there are various rumors around that “EA leaders” knew something about all this in advance, and this has caused a variety of people to feel (justly) perturbed and uneasy.
Insofar as someone’s thinking is influenced by a person with status in their community, I think it’s fair to ask what they knew and when, as is relevant to the question of whether and how to trust them in the future.
And insofar as other people are operating the de-facto community coordination mechanisms, I think it’s also fair to ask what they knew and when, as is relevant to the question of how (as a community) to fix or change or add or replace some coordination mechanisms.
I don’t particularly have a sense that the public EA discourse around FTX stuff was headed in a healthy and productive direction.
It’s plausible to me that there are healthy and productive processes going on behind closed doors, among the people who operate the de-facto community coordination mechanisms.
Separately, it kinda feels to me like there’s this weird veil draped over everything, where there’s rumors that EA-leader-ish folk knew some stuff but nobody in that reference class is just, like, coming clean.
This post is, in part, an attempt to just pierce the damn veil (at least insofar as I personally can, as somebody who’s at least EA-leader-adjacent).
I can at least show some degree to which the rumors were true (I run an EA org, and Alameda did start out in the offices downstairs from ours, and I was privy to a bunch more data than others) versus false (I know of no suspicion that Sam was defrauding customers, nor have I heard any hint of any coverup).
One hope I have is that this will spark some sort of productive conversation.
For instance, my current hypothesis is that we’d do well to look for better community mechanisms for aggregating hints and acting on them. (Where I’m having trouble visualizing ways of doing it that don’t also get totally blindsided by the next crisis, when it turns out that the next war is not exactly the same as the last one. But this, again, is getting more into the object-level.)
Regardless of whether that theory is right, it’s at least easier to discuss in light of a bunch of the raw facts. Whether everybody was completely blindsided, vs. whether we had a bunch of hints that we failed to assemble, vs. whether there was a fraudulent conspiracy we tried to cover up, matters quite a bit as to how we should react!
It’s plausible to me that a big part of the reason the discussion hasn’t yet produced Nate!legible fruit is that it just wasn’t working with all that many details. This post is intended, in part, to supply some of those details.
(Though I of course also entertain the hypothesis that there are all sorts of forces pushing the conversation off the rails (such that this post won’t help much), and the hypothesis that the conversation is happening just fine behind closed doors somewhere (such that this post isn’t all that necessary).)
(And I note, again, that insofar as this post does help the convo, Rob Bensinger gets a share of the credit. I was happy to shelve this post indefinitely, and wouldn’t have dug it out of my drafts folder if he hadn’t argued that it had a chance of rerailing the conversation.)
Fwiw, for common knowledge (though I don’t know everything happening at CEA), so that other people can pick up the slack and not assume things are covered, or so that people can push me to change my prioritization, here’s what I see happening at CEA in regard to:
“finding some other way to successfully heed the 10, which requires distinguishing them from the background noise—and distinguishing them as something actionable—before it’s too late, and then routing the requisite action to the people who can do something about it”
I’ve been thinking some about it, mostly in the context of thinking that every time something shifts a lot in power or funding, that should potentially be an active trigger for us as a team to investigate / figure out if anything’s suss. We’re not often going to be the relevant subject matter experts, but we can find others, and ask a bunch of EAs what they personally know if they’re comfortable speaking.
It’s also been more salient to me since reading your comment!
Maybe due diligence on major EA donors that goes beyond normal financial checks shouldn’t actually be my team’s responsibility, in which case we should figure out whose it is.
The community health team as a whole is thinking about it, especially via the mechanism “how do we gather more of people’s vague fuzzy concerns that wouldn’t normally rise to the level of calling out / how do we make it easier to talk to us”, but we’re also at some point planning to do a reflection on what we missed / didn’t make happen that we wish we’d done, given our particular remit.
Nicole Ross, the team’s normal manager (who’s been doing board work for months), has been thinking a lot about what should change in EA generally, and plans to make that reflection and orienting a top priority as she comes back.
The org as a whole is definitely thinking about “what are the root causes that made this kind of failure happen and what do we do about that”, and one of my colleagues says they’re thinking about the particular mechanism you point to, but conversations I’ve been a part of have not emphasized it.
There’s a plan to think about structural and governance reform, which I would strongly assume would engage with the question of better/alternate whistleblowing structures as well as other related things, and would only end up not suggesting them if that seemed bad or other things were higher priority.
If my colleagues disagree, please say so! I think overall it’s correct to say this particular thread isn’t a top priority of any person or team right now. Perhaps it should be! But there are lots of threads, and I think this one is important but less urgent. I’d like to spend some time on it at some point, though. Happy to get on a call and chat about it.
FWIW, I would totally want to openly do a postmortem. Once the bankruptcy case is over, I’ll be pretty happy to publicly say what I knew at various points in time. But I’m currently holding back for legal reasons, and instead discussing it (as you said) “behind closed doors”. (Which is frustrating for everyone who would like to have a transparent public discussion, sorry about that. It’s also really frustrating for me!)
I think the truth is closest to “we had a bunch of hints that we failed to assemble”.
FWIW, I think such a postmortem should start w/ the manner in which Sam left JS (Jane Street). As far as I’m aware, that was the first sign of any sketchiness, several months before the 2018 Alameda walkout.
Some characteristics apparent at the time:
joining CEA as “director of development”, which looks like it was a ruse to keep JS from learning about his true intentions
hiring away young traders who were in JS’s pipeline at the time
I believe these were perfectly legal, but to me they look like the first signs that SBF was inclined to:
choose the (naive) utilitarian path over the virtuous one
risk destroying a common resource (good will / symbiotic relationship between JS and EA) for the sake of a potential prize
These were also the first opportunities I’m aware of that the rest of us had to push back and draw a harder line in favor of virtuous / common-sense ethical behavior.
If we want to analyze what we as a community did wrong, this to me looks like the first place to start.