Do you think EA’s self-reflection about this is at all productive, considering most people had even less information than you?
I don’t have terribly organized thoughts about this. (And I am still not paying all that much attention—I have much more patience for picking apart my own reasoning processes looking for ways to improve them, than I have for reading other people’s raw takes :-p)
But here are some unorganized and half-baked notes:
I appreciated various expressions of emotion. Especially when they came labeled as such.
I think there was also a bunch of other stuff going on in the undertones that I don’t have a good handle on yet, and that I’m not sure about my take on. Stuff like… various people implicitly shopping around proposals about how to readjust various EA-internal political forces, in light of the turmoil? But that’s not a great handle for it, and I’m not terribly articulate about it.
There’s a phenomenon where a gambler places their money on 32, and then the roulette wheel comes up 23, and they say “I’m such a fool; I should have bet 23”.
More useful would be to say “I’m such a fool; I should have noticed that the EV of this gamble is negative.” Now at least you aren’t asking for magic lottery powers.
Even more useful would be to say “I’m such a fool; I had three chances to notice that this bet was bad: when my partner was trying to explain EV to me; when I snuck out of the house and ignored a sense of guilt; and when I suppressed a qualm right before placing the bet. I should have paid attention in at least one of those cases and internalized the arguments about negative EV, before gambling my money.” Now at least you aren’t asking for magic cognitive powers.
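(To make the negative-EV point concrete, using standard roulette numbers rather than anything from the story: an American-style wheel has 38 pockets, and a single-number bet pays 35:1, so a $1 bet has expected value
$$\frac{1}{38}(+35) + \frac{37}{38}(-1) = -\frac{2}{38} \approx -\$0.053,$$
i.e., a loss of about five cents per dollar wagered, no matter which number you choose. The mistake is in playing at all, not in the choice of number.)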
My impression is that various EAs respond to crises in a manner that kinda rhymes with saying “I wish I had bet 23”, or at best “I wish I had noticed this bet was negative EV”, and in particular does not rhyme with saying “my second-to-last chance to do better (as far as I currently recall) was the moment that I suppressed the guilt from sneaking out of the house”.
(I think this is also true of the general population, to be clear. Perhaps even more so.)
I have a vague impression that various EAs perform self-flagellation, while making no visible attempt to trace down where, in their own mind, they made a misstep. (Not where they made a good step that turned out in this instance to have a bitter consequence, but where they made a wrong step of the general variety that they could realistically avoid in the future.)
(Though I haven’t gone digging up examples, and in lieu of examples, for all I know this impression is twisted by influence from the zeitgeist.)
My guess is that most EAs didn’t make mental missteps of any import.
And, of course, most folk on this forum aren’t rushing to self-flagellate. Lots of people who didn’t make any mistake, aren’t saying anything about their non-mistakes, as seems entirely reasonable.
I think the scrupulous might be quick to object that, like, they had some flicker of unease about EA being over-invested in crypto that they should have expounded upon. And so surely they, too, erred.
And, sure, they’d’ve gotten more coolness points if they’d joined the ranks of people who aired that concern in advance.
And there is, I think, a healthy chain of thought from there to the hypothesis that the community needs better mechanisms for incentivizing and aggregating distributed knowledge.
(For instance: some people did air that particular concern in advance, and it didn’t do much. There’s perhaps something to be said for the power that a thousand voices would have had when ten didn’t suffice, but an easier fix than finding 990 more voices is probably finding some other way to successfully heed the 10. That requires distinguishing them from the background noise, and distinguishing them as something actionable, before it’s too late, and then routing the requisite action to the people who can do something about it.)
I hope that some version of this conversation is happening somewhere, and it seems vaguely plausible that there’s a variant happening behind closed doors at CEA or something.
I think that maybe a healthier form of community reflection would have gotten to a public and collaborative version of that discussion by now. Maybe we’ll still get there.
(I caveat, though, that it seems to me that many good things die from the weight of the policies they adopt in attempts to win the last war, with a particularly egregious example that springs to mind being the TSA. But that’s getting too much into the object-level weeds.)
(I also caveat that I in fact know a pair of modestly-high-net-worth EA friends who agreed, years ago, that the community was overexposed to crypto, and that at most one of them should be exposed to crypto. The timing of that agreement was such that the one who took the non-crypto fork is now significantly less wealthy by comparison. This stuff is hard to get right in real life.)
(And I also caveat that I’m not advocating design-by-community-committee when it comes to community coordination mechanisms. I think that design-by-committee often fails. I also think there’s all sorts of reasons why public attempts to discuss such things can go off the rails. Trying to have smaller conversations, or in-person conversations, seems eminently reasonable to me.)
I think that another thing that’s been going on is that there are various rumors around that “EA leaders” knew something about all this in advance, and this has caused a variety of people to feel (justly) perturbed and uneasy.
Insofar as someone’s thinking is influenced by a person with status in their community, I think it’s fair to ask what they knew and when, as is relevant to the question of whether and how to trust them in the future.
And insofar as other people are operating the de-facto community coordination mechanisms, I think it’s also fair to ask what they knew and when, as is relevant to the question of how (as a community) to fix or change or add or replace some coordination mechanisms.
I don’t particularly have a sense that the public EA discourse around FTX stuff was headed in a healthy and productive direction.
It’s plausible to me that there are healthy and productive processes going on behind closed doors, among the people who operate the de-facto community coordination mechanisms.
Separately, it kinda feels to me like there’s this weird veil draped over everything, where there are rumors that EA-leader-ish folk knew some stuff, but nobody in that reference class is just, like, coming clean.
This post is, in part, an attempt to just pierce the damn veil (at least insofar as I personally can, as somebody who’s at least EA-leader-adjacent).
I can at least show some degree to which the rumors were true (I run an EA org, and Alameda did start out in the offices downstairs from ours, and I was privy to a bunch more data than others) versus false (I know of no suspicion that Sam was defrauding customers, nor have I heard any hint of any coverup).
One hope I have is that this will spark some sort of productive conversation.
For instance, my current hypothesis is that we’d do well to look for better community mechanisms for aggregating hints and acting on them. (Where I’m having trouble visualizing ways of doing it that don’t also get totally blindsided by the next crisis, when it turns out that the next war is not exactly the same as the last one. But this, again, is getting more into the object-level.)
Regardless of whether that theory is right, it’s at least easier to discuss in light of a bunch of the raw facts. Whether everybody was completely blindsided, vs. whether we had a bunch of hints that we failed to assemble, vs. whether there was a fraudulent conspiracy we tried to cover up, matters quite a bit as to how we should react!
It’s plausible to me that a big part of the reason the discussion hasn’t yet produced Nate!legible fruit is that it just wasn’t working with all that many details. This post is intended in part as a contribution of more such details.
(Though I of course also entertain the hypothesis that there are all sorts of forces pushing the conversation off the rails (such that this post won’t help much), and the hypothesis that the conversation is happening just fine behind closed doors somewhere (such that this post isn’t all that necessary).)
(And I note, again, that insofar as this post does help the convo, Rob Bensinger gets a share of the credit. I was happy to shelve this post indefinitely, and wouldn’t have dug it out of my drafts folder if he hadn’t argued that it had a chance of rerailing the conversation.)
(I agree; thanks for the nuance)