The quality of reasoning in the text seems somewhat troublesome. Using two paragraphs as an example:
On Halloween this past year, I was hanging out with a few EAs. Half in jest, someone declared that the best EA Halloween costume would clearly be a crypto-crash — and everyone laughed wholeheartedly. Most of them didn’t know what they were dealing with or what was coming. I often call this epistemic risk: the risk that stems from ignorance and obliviousness, the catastrophe that could have been avoided, the damage that could have been abated, by simply knowing more. Epistemic risks contribute ubiquitously to our lives: We risk missing the bus if we don’t know the time, we risk infecting granny if we don’t know we carry a virus. Epistemic risk is why we fight coordinated disinformation campaigns and is the reason countries spy on each other.
Still, it is a bit ironic for EAs to have chosen ignorance over due diligence. Here are people who (smugly at times) advocated for precaution and preparedness, who made it their obsession to think about tail risks, and who doggedly try to predict the future with mathematical precision. And yet, here they were, sharing a bed with a gambler against whom it was apparently easy to find allegations of shady conduct. The affiliation was a gamble that ended up putting their beloved brand and philosophy at risk of extinction.
It appears that a chunk of Zoe’s epistemic risk bears a striking resemblance to financial risk. For instance, if one simply knew more about tomorrow’s stock prices, they could sidestep all stock market losses and potentially become stupendously rich.
This highlights the fact that gaining knowledge in certain domains can be a difficult task, with big hedge funds splashing billions and hiring some of the brightest minds just to gain a slight edge in knowing a bit more about asset prices. The same difficulty extends to knowing which companies may go belly up or engage in fraud.
Acquiring more knowledge comes at a cost. Processing knowledge comes at a cost. Choosing ignorance is mostly not a result of recklessness or EA institutional design but a practical choice given the resources required to process information. It’s actually rational for everyone to ignore most information most of the time (this is standard economics; see rational inattention and the extensive literature on the topic).
One real question in this space is whether EAs have allocated their attention wisely. The answer seems to be “mostly yes.” In the case of FTX, heavyweights like Temasek, Sequoia Capital, and SoftBank, with billions on the line, did their due diligence but still missed what was happening. Expecting EAs to be better evaluators of FTX’s health than established funds is somewhat odd. EAs, like everyone else, face the challenge of allocating attention, and their expertise lies in “using money for good” rather than “evaluating the health of big financial institutions”. For the typical FTX grant recipient to assume they need to be smarter than Sequoia or SoftBank about FTX would likely not be a sound decision.
Two things:
1. Sequoia et al. isn’t a good benchmark –
(i) those funds were doing diligence in a very hot investing environment where there was a substantial tradeoff between depth of diligence and likelihood of closing the deal. Because EAs largely engaged FTX on the philanthropic side, they didn’t face this pressure.
(ii) SBF was inspired and mentored by prominent EAs, and FTX was incubated by EA over the course of many years. So EAs had built relationships with FTX staff much deeper than what funds would have been able to establish over the course of a months-long diligence process.
2. The entire EA project is premised on the idea that it can do better at figuring things out than legacy institutions.
a. Sequoia led FTX’s Series B round in Jul 2021 and had notably more time to notice any irregularities than grant recipients.
b. I would expect the funds to have much better expertise in something like “evaluating the financial health of a company”.
Also, it seems you are somewhat shifting the goalposts: Zoe’s paragraph opens with “On Halloween this past year, I was hanging out with a few EAs.” It is reasonable to assume the reader will interpret this as hanging out with basically random/typical EAs, and the argument should hold for those people. Your argument would work better if she had been hanging out with “EAs working at FTX” or “EAs advising SBF”, who could probably have done better than the funds at evaluating things like how the specific people involved operate.
The EA project is clearly not premised on the idea that it should, for example, “figure out stuff like stock prices better than legacy institutions”. Quite the contrary: the claim is that while humanity actually invests a decent amount of competent effort in stocks, it comparatively neglects problems like poverty or x-risk.
It seems like we’re talking past each other here, in part because as you note we’re referring to different EA subpopulations:
1. Elite EAs who mentored SBF & incubated FTX
2. Random/typical EAs who Cremer would hang out with at parties
3. EA grant recipients
I don’t really know who knew what when; most of my critical feeling is directed at folks in category (1). Out of everyone we’ve mentioned here (EA or not), they had the most exposure to and knowledge about (or at least opportunity to learn about) SBF & FTX’s operations.
I think we should expect elite EAs to have done better than Sequoia et al. at noticing red flags (e.g. the reports of SBF being shitty at Alameda in 2017; e.g. no ring-fence around money earmarked for the Future Fund) and acting on what they noticed.
I think your comment would’ve been a lot stronger if you had left it at 1. Your second point seems a bit snarky.
I don’t think snark cuts against quality, and we come from a long lineage of it.
Which quality? I really liked the first part of your comment and even weakly upvoted it on both votes for that reason, but I feel like the second point has no substance. (Longtermist EA is about doing things that existing institutions are neglecting, not doing the work of existing institutions better.)
I read Cremer as gesturing in these passages to the point Tyler Cowen made here (a):
Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.
I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated). When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.
If EA is going to do some lesson-taking, I would not want this point to be neglected.
I previously addressed this here.
Thanks. I think Cowen’s point is a mix of your (a) & (b).
I think this mixture is concerning and should prompt reflection about some foundational issues.