I think this is excellent criticism!
Would he have been allowed to attend if he wanted to? (I think you really need to have a process to filter out people like him.)
I agree it’s not clear there’s anything useful to be done, which is why I asked for a good plan.
If someone has a good plan for how to make good/useful things happen here, but requires funding for it, please contact me.
I recall feeling most worried about hacks resulting in loss of customer funds, including funds not lent out for margin trading. I was also worried about risky investments or trades depleting cash reserves that could otherwise be used to make up for hacking losses.
I don’t think I ever generated the thought “customer monies need to be segregated, and they might not be”, primarily because at the time I wasn’t familiar with financial regulations.
E.g., in 2023 I ran across an article written in ~2018 that discussed an SIPC payout in a case where a broker commingled customer funds with an associated trading firm. If I had read that article in 2021, I would probably have suspected FTX of doing this.
Based on some of the follow-up questions, I decided to share this specific example of my thinking at the time (which didn’t prevent me from losing some of my savings in the bankruptcy):
(See my edit)
A 10–15% annual risk of startup failure is not alarming, but a comparable risk of it losing customer funds is. Your comment prompted me to actually check my prediction logs, and I made the following edit to my original comment:
predicting a 10% annual risk of FTX collapsing with ~~FTX investors and the Future Fund (though not customers)~~ FTX investors, the Future Fund, and possibly customers losing all of their money, [edit: I checked my prediction logs and I actually did predict a 10% annual risk of loss of customer funds in November 2021, though I lowered that to 5% in March 2022. Note that I predicted hacks and investment losses, but not fraud.]
I don’t think so, because:
A 10–15% annual risk was predicted by a bunch of people up until late 2021, but I’m not aware of anyone believing that in late 2022, and Will points out that Metaculus was predicting ~1.3% at the time. I personally updated downwards on the risk because 1) crypto markets crashed but FTX didn’t, which seemed like a positive sign, 2) Sequoia invested, and 3) they got a GAAP audit.
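A rough aside on the arithmetic (my own back-of-the-envelope, assuming the annual risk is constant and roughly independent across years): an annual risk $p$ compounds, so the probability of at least one collapse within $n$ years is

$$P(\text{collapse within } n \text{ years}) = 1 - (1 - p)^n,$$

e.g. $p = 0.10$ gives $1 - 0.9^3 \approx 27\%$ over three years. This also means annual-risk forecasts and a point-in-time probability like the Metaculus figure aren’t directly comparable without knowing the time window the latter covers.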
I don’t think there was a great implementation of the trade. Shorting FTT on Binance was probably a decent way to do it, but holding funds on Binance for that purpose is risky and costly in itself.
That said, I’m aware that some people (not including myself) closely monitored the balance sheet issue and subsequent FTT liquidations, and withdrew their full balances a couple days before the collapse.
I agree it’s probably a pretty bad idea, but I don’t think this supports your conclusion that “the EA community may have had a hard time seeing through tech hype”.
Going even further on legibly acting in accordance with common-sense virtues than one would otherwise, because onlookers will be more sceptical of people associated with EA than they were before.
Here’s an analogy I’ve found helpful. Suppose it’s a 30mph zone, where almost everyone in fact drives at 35mph. If you’re an EA, how fast should you drive? Maybe before it was ok to go at 35, in line with prevailing norms. Now I think we should go at 30.
Wanting to push back against this a little bit:
The big issue here is that SBF was recklessly racing ahead at 60mph, and EAs who saw that didn’t prevent him from doing so. So I think the main lesson is that EAs should learn to become strict enforcers of a 35mph cap among their collaborators, which requires courage and skill in speaking out, rather than becoming maximally law-abiding themselves.
The vast majority of EAs were/are reasonably law-abiding and careful (going at 35mph) and it seems perfectly fine for them to continue the same way. Extra trustworthiness signalling is helpful insofar as the world distrusts EAs due to what happened at FTX, but this effect is probably not huge.
EAs will get less done, be worse collaborators, and lose out on entrepreneurial talent if they become overly cautious. A non-zero level of naughtiness is often desirable, though this is highly domain-dependent.
From personal experience: I thought the community health team would be responsible for this, and approached them about some concerns I had, but they were under-resourced in several ways.
I’d be interested in specific scenarios or bad outcomes that we may have averted. E.g., much more media reporting on the EA–FTX association resulting in significantly greater brand damage? Prompting the legal system into investigating potential EA involvement in the FTX fraud, consuming enormous amounts of further staff time despite not finding anything? Something else? I’m still not sure what example issues we were protecting against.
I broadly agree with the picture and it matches my perception.
That said, I’m also aware of specific people who held significant reservations about SBF and FTX throughout the end of 2021 (though perhaps not in 2022 anymore), based on information that was distinct from the 2018 disputes. This involved things like:
- predicting a 10% annual risk of FTX collapsing with ~~FTX investors and the Future Fund (though not customers)~~ FTX investors, the Future Fund, and possibly customers losing all of their money, [edit: I checked my prediction logs and I actually did predict a 10% annual risk of loss of customer funds in November 2021, though I lowered that to 5% in March 2022. Note that I predicted hacks and investment losses, but not fraud.]
- recommending in favor of ‘Future Fund’ and against ‘FTX Future Fund’ or ‘FTX Foundation’ branding, and against further affiliation with SBF,
- warnings that FTX was spending its US dollar assets recklessly, including propping up the price of its own tokens by purchasing large amounts of them on open markets (separate from the official buy & burns),
- concerns about Sam continuing to employ very risky and reckless business practices throughout 2021.
I think several people had pieces of the puzzle but failed to put them together or realize the significance of it all. E.g. I told a specific person about all of the above issues, but they didn’t have a ‘holy shit’ reaction, and when I later checked with them they had forgotten most of the information I had shared with them.
I also tried to make several further conversations about these concerns happen, but it was pretty hard because people were often busy or uninterested, or worried about the significant risks of sharing sensitive information. Also, with the benefit of hindsight, I clearly didn’t try hard enough.
I also think it was (and still is) pretty unclear what, if anything, should’ve been done at the time, so it’s unclear how action-relevant any of this would’ve been.
It’s possible that most of this didn’t reach Will (perhaps partly because many, including myself, perceived him as more of an SBF supporter). I certainly don’t think these worries were as widely disseminated as they should’ve been.
I disagree-voted because I have the impression that there’s a camp of people who left Alameda that has been misleading in their public anti-SBF statements, and has a separate track record of being untrustworthy.
So, given that background, I think it’s unlikely that Will threatened someone in a strong sense of the word, and possible that Bouscal or MacAulay might be misleading, though I haven’t tried to get to the bottom of it.
I wish this post’s summary was clearer on what, exactly, readers could/should do to help with vote pairing. I think this could be valuable during the 2024 election!
Vote pairing seems to be more cost-effective than making calls, going door to door, or other standard forms of changing election outcomes, provided you are in the very special circumstances which make it effective.
What are those circumstances?
Tens of thousands of people have participated in swaps
Do you have a source for this? How many of those were in swing states?
I worry ‘wholesomeness’ overemphasizes doing what’s comfortable and convenient and feels good, rather than what makes the world better:
- As mentioned, wholesomeness could stifle visionaries, and this downside wasn’t discussed further.
- Fighting to abolish slavery wasn’t a particularly wholesome act; in fact, it created a lot of unwholesome conflict. Protests aren’t wholesome. I expect a lot of important future work to look and feel unwholesome. (I’m aware you could fit it into the framework somehow, but it’s an awkward fit.)
- I worry it’ll make EA focus even more on creating a cushy environment for its own members (expanding its parental leave policy and mental health benefits for the third time, and running wonderful team retreats in fancy retreat centers), rather than on getting important things done in the world.
In my opinion, things like virtue and integrity do a better job of addressing the naïve consequentialist failure modes that wholesomeness is supposed to address.
If someone else had written my comment, I would ask myself how good that person’s manipulation-detection skills are. If I judged them to be strong, I would treat the comment as significant evidence, and think it more likely that Owen has a flaw that he healed, and less likely that he’s a manipulator. If I judged them to be weak (or I simply didn’t have enough information about the person writing the comment), I would not update.
If there are a lot of upvotes on my comment, that may indicate that readers are naïvely trusting me and making an error, or that they have good reason to trust my judgment, or that they have independently reached similar conclusions. I think it’s most likely a combination of all three.
Nitpick:
The full draft, including all the background research and forecast drafts, was shared with Gary Marcus. So it would be more accurate to say he “only read bits of it”.