I’m experimenting with “norms-pledges” to help reduce forum anxiety. Maybe it could be a good social technology IDK. Click [Show More] to read them all:
🕊 Fresh Slate After Disagreement Pledge: I hereby pledge that if we disagree on the forum, I will not hold it against you. (1) I will try not to allow a disagreement to meaningfully impact how I treat you in further discourse, should we meet in another EA Forum thread, on another website or virtual space, or IRL. I know that if we disagree, it doesn’t necessarily mean we will disagree on other topics, nor does it necessarily imply we are on opposing teams. We are most likely on the same team in that we both wish to have the most good done possible and are working in service of finding out what that means. (2) Relatedly, I pledge not to claim to know what you believe in the future; I can only confidently claim to know what you wrote or believed at a given time, and I can say what I think you believe given that. I know that people change their minds, and it may be you or me who does so, so I understand that the disagreement may not even still stand and is not necessarily set in stone.
👨👩👧👦 No Gatekeeping Pledge: I hereby pledge that if I am seeking a collaborator, providing an opportunity, or doing hiring or anything akin to hiring, and you would otherwise be a top candidate if not for the following, I will try not to gatekeep: (1) If an opinion you’ve shared or a norm you’ve broken (on the EA Forum or elsewhere) is relevant in a potentially negative way to our collaboration, I will ask you about it to gain clarity. I will not assume that such an incident means you will not be suitable for a role. I will especially try hard not to make assumptions about your suitability based on old or isolated incidents, or if your digital footprint is too small to give a good picture of who you are and how you think about things. (2) I will not penalize someone for being a social or professional newcomer or being otherwise unknown to me or my colleagues. If the person is otherwise a top candidate, I will do my due diligence to determine cultural fit separate from that.
🤔 Rationalist Discourse Pledge: (1) I hereby pledge to try to uphold rationalist discourse norms as presented here and here, and comedically summed up here.
🦸♀️ Preferring My Primary Account Pledge: (1) I hereby pledge that this is my main EA Forum account. I will never use multiple accounts to manipulate the system, such as by casting multiple votes or stating similar opinions with different accounts. (2) I also pledge that, although I can’t be sure what’s to come, I strongly intend not to use an anonymous or different account (alt or sockpuppet), or any account other than this, my primary account. I pledge that I am willing to take on some reputational risk on this, my primary account, in service of putting truth, transparency, integrity, and a complete narrative over my own anxiety, and to give ideas I think are worth advocating the best chance at adoption. Therefore I pledge that I will not use an alternate account out of general anxiety around personal or professional retribution or losing clout. CAVEAT 1: I reserve the right to use an alt account in cases where *specific* retribution or other danger can be expected in my particular instance. As an example: I reserve the right to use an alt account out of concern about riling up a suspected stalker, specific known bad-faith actor, specific known slanderer, etc. CAVEAT 2: I also reserve the right to use an alt account for the benefit of others. Example: in cases where revealing my own identity would reveal the identity or betray the privacy of some other party I am discussing.
🙇♂️ Humility in Pledging Pledge: I hereby pledge that I take these pledges for my own self-improvement and for altruistic reasons. It’s okay to disagree that pledges are useful and important for you. (1) I don’t expect that others should necessarily take a norms pledge. I believe the pledges only work if people take them after deep consideration, and I don’t expect I can know all the considerations for others’ situations. Therefore I understand there may be situations in which it is actually right for a user to avoid taking a pledge. Therefore I will not judge others for not having taken a pledge, including that I will not dismiss others’ character if I see other accounts without a pledge. (2) Additionally, I don’t presume that others not taking a pledge means they would even necessarily act differently than that pledge would imply. I don’t assume their intentions are even different from mine. Perhaps a person is new to the idea or just trying to protect their energy by not opening themselves to criticism. (3) I won’t automatically dismiss a user’s reasoning if I see the user violating norms pledges I’ve made. I will still give their claims a chance to stand on their own merits. (4) If you see me violating a pledge I’ve taken, I will always appreciate it if you bring it up to me.
Ivy Mazzola
That makes sense. It is really shocking. I agree on blaming regulators [although I don’t give others a pass].
[I think a section on regulations def belongs from the POV of improving world models too. Before I added my long thinking-out-loud footnote, I didn’t realize just how much it all points at regulators as the original permissiveness break.]
Yeah my thoughts exactly. Or, it doesn’t send a big signal to non-finance people. But like, I think it should send a signal to people in finance, eg the auditors. That FTX should have been able to afford a different service and yet didn’t. Or maybe, idk, should have revealed their internals for different certification (better than GAAP cert idk, I know nothing). I just think it should have raised flags for the auditor if someone is enlisting you for purposes of increasing trust but is clearly not doing their damnedest, according to their abilities, to ensure that trust is accurate. US consumers can’t be expected to know the difference, but the auditor should. I think.
Yeah, turns out there was not only sloppiness here though. Like Enron, things were labelled to look much better than they really were. Like, stocks of coins held in such volumes, and accounted for at their current exchange price, that it looked like billions in net worth, IIRC. But of course if they had sold the coins, the value of the coins would have plummeted far before they got through their sale order, so there is no way they could have expected to gather that much USD for the coins they held. This might be normal for stock valuation? But my impression from the tone is that it was handled differently/worse. Maybe there were other things too, tbh I didn’t look very closely. Plus there was SBF’s backdoor (that was real, right?) and I’d bet other weird money movements I assume he has noticed by now.
Glad you liked it. Perhaps I wasn’t clear in purpose though. My point was not to talk about payouts, but to explain how things like this can happen. Because it “violates model of how the world works”. [Edit: I cut stuff from here and expanded on it in footnote above]
I’m just saying there are systemic reasons why the fiasco got as far as it did.
[Edit: Ah this is one of those times I might be being dramatic, but I may as well say] I’m a bit sad to hear “fishing expedition” in response to this. But [so many EAs didn’t feel similarly] about EA leaders taking some of the blame. I didn’t want to bring EA into this particular comment, but gee. The actors I’ve named here had like 1000x the likelihood of knowing what was going on and being negligent or otherwise at fault somehow than non-Alameda EAs did. It’s a bit flippant to call it a fishing expedition to go after others, if it is worth doing, and when our community has just spent so much time talking about EA fault. *dies a bit inside on behalf of EA* Tbh they seem like reasonable suits to me in general, and I hope they disincentivize something like this from happening again :/
Eh, I still agree with Ben here. He said they are the worst financial documents he has ever seen, including Enron. And given he’s the guy who steered Enron after bankruptcy, that’s a concrete claim. And as the documents will surely all be submitted, he’d be putting his neck on the line to say such things if it weren’t clearly like the most egregious thing you could imagine. shrugs :/
Somehow they were doing this while having audited financials, passing due diligence from major investors, etc.? And Sam was supposedly a great fundraiser but was circulating a balance sheet with an $8B line item for “hidden poorly labeled account”? I would find it pretty helpful for someone to explain what actually happened here, because this violates my models of how the world works.
Mine too, so I went digging. All in all, one can argue (and lawyers are arguing) that there were a lot of enablers. Certainly, people actively trying to dodge notice while they violate standard models is a predictable (but not necessarily pinpointable at the time) way our models end up failing us.
For talk about audits specifically, there’s this and this. Essentially, “the firm’s auditors weren’t tapped to look into internal controls at FTX, and auditing the internal workings of a company isn’t a [legal] requisite for private corporations.” Of course plenty of people feel that doesn’t hold water, and a lawsuit by FTX customers is pending.
[Edit: A suit may be reasonable because audit firms know well they are commissioned to literally prove financial safety, and the audit was used in FTX’s self-promotion. That FTX was still very unsafe might prove there is negligence in the firm’s business model. If I try to make this fit my model of the world, I get a thought like “corporate greed incentivizes taking on clients that want useless/partial audits that end up being no better than shams, and you look the other way as you assist them in their likely sham”. I reflect more on this in the footnote, which you don’t have to read.[1]]

For talk about investors, there is this Feb 23rd piece on a major lawsuit against a few different VC firms and banks who helped out FTX. That piece is frickin nuts just by virtue of the amount of info they fit into a mere 5 paragraphs. Here’s one quote to inspire a read:
The suit alleges that some of world’s largest venture capital firms, including Sequoia Capital, SoftBank Group, and Thoma Bravo, learned through due diligence that FTX was a fraudulent scheme, but nevertheless incited the Fraud with billions in necessary capital, provided guidance and other support critical to the Fraud, and publicly promoted Bankman-Fried and FTX to keep the Fraud concealed until FTX could go public or cash out in a private sale
And of course there was internal assistance. I’d bet there were both (1) employees who knew what they were doing and (2) employees just following direction without realizing they were committing crimes or aiding in them.
When it comes to improving models of how the world works, Zvi’s piece had some good discussion of this (and much more!). It’s long, but very worth reading even months later. Here’s a section on VC:
If you are a VC investor or taking VC money, the optimal amount of fraud, from your perspective, is not zero. You are more excited to invest if you suspect a fraud, those kinds of founders make good and make dreams real for you. So what if sometimes it all blows up? This is a game of hits for you, and you are much more worried about being asked why you didn’t invest in FTX than you should worry about people asking why you were. This goes double for crypto.
Notice this VC ready to invest in SBF again if he asked. Which, from a pure EV perspective, sure why not, same as they invested in Adam Neumann.
Therefore, I do not view ‘the VCs didn’t catch this’ as much of a justification. They are not supposed to catch it. It is not their job to catch it. I mean, they are supposed to do some things to catch this level of fraud, demanding voting shares and a company board in which they have a seat and doing proper audits, but they got none of that, because [SBF] needed to not allow it and the pitch was so good otherwise they went along (or, they took this to mean ‘this is a fraud’ and invested anyway thinking they were not the sucker, also plausible). [If] SBF said ‘you can’t check for fraud, invest or don’t’ then they are doing what they do, as much as we might hate that. Better to make that common knowledge. [emphasis mine]
The institutional traders are different. They face a different risk profile. If the exchange blows up in 10% of years that is a real drag on returns, whereas a VC expects 80% or more of their investments to go to zero, fraud or otherwise. Why did they trade?
Some of them Did Not Do the Research, no doubt. Others likely decided it was worth the risk. If FTX is an easy place to make money, because trading is very good against Alameda and generally they treat customers great, and they pay interest on USD and BTC, it is not difficult to imagine a decision that the blowup risk is worth taking, at least for some portion of one’s bankroll. Or one can say that if FTX goes down everything is terrible anyway, it’s already systemic risk, so assume it won’t fail.
Another play is to hedge the risk with a short position elsewhere, and put your leveraged longs at FTX since if FTX goes broke you would have gotten liquidated at Binance anyway.
[Note: There could also be a section about lack of regulation here. As commented on here. From a “how the world works” POV, it is the mother of all permissiveness that allowed the rest, especially the useless audits. Global heuristic = Pretty nuts how much can go wrong with a regulation gap. But FTX also maybe would have found a way around/already did break regulations, so idk]
Anyway, just thought that all might interest you. Thank you for sharing your insights. Really useful stuff.
[1]
Interestingly, the company has now stopped offering crypto audits due to “changing market conditions”. And I was going to say “good of them to notice that the bare legal minimum is not enough for one of the most volatile, least regulated assets the world has ever seen, eyeroll.” But actually that auditor firm is starting up crypto audits again under a different name, sigh. It appears that they only stopped offering crypto audits because some non-crypto clients of the audit firm put pressure on them to do so. Those non-crypto clients felt that the auditor being known to audit crypto might reduce the trust their own potential clients/users end up having in their commissioned audits. So it appears that, to make sure they can catch the most non-crypto and crypto audit contracts without one segment compromising the other, the auditors are just splitting them between two companies. I didn’t see any mention of raising the crypto audit standards.
Additionally, through this process, I found mention a couple of times that there are other auditing firms who do a much better job (“the Big 4”), and it is sort-of business wisdom to downgrade trust in audits that aren’t from them. I was very surprised at this! There are auditors that business-people know to trust less? Then why do they exist? I guess because most consumers don’t know enough to downgrade trust? One has to wonder if the Big 4 would have put their stamp of approval on FTX, or even on many of this audit firm’s non-FTX clients. Probably not, right? And it makes you wonder why companies would go to these known-worse auditors, especially if they can afford the best auditing like FTX should have been able to, if they don’t have something to hide. If the goal is consumer trust, and a company doesn’t have something to hide, that company should go to the best auditors, those that in-the-know consumers trust the most, right? And surely a “worse” audit firm like this one would realize that is how the selection-effect cookie crumbles. So lesser-known auditors should expect to get a slew of clients who are nowhere near as reputable as those who hire the Big 4 audit firms.
Sooo I think it’s easy to argue that the minor audit firm(s) are negligent or intentionally turning a blind eye somewhere in their business model. But it’s more a problem of incentives of corporate America and low federal standards for private corporations allowing those incentives to play out in auditing contracts, than it is a problem of a single actor. Even though the single actor/auditor might know damn well that they will end up giving a scammer cover eventually, maybe even frequently among their really wealthy clients like FTX who could have afforded better-known services. It appears to be part of the business model, chronic not acute. And as long as the private corporation and auditor are following the law in reviewing the bare minimum, oh well?
I think that most of your comment is reasonable, so I’m only going to respond to the second-to-last paragraph. Because that is the bit that critiques my comment, my response is going to sound defensive. But I agree with everything else, and I also think what went on with my original comment leads back into what I see as the actual crux, so it’s worth me saying what’s on my mind:
But I do think that confidently asserting that the only thing Jacy did was “ask some people out over FB messenger” is likely inaccurate, and it is important to track that. It might be accurate to say “the only thing he has been publicly accused of is asking people out” or “the only thing he has admitted to is asking people out” or “No one has provided any proof he did anything beyond ask people out”, but none of those are the same as “the only thing he did is ask people out”.
I have long ago edited the original comment where I wrote that. I didn’t change that particular wording because I wrote the original on mobile (which I deeply regretted and am now incredibly averse to) so I didn’t have fancy strikethrough edit features, even when I tried on PC (I didn’t realize it worked like that). Without strikethrough ability, I thought it would be epistemically dishonest to just edit that sentence. Instead I promptly, right after that sentence, told people to make their conclusions elsewhere in a way that I feel clearly tells readers to take that part with a grain of salt. All in all I edited that comment ~5 times. I don’t have the spoons to re-edit again given I think it’s fine.
More importantly, the transparency of info is obviously a problem if someone like me, who usually tries to be pretty airtight on EA Forum things, had to edit so much going back and forth from “here’s a thing” to “maybe he did worse” to “maybe he did less” to “maybe he did worse” again. That’s not okay. And now I feel like I’m getting punished for trying to do what no other outsider of the case was willing to try to do (that I saw)… figure out the ground truth [and what it means for EA behavior] publicly.

Honestly, trying to figure out what happened regarding Jacy was a heckin nightmare, with people coming out of left field about it after each correction I tried to make, including over DM (again, not publicly), and giving multiple comments to comb through on multiple other posts, each with their own attached threads. It’s good people chimed in sharing the existence of different pieces of discussion/info that I’d guess hardly any single person knew every single bit of, but damn, I have to be honest that I’m now really frustrated about what a nightmare it was. I was trying to do a public service and it was a huge waste of time with little to glean for certain. [And some of the more interesting bits are not public and I feel very, very weird about that, even saying that I now know of (know of, not know for certain) stuff others don’t and can’t find out about (I can’t even doublecheck myself).]
Was that always the expected outcome just lurking underneath the surface? If so, then why would people judge SFF? I’m no longer surprised SFF just granted, tbh. They saved themselves the time I wasted. I no longer expect any single person to get it right, and I see that as a problem worth talking about, because it will lead to either (1) actually-abusive people getting involved sooner, which is a safety risk, or (2) appearing-abusive-but-actually-non-abusive people getting involved sooner, which is a PR risk and comfort risk.
I apologize for fucking up. I am now frustrated at myself for even trying. But if people other than me care about my messed up original comment they need to look at the systems because other people will fuck up as I did. It just won’t be public til after the decision is made, if ever. And you won’t get to correct them as they make their fumbles along the way.
Hm I wouldn’t have thought of your second paragraph. I’m not sure I agree that was an intention, but interesting.
IDK, CEA did do an investigation in 2019 into CEA/Alameda relations, according to the news article, so I’m not sure (yet!) they behaved unreasonably here given the nature of the complaints made. (I’m also not sure they behaved reasonably). Somebody tried a bit to actually figure things out at least. And I prefer that than just saying “Hey, SBF, check out these rumors. Rather than try to figure out which side is right, we will do some sort of average thing where we take your money and help you out in some ways but not others, and possibly leave a lot of value on the table or keep a lot of risk, not sure which, but oh well”. That doesn’t seem like the optimal outcome for either possible situation.
I’m reminded of split and commit. If I see something that looks like a 10-ft alligator on my property, but it might also be a log, is it an optimal strategy to continue the yardwork anyway but give it a wide berth? Or am I going to investigate further to see whether it is an alligator (and if so call animal control and have it removed) or if it is a log (and if so I can mow my grass right up to the base, even take a rest on it if I want). It’s not a perfect analogy but you get the idea. [1]
Anyway, it looks possible that people thought about this to the extent it seemed reasonable at the time, given the scale of the complaints made (which the article admits never implied anything like what happened or even implied fraud for sure—perhaps lazy accounting that they’d hopefully grow out of as they professionalized, yes). They came to the wrong conclusions, but I might be okay with this tbh. (but we’ll see how reasonable the conclusion was)
Were the claims about inappropriate sexual relationships either by the women themselves, or at least about nonconsensual relationships? Without commenting on how appropriate the relationships were otherwise, I’m not in favor of consensual relationships (consensual meaning the women themselves would say they were consensual) being lumped in with episodes of harassment and the #metoo movement. You can’t really create a #metoo moment for someone else.

[1]
[Edit: Maybe if I try to make the analogy better: maybe 2019 CEA investigated to the extent they could (I very much doubt they were allowed to feel the actual shape of the alligator/see many of Alameda’s internal documents) and (reasonably?) decided that SBF was not either an alligator or a log, but an alligator statue (that people keep complaining is an alligator), or a dead alligator (that people were right to complain about before, but it looks like things have changed), or a crippled alligator you aren’t worried about, or an otherwise-chill alligator protecting its babies which you don’t want to move. IDK. But then in 2022 we all learned that this was wrong too, and he was actually a frickin T-Rex pretending (excellently) to be an [alligator statue etc]. Because actually the fiasco that ended up happening was way out of scale with even what the complainants said, and the Time piece notes that in multiple places. Nobody expected a sneaky T-Rex!]
That’s fair, but next time I strongly recommend you include context and thoughts so lurkers don’t latch onto what you say as proof that leaders or anyone else did anything unreasonable. Lilly’s comment is:
“I am also eager to see what the investigation concludes, but I’m pretty convinced at this point that EA leaders made big mistakes....
...I cannot wrap my head around why—knowing what it appears they knew then—anyone thought it was a good idea to put this guy on a pedestal; to elevate him as a moral paragon and someone to emulate; to tie EA’s reputation so closely to his. It really feels like they should’ve (at least) known not to do that.”
This comment has a negative connotation, and is arguably the opposite connotation of the comment you wrote just now. So when you jump in to provide examples of that without context, it appears you support the negative conclusion.
Additionally, maybe this is a cultural difference, but “put someone on a pedestal” is only used negatively where I am from. It’s about putting someone in a position above that where they’d naturally belong. I argue that if you think his treatment was normal, you also don’t think he was “put on a pedestal” in a colloquial interpretation.
Here’s a slightly different point I couldn’t quite word well before: From my perspective, I am not even sure that anything EA did about him was anything they didn’t do about other prominent people to the extent it made sense. Like, I don’t pay that much attention and suck at names tbh, but I still know with high confidence that essentially everything people name as proof that SBF was unjustly elevated, anything an EA leader did toward/about SBF, they also did toward/about many others they have respected over the years. I just don’t see it as unusual or bad (yet) at all. If ppl think that things like those named above count as “putting someone on a pedestal” and “tying EA’s rep to theirs” (implied=bad), then there are dozens, maybe 100+ who could be counted as such[1]. I honestly wonder if it just looks like SBF was boosted a lot because he is a billionaire (who wanted to be boosted) and the general public thinks that is so cool. And EAs do too; like, I think maybe EAs forget about other names who got boosted over the years but just didn’t stick in our consciousness or the public consciousness because they got little media fanfare? If the public had wanted to hear about something other than billionaires, wouldn’t other people have been mentioned much more? If people wanted to hear about the founding of GiveWell, wouldn’t Karnofsky and Hassenfeld have been boosted more than SBF? But we live in a bugged world where people want to hear about billionaires instead (and the public, or at least journalistic, consciousness may have been tying him to EA no matter what), and I think that would have shaped almost any person’s way of talking, including other electable leaders—leaders we might want to tell ourselves wouldn’t have done as Will did, but I’ll probably never feel confident of that.
So, yeah, I suppose the crux is that people think “I or [other leaders we can elect now] wouldn’t have spoken well of Sam if [we] had known what was said in 2018 or 2019 about Sam.” But CEA did at least do an internal investigation into CEA/Alameda in 2019 and yet here we are. So I find it a little weird to say that.
Especially considering that Sam is quite manipulative and EA leaders are just a bunch of high-trust humans (who I wouldn’t change if I could press a button, btw, because whatever way they think about things is literally why the movement got started). To me, idk, saying EA leaders made mistakes feels a bit like “you guys shouldn’t be the way you are as people, you should expect some people to try to grift”, and I disagree with that.
Maybe we shouldn’t have such high-trust people in leadership? I’m not sold. Actually I’m not even sold we can find people who would have noticed and investigated SBF’s actual financial fraud who have anything close to a typical human psychology (Gladwell’s book “Talking to Strangers” has a great section about Bernie Madoff’s reception in this regard, but I digress). IDK. I feel so bad thinking that these people, who I know try so, so hard, are being told they made mistakes which basically amount to them being kind and excited about a person, one person among many others they have been kind and excited about. Maybe this time it wasn’t deserved, but, sigh, Will and the others are philosophers and altruists, not psychologists and business moguls who might be primed to spot a manipulative person or reputational risk. It feels to me, as of right now, that adding a new person(s) into the mix with more expertise in spotting manipulators and hazards makes more sense than removing “EA leaders”, including Nick or even Will, from their posts.
But I will wait to see what comes from the investigation! I worry that submitting lists of things without context to show that “SBF was put on a pedestal” (whether or not you view that term with a negative connotation) will lead to entrenched opinions, more upset, and possibly even more news pieces being written, which may not be right in the long run.

[1]
Like, there are 153 eps of the 80K podcast, the 80K website profiles have been rotated many times throughout the years, there have been so many people shouted out by Will at various times in his talks and interviews (especially EAG talks), there are for sure dozens of prominent EAs who people bring up repeatedly in conversation, there are lots of EAs who have been featured in news profiles, and lots of non-leader EAs and non-EAs who get invited to private discussions between leaders. I could go on.
When I see agreevotes [racking up quickly on contextless statements] like the above, I always feel weird. Do people agree that that counts as pedestalling by leaders? Or do they just agree that all that happened? IDK, but I feel weird because I actually think I disagree (with all those people, I guess) that these are fair examples for the point being made:
At EAG London 2022, they distributed hundreds of flyers and stickers depicting Sam on a bean bag with the text “what would SBF do?”.
I was at EAGL 22, but I never saw this, so it was not a universal handout. However, I remember ppl talking about it, and it was not by EA leaders or official conference ppl. It was (if I heard correctly) some kind of joke by some attendee(s). I believe this because EAG conference staff are pretty serious about making the event professional, so even if they thought it was secretly funny, I’d have a hard time believing they would do that. Worth noting, I think this was also a meta-injoke about claims of EA being a cult, obviously riffing on WWJD and giving “glorious leader” vibes. (Trying to avoid sounding harsh, but to whoever likes this type of thing, here’s a reason not to do rogue, injoke stuff like that at an official event: now we are dealing with it as submitted evidence of potential leadership corruption a year later, and unfortunately it did make EA look more like a cult, even though that was actually the punchline, not the truth. I imagine some attendees at the time felt icky about it too. Expect the injoke to make it to the outgroup, and anticipate how it will look. Save it for the unofficial afterparty, at most, please.)
On the 80k website, they had several articles mentioning SBF as someone highly praiseworthy and worth emulating.
Honestly that just seems normal to me, idk. I have met a couple other people who have profiles on that website. They weren’t being put on a pedestal, and 80K couldn’t fairly be seen as tying 80K’s (let alone EA’s) rep to theirs (because they aren’t billionaires who would assume that). But it did make sense to list Sam as a notable example of earning-to-give, which is a recommended career path. It isn’t like this was unwarranted use. Bad luck IMO. Seems like maybe what we are seeing in this instance is that if you want to use a billionaire as a normal example of something in a different talking point you are trying to make, society won’t “let” you in the long run, because the billionaire’s rep is actually bigger than yours and bigger than the rep of the point you are trying to make. So the billionaire’s rep will swallow yours up whether you meant it to or not. EAs would do well to keep this lesson in mind for future prominent people, even if those people are angels-on-earth. Still bad luck, I say; I mean, keep in mind SBF was likely trying to actively fool people (including himself, methinks) as to his competence. [Edit: And as Ubuntu adds below, they aren’t really EA leaders who had heard anything negative about SBF. Hence bad luck/big oof moment.]
Will vouched for SBF “very much” when talking to Elon Musk.
This was discussed elsewhere better than I can manage (I will hunt for the link and edit it in shortly [Edit: I didn’t want to dig for it so I just made this section longer]), but essentially anybody would do this for a friend or trusted acquaintance when they think something will be mutually beneficial. It was not done in a professional EA capacity, and it was in private text messages, not public; that they were shared publicly was, tbh, pretty uncool. It’s obvious MacAskill misjudged Sam’s character/potential, but he wasn’t trying to recommend Sam publicly there, and this sounds more like “person-to-person, I think it’s very much worth you guys having a conversation! You care about the same stuff, and I know that stuff is why you both are interested in Twitter! Give it a go!” than an “I, an EA leader, vouch for Sam, also an EA leader” type of exchange. Will got duped, but it’s pretty unclear to me that he was acting inappropriately here in his own personal life. When you have two fabulously wealthy acquaintances and one wants to do a business deal with the other, you introduce them, and yes, you vouch for the first if you know them reasonably well, just so they give each other the time of day. There’s no expectation that a deal will move forward without due diligence occurring, or that your vouch will become law. Musk knows that Will is not a finance professional, so he should know that Will can only vouch for character. No harm, no foul. But if there was harm to the EA brand here, it was from Musk making the private messages public, which I’d never have expected if I were Will.
It looks to me like Musk did so for a bit of clout, to show he is hard to dupe. [I was incorrectly pairing this with another Elon text, but the rest stands.] I don’t think he wanted to make a dig at MacAskill for vouching, tbh. Musk’s financial professional (Michael Grimes) vouched for SBF too, with about the same level of confidence. If I were Musk, I’d prefer that people I like and respect (which it sounds like Will is) keep trying to make connections for me when they believe the connections are of potentially high value, and I/Musk would still always do my own due diligence. So it feels kind of weird to me that EAs are against Will having tried to connection-make when maybe even Musk wouldn’t be so hard on him? [Edit: I now realize that maybe some people are upset that Will was cool with arranging for “possibly EA money” to be spent on Twitter. To that, it’s worth noting that EA doesn’t own its donors’ money, helping out your major donors and acquaintances in their goals is prosocial and normal, there are EAs who view social coordination and combating misinformation as important cause areas, and it would have been a business move, not a donation.]
Sam was invited to many discussions between EA leaders.
Sounds normal for a truly-massive donor who at least understood the movement or was thought to, and who people roughly enjoyed talking to as well. Again, keep in mind that SBF was likely actively trying to fool people here. I wonder if anyone reading this can say “No, having Sam involved in discussion would have just absolutely been a hard-no from me. No way he could convince me that he, a huge donor, deserves a seat at the table. And if anybody felt he gave them good input in the past… their opinions wouldn’t matter, I’d have said ‘No, don’t invite him.’ In fact even if he gave me good input over the years in my leadership, controlled half my funding, and he was possibly expecting an invite, I would definitely still never have invited him.” [This sounds so strong that it almost looks like I’m strawmanning the other side, I’m giving so little wiggle-room. But I do genuinely think to be able to judge leaders harshly here, you have to believe a claim as watertight as that quote would have been said by other experienced leaders, because I’m only confident a claim as watertight as that keeps competent actors and the rest from inviting Sam.] IDK I just find it quite surprising that readers are acting like disengaging with Sam, a major donor, was so black and white and easy. The majority of people got sucked in… it’s easy to say what should have happened now.
Generally, almost everyone was talking about how great Sam is and how much good he has achieved and how, as a good EA, one should try to be more like him.
This, perhaps, yes. There were a lot of injokes and stuff in EA culture about SBF generally, too. But how much of this was grassroots in origin, because having a heckin billionaire in your extended crew is cool and interesting, vs. how much was done by leaders publicly? Should people really not have felt excited and expressed it within their ingroup, based on what they thought at the time?
There are probably more examples.
I’d be interested to see them. In general I feel like what I’m seeing listed here is pretty human and normal and I wouldn’t call it putting him on a pedestal, or tying EA’s rep to his (intentionally or even in ways ppl necessarily could have anticipated would end up very entangled), or elevating him as moral paragon publicly.
Hm you say “EA didn’t listen to anyone who warned them about obvious scams”, but the article says:
None of the early Alameda employees who witnessed Bankman-Fried’s behavior years earlier say they anticipated this level of alleged criminal fraud. There was no “smoking gun,” as one put it, that revealed specific examples of lawbreaking. Even if they knew Bankman-Fried was dishonest and unethical, they say, none of them could have foreseen a fraud of this scope.
And
No one has alleged criminal behavior on the part of top EA figures. None of the people who raised concerns about Bankman-Fried to EA leaders in 2018 and 2019 say they warned about specific criminal activity, nor did they foresee the size and scope of the alleged fraud at the heart of the FTX collapse. In charging documents, federal prosecutors identify the start of Bankman-Fried’s alleged fraud as 2019.
So I’m not sure you can say there were warnings of “obvious scams”.
Also:
Sometime [in 2019], the Centre for Effective Altruism did an internal investigation relating to CEA and Alameda, according to one person who was contacted during the investigation, and who said it was conducted in part by MacAskill. Bankman-Fried left the board of the organization in 2019. The Centre for Effective Altruism did not respond to repeated requests from TIME to discuss the circumstances leading to his departure; MacAskill and others declined multiple opportunities to answer questions about those events.
So I’m not sure it is accurate to say that “EA didn’t listen to” the warnings which were given. I’m certainly curious about the quality of the internal investigation by CEA. I wouldn’t be surprised if there were gaps/it was of low quality. But I also wouldn’t be surprised if it was of expected/good quality given the nature of the complaints made. And I wouldn’t be surprised to find that Sam would have fooled a non-EA, commissioned investigation too, enough that non-EA nonprofits would have felt comfortable taking his money. I mean, I assume Sam would have refused to give internal financial documents to independent investigators, and such a refusal to engage thoroughly from Alameda (“Um, no you can’t see our internal documents? Who do you think you are..?”) would be so normal for an investment firm that it can’t even be seen as a red flag.
I’m not surprised that CEA is refusing to comment til after the commissioned independent investigation is complete, whether or not their 2019 internal investigation was of high, decent, or low quality. I’m not sure which it was yet. I guess I’ll wait to see.
[Edit: In general I’m against pushing to make people responsible for the sins of others without a lot of proof. Especially when the “sinner” had dark-triad traits and could have been trying to manipulate the others. I know the general population and journalists don’t think that way or have as much patience in that regard, but I’d like it if EAs did. Judge leadership for competence, and replace them if needed, sure, but comeuppance here is still likely to be punishment for trying and failing. And I think punishment should be reserved for the actual sinners themselves. I’m not at all sure anyone who didn’t work directly with SBF at Alameda “sinned” here. And if they didn’t, EA itself and EA leaders don’t “deserve” comeuppance, IMO.
I find comeuppance as journalistic motivation plausible, but I also admit that comeuppance might not be the journalist’s intention with this article, even subconsciously. But it sounds like you are also arguing that comeuppance would be warranted for other reasons here, and I just don’t think so. Comeuppance is moral punishment. I’ll reiterate that it would be fine to push that leadership should be changed (after the investigation). But let the actual sinners, and the sinners alone, be punished for their sins. [[I don’t want to suppress discussion, so sure, place your bets, but please don’t assume moral fault yet.]]
Finally, I agree with you that many journalists are activists themselves. But I’ll also note that when journalists and others say that “EA isn’t doing enough”, they are still potentially using another way to shame moral actors who otherwise appear to be doing more than them. It is a frame that EA has more agency and privilege than them (perhaps unjustly given), but still has less actual goodness and merit than them. So I still find it very plausible that the recent journalists are (consciously or unconsciously) doling out extra blame and shame to put aspiring altruists in their place. And if it is not the journalists themselves doing this, perhaps, as a business, they are catering to the many, many readers who click for and revel in “comeuppance”.]
I realized I missed the bit where you talk about how we might not need such intense data to respond now. Yes, I agree with that. I personally expect that most community builders/leaders are already brainstorming ideas, and even implementing them, to make their spaces better for women. I also expect that most EA men will be much more careful moving forward to avoid saying or doing things which can cause discomfort for women. We will see what comes of it. Actually I’m working on a piece about actions individuals can take now… maybe I will DM it to ya with no pressure at all o.o
I agree with your first half. I wonder if a startup or even a non-EA nonprofit would be so self-flagellating for taking his money, even given that they had heard some troubling reports. If not, I think EAs should chill out on thinking we could have been expected to do a deep investigation [Edit: apparently CEA did one on CEA/Alameda relations in 2019, but no comment yet on how it went] or to hold off on taking the money. I mean, everyone else had his expected net worth wrong too (billionaire lists, for example, and FTX’s own investors, who should have been much more interested in internals than donation recipients were). [Some data/insider info on how much big charities like WWF or Doctors Without Borders investigate donors would be great here, but without seeing that I’m assuming it was normal to accept money from SBF.]
But as for the idea that he was framed as a moral paragon by leaders, idk. I never got that vibe. Was I missing it? It seems more like he was framed, by news outlets and everyman EAs, and maaaybe some EA leaders on occasion (though I don’t actually remember this, except from Sam’s own promos), as a cool-but-humble industrious guy rather than a moral leader or representative of the movement itself. I mean, did he ever speak at an EAG or something? What would it even look like if EA leaders were tying EA’s rep to his? Maybe he was mentioned in some (usually non-EA) places as a notable personality in the movement? But that doesn’t to me say “this guy’s put on a pedestal by leaders” and “emulate this guy” (in fact EAs wanted people to not emulate him and to try direct work first). I honestly wonder if I missed something.
My weakly-held take is: you can’t help it if cool-seeming billionaires get fans, and it is hard to help it if those billionaires get associated with the things they fund and themselves talk about. Ex: Elon, who people associate with AI safety even though he has never worked on it himself. Associations in the eyes of the public, and news outlets/bloggers/Twitter bumping those associations, are gonna happen. I’m not sure EA leaders did anything to boost (or stem!) this effect (fine with being proven wrong though). I do hope/demand that next time, EA leaders are more protective of the EA brand and do try to stem this potential association effect. Like: “Hey buddy, we already weren’t gonna put you on a pedestal for being associated with our brand, but actually you don’t get to do that either. Please keep involvement with us out of your public talks and self-promotion. We want to be known for the work we do, not who funds us.” Dustin does this (keeps EA out of his professional brand). And most wealthy people prefer to donate discreetly, so it was maybe a red flag that SBF leaned right into being associated with EA, and would be a red flag in future too. Idk.
[Edit: You might consider the last part negligence, that EA leaders didn’t give SBF a slap on the wrist for EA-associating. If so maybe you still aren’t happy with leadership. But I just want to flag that if that is what happened that is still much better than leaders actively boosting him (could be wrong that the latter happened though) and would likely warrant different response. I guess I view the former as “mistakes of medium-size (but small for most leaders due to diffuse responsibility if there was no one whose job it clearly was to talk to Sam about this), passively made, not-unusual-behavior” whereas I’d view active pedestalling or active tying-EA-rep-with-SBF to be “big mistakes, actively made, unusual behavior”]
Hmmm this is why I originally had a long paragraph about where I think the misunderstanding occurred. I think that someone serving as regranter is not the same thing as being given 50K.
Thanks for coming back. Hm in my mind, yes if all you are doing is handling immediate reports due to an acute issue (like the acute issue at your church), then yes a non-EA contractor makes sense. However if you want things like ongoing data collection and incident collection for ~the rest of time, it does have to be actually collected within or near to the company/CEA, enough that they and the surveyor can work together.
It seems bad [and risky] to keep the other company on payroll forever and never actually be the owner of the data about your own community.
Additionally, I don’t trust non-EAs to build a survey that really gives respondents the proper choices to select. Data-collection infrastructure such as a survey should be designed by someone who understands the “shape” and “many facets” of the EA community very well, so as to not miss things, because it is quite a varied community. In my mind, you need optional questions about work settings, social settings, conference settings, courses, workshops, and more. Each of these requires an understanding of what can go wrong in that particular setting, and you’ll also want to include the correlations you are looking for throughout, which people can select. So I actually think, ironically, that data-collection infrastructure and analysis by non-EAs will have more gaps, and therefore more bias (unintended or intended), than when designed by EA data analysts and survey experts.
That brings me to the middle option (somewhere between CEA and a non-EA contractor), which is what I understand CEA’s CH Team to be doing based on talks/emails with Catherine: commissioning independent data collection and analysis from Rethink Priorities. RP has a skilled/educated professional survey arm. It is not part of Effective Ventures (CEA’s parent organisation), so it is still an external investigation and bias should be minimized. If I understand correctly, CEA/CH Team will give over their current data to RP [whoops, nvm, see Catherine’s comment below], and RP will build and promote a survey (and possibly other infrastructure for data collection), and finally do their own all-encompassing data analysis without the CH Team involved [possibly, but not decided yet]. That’s my rough understanding as of a conversation last month, anyway.
I do find the question of how data will be handled to be a bit tangential to this post, and I encourage people to comment there if concerned. Though I’d actually just caution patience instead. This is a very important problem to the Community Health Team, and I hope this separation (CHT/RP) is enough for people. Personally, the only bias I’d expect Rethink Priorities [and the CH Team] to have would be to work extra hard because they’d care a lot about solving the problem as best they can. EAs know that as best you can requires naked, unwarped truth, as close as you can get, so I don’t expect RP to be biased against finding truth at all.
Now I find myself considering, “Well, what if RP isn’t separate enough for people, and they want a non-EA investigator, despite risk that non-EAs won’t understand the culture well enough for investigating cultural problems?”.… And idk, maybe people will feel that way. But then I feel incredible concern and confusion: I would honestly wonder if there is any hope of building resilient trust between EAs and EA institutions at all. If some EA readers don’t trust other skilled EAs to try really hard (and competently) to find the truth and good solutions in our own community, idk what to say. It’s hard to imagine myself staying in EA if I thought that way. Hopefully no readers here do think that, hopefully readers think RP separation is enough, as I do, but idk, just making my thoughts known.
Would love a book like this to exist, and you’d be a great author of it (and Julia too!) :)
Nice! I did have to bounce stuff around with it and ask it to add concepts, but doing that helped me get to the points I actually wanted to make that weren’t yet conveyed. If your writing is better to start, you might get it in one shot. Better workflow notes and even session screenshots are within the doc.
Thanks. I definitely still think someone had to say something—likely a detailed rebuttal at that point. But there were better ways to say some parts.
And there is a pernicious thing with aggressive language that I realized while rewording: Like, it can feel like you are getting everything out, but actually now that I am rewording it I see that I didn’t make my actual point in the second half very well. Later I ended up commenting a second time to try to clarify my POV and truly help the person “get it”, which was a timesink, more clutter for that thread, and even harder to word without coming off aggressive since I was following up on my prior aggressive statements. Maybe if I had worded my first comment properly from the start, I would not have even needed(?) to write a second comment.
Non-emotional language seems way better for making a nuanced point, or any point where it’s imperative that it not get warped. Example: I think using frustrated language in that thread increased the risk that readers thought I was anti-transparency, which I’m not. Anyway, ChatGPT is cool for this so far :)
Recently I was given a warning[1] by mods for a comment I wrote while frustrated. I apologize to those in that thread for possibly hurting feelings, creating extra stress, and adding labor, and I apologize to all for breaking forum norms. I especially regret if I made the forum seem like an aggressive environment.
I am taking action to not cause a problem again. To that end, (1) I made and started using some Anki flashcards to better instill some frustration-management habits (mindfulness and CBT stuff). That said, I often use emotion to inspire myself to take action (especially when time-limited), so I might still write stuff while emotional. (2) Therefore I am also adding GPT-4 to my workflow (also instilled via an Anki TAP-deck, I hope): when I write something while upset, I will copy-paste the resulting draft into GPT-4 and ask it to reword the upset-sounding parts.
With this in mind, I’ve “redone” the comment here in google doc[2] with edits that ChatGPT helped me write more quickly than I could have done myself. [I also next-day tried out GPT-4 to be sure I had the best tool for future use, and in future I will use that.] My doing this might seem dramatic, but it is important for habit formation to go back and fix mistakes/redo an action properly, even when the problem is in the past. And I do so publicly in case others might find use in the ideas to (1) use spaced-repetition software like Anki to instill mindfulness TAPs and (2) use GPT to rephrase text you may write while emotional. You can request access to the doc if you want to see how [ChatGPT and GPT-4] perform at this task.
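For anyone who wants to script step (2) rather than paste into the chat UI, here’s a tiny illustrative sketch of the rewording prompt I use. The function name and exact prompt wording are just my own placeholders, not a tool I actually run:

```python
# Illustrative only: wrap a frustrated draft in a prompt asking a model
# to soften the tone while keeping the substance. In practice I just use
# the chat interface; this shows roughly what I paste in.

def build_reword_prompt(draft: str) -> str:
    """Build a prompt asking an LLM to reword upset-sounding parts."""
    return (
        "Please reword the following forum comment so it makes the same "
        "points without sounding frustrated or aggressive. Keep the "
        "meaning and all factual claims intact.\n\n---\n"
        + draft.strip()
    )

if __name__ == "__main__":
    draft = "This is obviously wrong, and honestly I'm tired of repeating myself."
    print(build_reword_prompt(draft))
```

You’d then send the returned string to whatever model you prefer and paste the softened result back into your comment.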
[In closing, I again apologize. I think with these TAPs I am very unlikely to break forum norms again, and appreciate the reminders and chance to do better.]
Big improvement on the left sidebar. Good job all 👍