I’m experimenting with “norms-pledges” to help reduce forum anxiety. Maybe it could be a good social technology IDK. Click [Show More] to read them all:
🕊 Fresh Slate After Disagreement Pledge: I hereby pledge that if we disagree on the forum, I will not hold it against you. (1) I will try not to allow a disagreement to meaningfully impact how I treat you in further discourse, should we meet in another EA Forum thread, on another website or virtual space, or IRL. I know that if we disagree, it doesn’t necessarily mean we will disagree on other topics, nor does it necessarily imply we are on opposing teams. We are most likely on the same team in that we both wish to have the most good done possible and are working in service of finding out what that means. (2) Relatedly, I pledge not to claim to know what you believe in future; I can only confidently claim to know what you wrote or believed at a given time, and I can say what I think you believe given that. I know that people change their minds, and it may be you or me who does so, so I understand that the disagreement may not even still stand and is not necessarily set in stone.
👨👩👧👦 No Gatekeeping Pledge: I hereby pledge that if I am seeking a collaborator, providing an opportunity, or doing hiring or anything akin to hiring, and you would otherwise be a top candidate if not for the following, I will try not to gatekeep: (1) If an opinion you’ve shared or a norm you’ve broken (on the EA forum or elsewhere) is relevant in a potentially negative way to our collaboration, I will ask you about it to gain clarity. I will not assume that such an incident means you will not be suitable for a role. I will especially try hard not to make assumptions about your suitability based on old or isolated incidents, or if your digital footprint is too small to get a good picture of who you are and how you think about things. (2) I will not penalize based on someone being a social or professional newcomer or being otherwise unknown to me or my colleagues. If the person is a top candidate otherwise, I will do my due diligence to determine cultural fit separate from that.
🤔 Rationalist Discourse Pledge: (1) I hereby pledge to try to uphold rationalist discourse norms as presented here and here, and comedically summed up here.
🦸♀️Preferring My Primary Account Pledge: (1) I hereby pledge that this is my main EA Forum account. I will never use multiple accounts to manipulate the system, as by casting multiple votes or stating similar opinions with different accounts. (2) I also pledge that, although I can’t be sure what’s to come, I strongly intend not to use an anonymous or different account (alt or sockpuppet), or any account other than this, my primary account. I pledge that I am willing to take on some reputational risks on this, my primary account, in service of putting truth, transparency, integrity, and a complete narrative over my own anxiety, and to give ideas I think are worth advocating for the best chance at adoption. Therefore I pledge that I will not use an alternate account out of general anxiety around personal or professional retribution or losing clout. CAVEAT 1: I reserve the right to use an alt account in cases where *specific* retribution or other danger can be expected in my particular instance. For example: I reserve the right to use an alt account out of concern about riling up a suspected stalker, specific known bad-faith actor, specific known slanderer, etc. CAVEAT 2: I also reserve the right to use an alt account for the benefit of others. Example: in cases where revealing my own identity would reveal the identity or betray the privacy of some other party I am discussing.
🙇♂️Humility in Pledging Pledge: I hereby pledge that I take these pledges for my own self-improvement and for altruistic reasons. It’s okay to disagree that pledges are useful and important for you. (1) I don’t expect others to necessarily take a norms pledge. I believe the pledges only work if people take them after deep consideration, and I don’t expect I can know all the considerations for others’ situations. Therefore I understand there may be situations where it is actually right that a user avoid taking a pledge. Therefore I will not judge others for not having taken a pledge, including that I will not dismiss others’ character if I see other accounts without a pledge. (2) Additionally, I don’t presume that others not taking a pledge means they would even necessarily act differently than that pledge would imply. I don’t assume their intentions are even different from mine. Perhaps a person is new to the idea or just trying to protect their energy by not opening themselves to criticism. (3) I won’t automatically dismiss a user’s reasoning if I see the user violating norms pledges I’ve made. I still will give their claims a chance to stand on their own merits. (4) If you see me violating a pledge I’ve taken, I will always appreciate it if you bring it up to me.
Ivy Mazzola
This was written after reading Chloe’s update
Note: I’m trying to focus on “What are good practices for EAs who want to try weird things?” rather than “Should NL/Kat/Emerson be disbanded/reprimanded” until NL posts their rebuttal.
I’m feeling concerned about some specific stuff I’d put in the “working in unusual vistas” bucket. I feel weird because Nonlinear has listed “travel” as a perk on job listings, when it can easily be more of a burden, and looks like it is for certain members of their team (while others have more of the benefits and less of the costs). As someone who lived in East Asia for 7 months across as many different islands/regions, it was really hard for me to be productive and happy in that type of nomadic life. I think it’s hard for most people. But over the last ~year, seeing NL’s (and Kat’s[1]) positive posts, I’d been rethinking that. Now, what I’m reading from Chloe is more in line with my memories of the hassle and why I recommend against it. So I think re: job listings: Necessary travel should not be listed as a job perk, but as a key job trait with both pros and cons and emphasis on suitability for certain less-than-common personalities. And hirers for travelling positions should try really hard to sort out the people who won’t like it.
It should be made clear how much of an assistant’s job description is literally “dealing with the hassles of travel and working abroad for the team”. To their credit, the Nonlinear job listings do address this, but it is at the bottom and general. I mean it should be semi-granular on the listing, like “~10 hours/week”, because I think the amount it was for Chloe (sounds substantial) might have been a surprise and added to reasonable resentment and feelings of being trapped and useless.
I also think “helping your boss with their personal tasks” should be discussed more granularly (maybe it was in interviews IDK, but seems good to get specific early and in writing)
I additionally suspect (with low confidence because I don’t know what’s normal in business/nonprofits) that for personal tasks, accounting and pay source should be different for those hours, maybe.
[1] Less relevantly, I have seen Kat promote working abroad and nomadic living as seemingly a really good solution to EAs who follow her on Facebook or her blog. I feel weird that I have not seen Kat post a cons list for travel, but this is a footnote because I also think it isn’t generally her responsibility in her personal social media to talk about more than what she is excited about. It is everyone’s responsibility to do their own research/not be easily influenced by what they see on social media. But personally, I think in future if I see someone recommend EAs try nomadic living/work in remote vistas, I will ask revealing questions (like “Do you have an assistant whose job it is to make this easier for you?”) even though it might feel rude.
(Take my strong upvote, I think people downvoting you don’t realize you are the author of the post haha)
[Edit: I now realize that this is what Spencer discussed below and other people have been discussing too. But maybe the community norms roadmap makes it seem less pie-in-the-sky]
I first had this idea about that Toby Ord post a few months back, and regret not writing it up then.
Idea: I think people who write something that could be considered a “criticism” (or worse, a “hitpiece”) should send a heads-up message to the person in question, with their finished post attached.
Example: “Hey, I have written [this post] about things I am concerned about regarding you/your project. I plan to post it on the EA Forum on [date]. I understand this is unfortunate and troubling from your perspective and you will probably want to respond. That’s why I’m letting you know my publishing date. You have until then to write a response, which I expect you will want to post shortly after I make this one go live. (Optional: I am available to read your response post before I publish my piece if you wish, but I would need your retort by [earlier date] to take your response into account.)”
How it might become a norm:
- Forum mods and power users could comment on critical posts that giving a heads-up with the final draft attached is the advised way for future criticism.
- After seeing this advised for a month or so, users would start actually doing it. And they would probably add transparency blurbs about it at the tops of their criticisms and responses, further educating readers on criticism norms.
- After seeing this recommended and practiced for a couple more months, the cultural norm would be established.
- Once the norm is established, people who don’t give warning of their criticism and time for response should be frowned upon/possibly even given a moderation warning.
Being blindsided by people posting bad stuff about you on a forum really sucks, and the ongoing threat of such a surprise is bad for the community. I think a norm like this could do a lot of good and be low effort for critics.
I think it’s probable that Ben tried to do something fair like this here by talking to Kat/Emerson, but I think that doesn’t do the full thing. For example, Kat may have felt that she had responded adequately to some of the concerns over chat, enough that they would be dropped from any final piece, and be surprised and blindsided to see some parts dug up here. [Edit: She may also be surprised by the very confidently-worded conclusion Ben made against Nonlinear.]
That’s why I think sending the actual final draft of the post with some buffer time to compose a public-facing response is much fairer. I admit that refuting things in writing can be very time consuming, so it’s still helpful and good for the critic to offer a conversation. But if a conversation occurs (as it did here), I think a final draft and chance to write a response for the forum should still be offered, in addition. There’s no replacing seeing the precise claims as-written before the rest of your community does.
Very excited about this! FYI, this also makes EA promo a lot simpler for the local groups!
At risk of saying a lot… Especially in non-hubs, I sort of think the default funnel should be to help local EAs engage in online EA spaces more and more. It’s a lot easier to find your niche and collaborators once you engage with way more people in the international community, and the changed-up flow of ideas you see from being involved in more than your local group is good too. But it’s been hard as an organizer to know which virtual communities to promote (EA newsletter? Check. 80,000 Hours? Check. Virtual communities? Where to start...).
I’ve def suggested various Slacks to local EAs, but with mixed results. Basically it has felt like no online community was quite big enough to reliably pull the attention of people who start in a local community and already have their local infrastructure. I get the affection for your local group, but in the long run, I think some EAs’ preference for EA local groups and the people they know is limiting them (eg, your local community only has so many entrepreneurs compared to the EA Entrepreneurs Slack).
Anyway, ~one virtual Slack community is far less confusing and more enticing :) It means the average local EA can basically track 2 Slacks, their in-person group’s and EA Anywhere’s, and have everything they could want at their fingertips. That’s just a huge difference! Def already promoting it! :)
I like both these suggestions a lot. I just wonder if anyone can chime in on whether the CH Team might struggle to get NGO status without being part of CEA. I wonder what their altruistic mission statement would be summarized as?
Perhaps this is a silly question and NGO status is easier to get than I think.
Same, I have never heard any of these. Perhaps some people are saying these things, but I’d be very surprised to, say, hear anything like this being shared in the EA leaders Slack (not that I’m in it but as someone who has spoken to many EA leaders, they are all chill)
EAs tend to speak in really nuanced ways, so the furthest I’ve heard someone go is saying things like “I’ve found Bayesian reasoning to be an irreplaceable tool and want us to help new EAs learn it and be aware of the value themselves” or “Eating vegan has been shown to increase compassion for animals, and I also think it is important to behave in compassionate ways regardless of impact calculations”. The frugal thing… I can’t even reword that because I haven’t heard EAs speak that way ever, except from one student organizer who got the idea, I think, from his own head, connecting the dots in the way that made sense to him (he was kind of on his own, not in a hub, and he was organizing before CEA’s UGAP program provided better mentorship to student organizers. I think now that he has graduated and gotten a real job he has gotten over this, as he sees how complicated it is to navigate life when being abnormally frugal).
I have this inkling that maybe if people are saying such “rules”, it might be more of a self-soothing technique for some EAs that have struggled to get recognition and the jobs they want in EA? Gatekeeping/no-true-Scotsman type stuff? IDK, just a vibe, trying to get status where they can maybe. Or maybe they are just dealing with a lot of unexamined distrust toward people different from themselves, as neurodivergents, nerds, and activists tend to do IME?
Sorry it took me so long to come back.
Okay, thanks for clarifying that you only mean that some private reception by CEA was unkind. Some others have attempted to paint the EA Forum response with a negative brush when it isn’t true. People here might not be the most warm, but they try pretty hard to avoid unkindness.
Tbh I think I understand your frustration a bit from experiences in my own life, like feeling like I’ve done someone’s dirty work or grunt work and then been pushed aside. That said, feelings being what they are (unreliable, even if they are sometimes right), I still wonder if a misunderstanding happened about who you spoke to, or if there was forgetting rather than willful dishonesty on anyone’s part, but I guess that is between you, them, and decision-makers. Although I’m more interested in the prevalence of sexual assault in EA and how that is handled, I wish you luck in your professional goals.
I agree with you about liability and have been finding it odd that it hasn’t been mentioned too. That said, I think that shouldn’t be the main concern tbh, or even a primary one (there is so much else on the table to focus on). Also, publicizing things out of concern for CEA getting pinged for liability seems to me to be putting the cart before the horse. So personally, I’d just put that concern to the side and try to do the right thing, and take a lot of notes, and hope that trying to do the right thing washes out in court, god forbid anyone try to move forward with that. My understanding is that CEA is trying to do the right thing and has been for some time. It’s good to encourage people to try to do better… I guess I just would want to see that as actual encouragement, not like… worded in a way that easily strikes fear into the hearts of EAs that we are about to deal with another big public fiasco without having had adequate time to handle and respond to the last one.
I agree my response was defensive (and reactive, though I tried to separate that bit out into a different comment). One of my intentions is definitely to defend EA (always with finding and protecting the truth as the primary, even more important goal though). Even if I go too far some of the time, at least I hopefully move the Overton window toward defending EA as being acceptable and normal. I actually think people (not just EAs, but everybody) are morally obligated to defend themselves, and often others. Right now, the world spins based on claims and responses… accusations and rebuttals. As long as that’s the case, I think it’s almost always ethically warranted, even necessary, that someone come in and speak an honest, good-faith, altruistically-intentioned response that points out potential pitfalls, alternative considerations, and such. Frankly, most people who stumble across this piece won’t be informed enough to do it themselves.
Anyway, best of luck.
I insinuated that you were trying to do something outside of the law? Where? I honestly didn’t mean to. [Edit: Oh, I think I know what line you noticed… no, I wasn’t meaning to insinuate that in a negative way. If I insinuated that as going against the law, I’d be insinuating negatively toward all people who do external justice processes. I mean more “that’s a difference of opinion that boggles my mind”. I know why people avoid it; as you say, victims find it retraumatizing to go through the judicial process. But it still boggles my mind that you and others are coming up with a different value calculation here than me.
Also I implied that CH has a process very far from a judicial process. It sounded like, in your article, you don’t want a judicial process, and I’m confused what you’d want that isn’t essentially judicial yet is also quite different from what CEA is doing. Whereas I would want one much closer to typical judicial process than what CEA has.]
But you expected kindness back when you posted that old aggressive post you’ve now deleted/rewritten, though?
Also, everyone had been quite kind to you in forum discussion before that point [I remember from other discussion actually liking what you wrote, and you got a lot of upvotes]. So it’s a little odd to say that you helped “until [EA] was unkind to you”. Maybe it’s more accurate to say “helped until people pushed back against your unkindness, [which you did start with that highly unusual post]”.
That said, I don’t think my response here was even unkind at all. I am going to say what I think, but I recommend you not take what I think personally. I made it overt that I was sorry for possibly embarrassing you, so I’m not sure what else I can do other than be silent, which I think is a bad habit of EAs that we should break. Like, fwiw, I didn’t even downvote this post.
I’m not sure where the relevance to EA is though when it comes to the Asian rape. That’s what I’m confused about. I’m honestly not aware of such cases in EA and you’ve not given enough info that anyone can pin it down and understand what you are talking about. I definitely do “see that as a problem” but I have no idea what it has to do with EA.
[Relatedly I focus on the problems with the post and the claims it makes that I see as misleading and likely to lead to problems/misunderstandings in future. As a critic yourself I expect you to understand this.]
I’m not saying other communities aren’t taking action when it is brought to their doorstep (EA is too), I’m saying I suspect it is mostly only being brought to EA’s doorstep, or is being brought way out of proportion to EA involvement. And this rings my unfairness alarms.
Feel free to DM me with the high-powered person’s name. I’d greatly appreciate it.
A couple more personal thoughts:
(these ended up long, I hope the algorithm collapses them. Interested parties can choose to read and the rest ignore)
1. I’m having trouble trusting that the terminology being used is accurate because of the author’s liberal use of terminology in the past that did not make sense to me then. Eg, would the victim describe their experiences as they are described here? And again, terminology around calling people EAs? It’s possible the author is trying to do better this time, but I’m very unsure of that, as they might just be sticking to their guns as well. And anyway, even solid attempts to do better might still end up wrong for an EA outsider/newbie (eg, is the “high-powered” person Michael Vassar perhaps?). I wish I could tell, but without instances and more specificity I just find it very hard to trust.
2. Emotionally, I feel frustrated. Truth is that for the last 3 weeks I’ve been avoiding the Forum because of stuff like this, and the day I resolve to come back, this was just posted. I feel cursed or something (tongue in cheek, but also..). I’d like the Forum to not feel like a cursed place. I’d like EA to not feel like a cursed movement doomed to never become respected again because once the waters calm someone has something else to say (valid or not it may be, [most of us can’t keep dealing with this]). I’d like to feel like my mental health is not at risk by visiting the Forum or participating in EA generally. I feel such a drive to defend EA from what I view as epistemically-lacking piling on, and I have doubts that anyone else will stick out their neck to do so. FWIW there doesn’t appear to have been much of this “piling on” while I was gone though! But yeah I’m frustrated. [I came back all bright-eyed and bushy-tailed and now I feel deflated again]
3. Idea: I wonder if all Forum community posts could go through a brief review period, eg “We review all community posts within 24 hours of submitting. If this post doesn’t meet our epistemic standards we will give you some feedback and you are free to integrate some or all of it and resubmit. We also reserve the right to approve this post but add some caveats at the top”. I believe that my complaints I listed above would also have been noted by the Forum team and I do believe that helping people improve their posts is worthy.
4. My unfairness alarm is ringing, and it’s saying something like “I hope that this ‘pertinent’ info is being shared in major tech forums, Burning Man camps, etc. as well. Those groups are likely to be more implicated.” If not, this seems like a vendetta against EA. I mean, the article does talk more about EA than the other groups, and I’m wondering if it is because EAs are the ones the author tried to interface with about all this (correctly or incorrectly), and when the author didn’t get the response they wanted, EA started to get classed in their mind as the “bad guys”. Maybe EA is getting punished because it is the only group of those listed which does actually have a [dedicated] team which is actually trying and having conversations about this. The visible are easier to name than the invisible. It definitely feels to me that we are getting most or all the responsibility for the SF SAs that the author has heard of, even if the victims or accusers may have been 2 or 3 steps removed from EA? Perhaps I am wrong and I’d be glad to see that I am. I’d like to trust that the world is not just piling on EA but that this is an overall motion throughout the SF Bay Area. The way that I can be proved wrong here is if someone shows me evidence that this is being brought to other groups too with equal fervor? Of course it is hard to do with this article since it basically lets everyone else off scot-free except EA. I’d appreciate anyone sharing links to such discussion in other communities if you have it, and in future I’ll calm down about my suspicions that EA is getting scapegoated. The communities named in the piece are: subgroups of tech, EA (Effective Altruists), rationalists, cybersecurity/hackers, crypto/blockchain, Burning Man camps, secret parties, and coliving houses.
5. On a complimentary note, I agree with the author’s ideas that a more formal and professional process is needed in reporting and determining community response. I wish I could stop at the compliment, but that said I feel confused that the author appears to believe that a closer-to-judicial process is out of bounds. In my mind, if you don’t want a judicial-like process for claims of rape and sexual assault (which boggles my mind but moving on), the CH team’s process is quite good. At least as a starting point, I’d think it’s like 80% of the way there and leagues better than what most non-corporations/social communities have? So I wonder what else they might mean.
So, to clarify, I want a more formal process too, but personally I think that if anything the CH team’s reporting process right now isn’t judicial enough, notably in that the accused is not necessarily privy to the details of the claims made against them. I think as long as things are not fully transparent to the accused, we should expect problems to go under the radar and accused parties to return to private parties and things, because frankly people will assume it was less bad without knowing details. If victims want justice and proper response and are willing to report at all, that’s great and I really want to support them! I also think it’s the nature of the world that you can’t expect a proper response without something more trial-like occurring. I think many different cultures evolved toward the Schelling point of trials for a reason.
We have a duty to other members of the community and future potential victims as much as to the victims, and trial-like-things do the best job at ensuring future safety and ensuring that concrete policies (which the author seems to like) are implemented, I think.
And we (or, the people handling reports anyway) also have an ethical duty to hear both sides as accurately as possible. I have learned from personal experience that it is often unworkably labor-intensive to do due diligence on complaints without doing the discussion in real-time with multiple actors/jury present. I’m not saying that the victim and accused need to be on the same call or anything, but that processes which are deliberately not in-person or deliberately giving the victim more services than the accused to avoid the judicial feel, and avoid putting the victim on the spot, either lead to a lot more work in writing (an unworkable amount. Like think 2 weeks or more of work-hours of writing back and forth for each serious detailed case), or missed opportunities to address things and ask clarification as concerns arise, or misunderstandings that will come up later. BTW I say all this as someone who has reported men in my personal life and EA. Communities should support victims in coming forward.. and with serious claims that will sometimes mean all the way forward to ensure proper response.
I expect this comment to get disagrees from most because there are multiple different unpopular or rarely voiced things, so it has something for everyone to dislike lol. I’ve noticed forum users tend to disagree-vote any comment that has even one false/disagreeable statement even if the rest is okay, in true logician fashion. I’ll just remind anyone who got this far that it’s just my thoughts, which I didn’t even want as a top-level comment, but maybe someone will find value here and there.
I apologize for possibly embarrassing you, but I think I see a couple problems with this forum post (I’m intentionally saying little about the original piece in this list, but I do say a couple thoughts at the end):
1. It’s extremely important to note that the author says: “The broad community I speak of here are insular, interconnected subgroups of tech, EA (Effective Altruists), rationalists, cybersecurity/hackers, crypto/blockchain, Burning Man camps, secret parties, and coliving houses.”
This is stated early in the original piece, and I believe this should be quoted at the start of this forum post.
2. So I get that the idea is to share the bits pertinent to EA here on the forum. But I’m not sure what implies that the first paragraph of this forum post is “pertinent” to EA?
I realize that the author later writes that these groups belong to the “same community”. Well, that is what they think anyway. But I find that disingenuous as, if it were so, you are unlikely to need a grab bag of names to describe that “same community”. This is the author that wrote this post, so they have definitely been told that EA is not necessarily connected to these other communities at all.
3. Within that paragraph, I will also note that I don’t understand this sentence: “Four, five, and eight people have accused specific persons of completed rape.” What does this mean? Unless you understand something I don’t, I would discourage sharing confusing language like this, or add a caveat in brackets. If I’m being dumb, I’ll retract this item.
4. In the last paragraph, it would have been better to link the CH Team’s actual response (I assume the lie is meant to be in there? That they had never had contact before?) by/in that hyperlink.
Finally, I encourage everyone to at least skim the full piece. I also want to remind people to form their own opinions rather than defer to the opinions and emotions of other commenters. Eg, if you want to grab pitchforks, please do so because you genuinely want to (and have at least tried to inform yourself reasonably well by reading the piece and OP’s original and the discussion there), not because other people are grabbing pitchforks.
I’m writing some personal thoughts (those I mentioned in beginning) as a response to this comment.
Should a job listing/do-good opportunity be tagged community? It puts it in a different section which will be seen by fewer people.
Nice job, I was super happy to notice that :) 👍
As someone who read the whole piece, I think you could just read the bolded lines and read the explanatory bits below for those lines you find interesting/key. It’s also already an outline, so you could just read the bullets further to the left, and read the further-right bits as your curiosity and ethical compass direct you. Reading the leftward bits can always be assumed to function as a summary of an outline (and it’s the author’s fault if it doesn’t).
[EDIT: This is what Angelina Li did above, nice :) Hopefully if anyone finds any bit intriguing, they go read more in the source :)
The rest is me reflecting on EAs and appropriateness of summaries vs different types of info-gleaning]
I’m not confident that summarizing pieces like this for an EA audience [like, typical summary paragraph-style] really works tbh. Different EAs will need very different things from it. Eg, community builders will be way more interested in the CB section and want to read it in detail even if they disagree, so as to understand the modes of thinking that others might adopt and what they might want to refute or adapt to. This is also, after all, just someone’s personal reflections and won’t necessarily be the way EAs move forward on any of these things. And for reflections, summaries often cut reasoning and therefore lead to information cascades that need to be addressed later, I think. We already have way too much deference and information cascades in EA anyway, so I’d rather see people lean more toward engaging semi-deeply with material that is relevant to them, or not repeat ideas at all tbh. This leads me to say that each reader should be proactive [by reading the bolded/leftward parts of the outline themselves], and try to sort out the bits they care about or want to improve their thinking on, and read anything further on that carefully.
It’s totally okay to say “this isn’t really my bag, and I trust others to get it right eventually, so I’m not gonna engage with this”. And if you don’t trust others to get it right eventually (and the FTX debacle is certainly around a low-trust theme), I still think EAs should engage semi-deeply (enough to evaluate trust in others or actually do the better job yourself) or hardly at all (even if this means pulling back from EA til you have the spoons to check-in deeply on your concerns), because engaging lightly will probably only waste your time, confuse discussion, and waste the time of others if they retroactively have to correct misunderstandings that spread thanks to poor-quality/surface-level engagement. [I’ve gone on a long time which makes it sound like a big ask, but honestly I am just talking about semi-deep engagement (eg, reading the leftward parts of the full outline as the author intended when in flow with the work and any further details as needed) vs light engagement (reading a summary which I don’t think works for long pieces like this), not mandating very-deep engagement (reading the piece in full detail). So I think most people can do it.]
That said, I appreciate your sentiment, and I think a table of contents and better section titles would be extremely helpful for easier semi-deep engagement. Also, using a numbered outline instead of bullet points. I think these are also easier asks, less likely to get future posts hung up in procrastination-land.
This is an add-on to this comment I wrote and sort of to all the SBF-EA-related stuff I’ve written recently. I write this add-on mostly for personal reasons.
I’ve argued that we should have patience when assigning blame to EA leadership, and not assume leaders deserve blame or were ever necessarily incompetent in a way that counterfactual leaders would not have been. But this point is distinct from thinking there was nothing we could do or no signs to pay attention to. I don’t want to be seen as arguing there was nothing that EAs in general could do, so here are my actual thoughts on what was, on its own, enough to warrant distancing ourselves from SBF, which it looks like basically all of us, and non-EAs too, missed.
FWIW I do think mistakes were made around SBF. I’m just not willing to pin it on EA leaders specifically (yet), or even on EAs/EA itself specifically (to the exclusion of others). Anyone who watched SBF’s interviews, including journalists and finance people, could have seen red flags. IMO, the major red flags in retrospect were things anyone who was paying attention (I was only a bit, but even I messed up here) could see:
(1) SBF talking about ponzi schemes, and some of his testimony regarding crypto regulation (I think?) which apparently made the ponzi scheme possibility look more real.
Personal take and regrets: I saw neither of these myself, but my newly-EA gf thought they were morally troubling before the crash and told me. We had a couple of short conversations about it, which basically led to “Oof, IDK what to say” from me. I thought of looking it all up, or messaging prominent EAs on her word alone. But I did not, mostly due to confusion about what it meant… “Isn’t this just the nature of crypto as an asset, something all people buying crypto should know? Or is this unethical? Am I getting into the moral dilemma that EAs just shouldn’t do finance to E2G? Is that a bullet I want to bite, because I might have to argue that? And what’s my ‘ask’, what am I hoping to happen as a result of my messaging someone?”
I didn’t think of it as a red flag for the coming crash and bankruptcy, and I didn’t expect something to come out that could be formally charged as fraud. I guess someone who knows about ponzi schemes can say whether I was dumb not to think of any of this. But it was a red flag that he didn’t care about FTX users, and that he might not be “a good guy” (even by consequentialist standards: the balance gets way more complicated, and you can’t be anywhere close to sure enough to take such risks with citizens’ money). And regardless of SBF, it was a tip that the public consciousness was about to slant against crypto (even more than the growing disdain for “crypto bros” betrays), and that’s a risk of association.
I still kick myself for not messaging someone. It wouldn’t have given that much notice, a couple months maybe? But it might have helped EA distance itself proactively. Sigh.
(2) SBF’s violation of the Kelly criterion / biting bullets on the St. Petersburg paradox.
Personal take and expressing shock/light scolding: I never knew how “all-in” he was, but I’d have found that super alarming, and on this I think I’d have tried (more seriously than with #1) to talk to someone about it. Basically all I know about betting is that you “never bet it all; always leave enough to bet another day”, but I know that as the golden rule. It still troubles me that EAs and others seemingly thought SBF’s responses were philosophically neutral or something, when actually it was a glaring red flag that the company would fail, even without fraud. And also a red flag that he was kind of self-deluding, or trying too hard to be clever via breaking rules. Like. If you want to make more money to do more good, just do the thing that is already known to make the most money in the long run (Kelly); don’t instead pull numbers out of your ass to reinvent a wheel, except inevitably worse than before. This also tied into SBF acting way too morally sure of himself—personally I’d never bet earth’s entire future without others’ consent because of one moral theory coupled with the multiple universe theory, in regard to a situation that is called a paradox for good reason (it’s not supposed to be an easy decision, which generally means you should defer to group consensus!).
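[For anyone who wants the “never bet it all” intuition made concrete: here’s a toy sketch, not anything SBF or the article said, just the standard Kelly math. The numbers (a hypothetical 60% edge on an even-money bet) are made up for illustration. The point is that long-run log-growth, not single-bet expected value, is what compounding bankrolls care about, and betting everything has a log-growth of negative infinity because one loss is ruin.]

```python
import math

def log_growth(f: float, p: float) -> float:
    """Expected log-growth rate per even-money bet when staking a
    fraction f of the bankroll with win probability p (the quantity
    the Kelly criterion maximizes)."""
    if f >= 1.0:
        return float("-inf")  # betting everything: a single loss is ruin
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.6                # hypothetical 60% edge on an even-money bet
kelly = 2 * p - 1      # optimal fraction for even payout: f* = 2p - 1, here 0.2

print(f"Kelly fraction: {kelly:.2f}")
print(f"growth at Kelly stake:     {log_growth(kelly, p):+.4f}")
print(f"growth at near-all-in bet: {log_growth(0.999, p):+.4f}")
```

Even though every individual bet here has positive expected value, the near-all-in stake has sharply negative log-growth, so a repeated bettor almost surely goes broke. That’s why “all-in on a positive-EV gamble, repeatedly” reads as a red flag for eventual failure regardless of fraud.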
This all said: I think EAs’ philosophical naivety here, or brushing it off, is disappointingly normal? As evidenced by no one else in the world writing a hit piece about SBF (that I’m aware of). Maybe bystander effect too: since that stuff is way more public than the ex-Alameda employee complaints (but CEA investigated those, at least kinda, idk yet), it’d be normal to think “Well, lots of people are seeing this, and if no one else, including FTX investors, sees it as a problem, I guess it must be okay.” Idk. I’d like EAs and non-EAs to do better at pinpointing problematic actors in this regard (and we can only control EA, so we should focus on this failure mode a bit), but my complaints are all qualitatively different from what the Time article is talking about.
I expect I’m not the only one who feels as I do re: 1 and 2, including vague and specific guilt, even though I was by all accounts a total outsider. I’m guessing most people just don’t talk about it, and if I’m not the only one, that’s one reason it feels very weird to me to pin it on EA leaders (as of right now).
That basically everyone missed or ignored these flags does not, I think, bode well for the idea that replacing EA leaders means it would have been caught, or that replacements will do better. As a silver lining, I expect the odds of catching bad signs like this to go up in future for all potential leaders, because we will have learned this hard lesson and the lesson will be made overt to any newly elected people. But I still think we want at least one designated person who would have caught it with or without the ex-Alameda reports, regardless of what could have been gleaned from those, because I think some sort of fiasco could have been caught either way. Surprisingly, I consider those relatively minor flags compared to 1 and 2. The difference is that for those, it’s EA leaders who take the blame, whereas for 1 and 2 it’s basically everyone who was paying a bit of attention.
Most humans won’t catch troubling dark-triad actors. That’s probably okay, because we don’t want most people to have low-trust personalities. As things stand, I’d be more in favor of adding a new person to the leadership mix, or hiring a social-risk specialist or something for the CH team: someone whose overt job is to catch signs of unethical and troubling behavior by EA and EA-adjacent people, and who is structurally greenlighted to navigate possibly-manipulative people as though they are probably acting in bad faith, so as not to be as easily fooled as most leadership, I think, would be in cases like SBF’s.
I could say a lot more, be more precise, and double-check some stuff in #1 which I still never did, but this is just a shortform.
(Sorry I took so long to come back) Thanks for clarifying. Hm, I’m surprised then that it really seems like the journalist didn’t turn up such quotes about fraud. I do think you are right that many of them expected a crash-and-burn… of some sort. I feel like I should have written something more precise like “crash and burn 3 years later, after making 15B on paper”, which comes with so many signals over the years that if I were such a person I’d end up discounting my early suspicions. If I were in their or CEA’s shoes, I’d probably have expected something like what happened with Tara’s company, a crash and burn pretty soon after (2019?), so I’d be assuming something got fixed along the way if not. Especially given how an ex-employee (or employees?) talked about burning through the Asian arbitrage dollars with bad trade decisions… they’d have had to fix it, right, or they’d have gone belly-up way sooner? I guess crypto was just that much of a gold rush, with so few “adults in the room”, that they could keep fudging their numbers for that long..?
Maybe the investigation was worse than useless in the end, but reasonably, any action taken was going to start with an investigation. It depends on the quality of the investigation, but for now I’m much more comfortable considering this a bug of the world than something to blame CEA for.
[Edit: This isn’t to say that I think no mistakes were made. But my complaints are not focused on EA leaders specifically (I’m hesitant to call out any single person til CEA’s commissioned investigation is complete), and are different from what the article discusses. I discuss that in shortform]
I’ve read this comment a few times, and my brain goes ”???” whenever I get to your last clause: “I also had quite a strong reaction that nobody seemed to be acting on all of these warning flags”
I just don’t get it in a way that connects to my reading of the article. What are “all these warning flags”, and what counts as “inaction”? I don’t want to say your take is wrong, because you are sort of sharing feelings, but like… according to the article, ex-Alameda employees don’t seem to think that those flags were warning flags for the massive fraud and crash-and-burn failure that was to come. And re: inaction, the article says CEA did an internal investigation in 2019 (it drops the info kinda randomly; as you say, the article isn’t well-optimized for coming away with an understanding of the details). And idk what new warning flags came after 2019; I’m not seeing any in the article.
I mostly like your comment, but I’m also left wondering… Do you know things not in the article? Did I miss something? [Is this just a “vibe” we will disagree on regardless?] I can’t quite reconcile your take.
[Edit: I had been thinking about asking this over DM for a couple days, but now that this post is no longer an active topic, I figured, “what the hay, ask it in thread”. You can answer over DM if you prefer, or ignore it since the post is giving its dying breaths, np.]
I vouch for Monica as a kind, intelligent person who has indeed focused much of her career around helping people with autism :) I haven’t worked with her myself, but we are in the same local EA community.
If you have autism, (with or without formal diagnosis), there’s nothing to lose by reaching out, and a lot you might gain :)
Wow. Sincere apologies you went through that. Even if Kat and Emerson thought they were being reasonable (no comment), and/or even if bad instances were few and far between (no comment), such instances would affect me and most people I know very deeply. Probably including the multi-month hangover and residual pain today. And that matters, and is something we need managers/bosses/colleagues to consider. Even if it was only painful at the time, that would matter. Really sorry.
P.S. I previously put a “changed my mind” react to this comment, but I really meant “brought new things to mind”. Put them in other comments.