I think some of us owe FLI an apology for assuming heinous intentions where a simple (albeit dumb) mistake was made.
I can imagine this must have been a very stressful period for the entire team, and I hope we as a community become better at waiting for the entire picture instead of immediately reacting and demanding things left and right.
I just wanted to chip in to say that this does indeed seem to have been a very stressful period for the team.
I cannot read their minds but it certainly seems possible to me that part of the reason some folks could find a situation like this stressful is precisely because they felt that some of the objections and critical comments were reasonable.
The statement says in point 8 of the FAQ (my emphasis):
The way we see it, we rejected a grant proposal that deserved to be rejected, and challenging, reasonable questions have been asked as to why we initially considered it and didn't reject it earlier. We deeply regret that we may have inadvertently compromised the confidence of our community and constituents. This causes us huge distress, as does the idea that FLI or its personnel would somehow align with ideologies to which we are fundamentally opposed. We are working hard to continue improving the structure and process of our grantmaking processes, including more internal and (in appropriate cases) external review. For starters, for organizations not already well-known to FLI or clearly unexceptionable (e.g. major universities), we will request and evaluate more information about the organization, its personnel, and its history before moving on to additional stages.
Maybe this is a super weird thing to say but, were I a staff member at a place affected by this kind of thing, my distress would have been because: I was shocked / upset myself that the grant seemed to have nearly been given and I would have been really hurt and shaken by that, confused about what had happened given senior leadership were not able to respond and frustrated I couldn't get a sooner reply, really disappointed / angry about the initial response from Max Tegmark which seemed poor and didn't represent the values of an organisation I wanted to work for, etc. etc.
I'm absolutely not claiming that anyone at FLI feels like this! But I just wanted to say that just because something was hard for staff, doesn't necessarily mean it was hard because the critical comments were wrong/misguided.
Strong agree with "just because something was hard for staff, doesn't necessarily mean it was hard because the critical comments were wrong/misguided", though I think "part of the reason some folks could find a situation like this stressful is precisely because they felt that some of the objections and critical comments were reasonable" doesn't differentiate between different worlds; I think there would be a lot of flurry and frenzy and stress basically independently of the reasonableness of the critique (within some bounds).
To be fair, the initial statement was incredibly bad, and I do not regret condemning it. They were extremely defensive in response to very obvious and reasonable questions, and were very ignorant about the nature of the newspaper in question.
I think if we had refrained from criticizing their initial statement, their final, formal statement would have been a lot worse, so if anything, we did them a favour. But I do agree that speculation about Max being a nazi or something was unwarranted.
edit: After reading some of the comments below, I think my initial statement here was unnecessarily glib. There were definitely comments attacking Max that jumped to conclusions and were far too quick to assume extreme malice, and I don't think he should be grateful for those. I still maintain that the original statement was poor and that criticism and questions were warranted.
Jan 13, 6:46am: Shakeel Hashim (speaking for himself and not for CEA; +110 karma, +109 net agreement as of the 15th) writes, "If this is true it's absolutely horrifying. FLI needs to give a full explanation of what exactly happened here and I don't understand why they haven't. If FLI did knowingly agree to give money to a neo-Nazi group, that's despicable. I don't think people who would do something like that ought to have any place in this community."
Jan 13, 9:18pm: Shakeel follows up, repeating that it's really weird that FLI hasn't already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that's why they haven't spoken up.
Jan 14, 3:43am: You (titotal) comment, "If the letter is genuine (and they have never denied that it is), then someone at FLI is either grossly incompetent or malicious. They need to address this ASAP."
Jan 14, 8:16am: Jason comments (+15 karma, +13 net agreement as of the 15th): "I think it very likely that FLI would have made a statement here if there were an innocent or merely negligent explanation (e.g., the document is a forgery, or they got duped somehow into believing the grantee was related to FLI's stated charitable purposes and not pro-Nazi). So, unless there is a satisfactory explanation forthcoming, the stonewalling strongly points to a more sinister one."
To be clear, this is Shakeel saying "I don't understand why [FLI hasn't given a full explanation]" six hours after the article came out / two hours after EAs started discussing it, at 9:46am Boston time. (FLI is based in Boston.) And Jason accusing FLI of "stonewalling" one day after the article's release.
[Update 1/21: Jason says that he was actually thinking of FLI stonewalling Expo, not FLI stonewalling the EA Forum. That makes a big difference, though I wish Jason had been clear about this in his comments, since I think the aggregate effect of a bunch of comments like this on the EA Forum was to cause myself and others to think that Tegmark was taking a weirdly long time to reply to the article or to the EA Forum discussion.]
(And I'm only mentioning the explicit condemnation of FLI for not speaking up sooner here. The many highly upvoted and agreevoted EA Forum comments roasting FLI and making confident claims about what happened prior to Tegmark's comment, with language like "the squalid character of Tegmark's choices", are obviously a further reason Tegmark / FLI might have wanted to rush out a response.)
The level of speed-in-replying demanded by EAs in this case (and endorsed by the larger EA Forum community, insofar as we strongly upvoted and up-agreevoted those comments) is frankly absurd, and I do think several apologies are owed here.
(Like, "respond within two hours of a 7am forum post" is wildly absurd even if we're adopting a norm of expecting people to just blurt out their initial thoughts in real time, warts and errors and all. But it's even more absurd if we're demanding carefully crafted Public Statements that make no missteps and have no PR defects.)
Thanks for calling me out on this; I agree that I was too hasty to call for a response.
I'm glad that FLI has shared more information, and that they are rethinking their procedures as a result of this. This FAQ hasn't completely alleviated my concerns about what happened here; I think it's worrying that something like this could get to the stage it did without being flagged (though again, I'm glad FLI seems to agree with this). And I also think that it would have been better if FLI had shared some more of the FAQ info with Expo too.
I do regret calling for FLI to speak up sooner, and I should have had more empathy for the situation they were in. I posted my comments not because I wanted to throw FLI under the bus for PR reasons, but because I was feeling upset; coming on the heels of the Bostrom situation, I was worried that some people in the EA community were racist, or at least not very sensitive about how discussions of race-related things can make people feel. At the time, I wanted to do my bit to make it clear, in particular to other non-white people who felt similarly to me, that EA isn't racist. But I could and should have done that in a much better way. I'm sorry.
Thank you for making the apology; you have my approval for that! I also like your apology on the other thread; your words are hopeful for CEA going in a good direction.
Some feedback/reaction from me that I hope is helpful. In describing your motivation for the FLI comment, you say that it was not to throw FLI under the bus, but because of your fear that some people would think EA is racist, and you wanted to correct that. To me, that is a political motivation, not much different from a PR motivation.
To gesture at the difference (in my ontology) between PR/political motivations and truth-seeking motivations:
PR/political
you want people to believe a certain thing (even if it's something you yourself sincerely believe), in this case, that EA is not racist
it's about managing impressions and reputations (e.g. EA's reputation as not racist)
Your initial comment (and also the Bostrom email statement) both struck me as "performative" in how they demonstrated really harsh and absolute condemnation ("absolutely horrifying", "[no] place in this community", "recklessly flawed and reprehensible"; granted that you said "if true", but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking as the first thing you say on a topic[1] feels pretty out of place to me within EA, and certainly isn't what I want in the EA I would design.
Extreme condemnation pattern-matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen on a lot of the Internet, and it feels pretty toxic. It feels like it's coming from a place of needing to demonstrate "I/we are not the bad thing".
So even if your motivation was "do your bit to make it clear that EA isn't racist", that does strike me as still political/PR (even if you sincerely believe it).
(And I don't mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to, and harm to your own reputation through association. Upsetness is real, and caring about reputation can come from a really good place.)
I could write more on my feelings about PR/political stuff, because my view is not that it's outright "bad/evil" or anything, more that caution is required.
Truth-seeking / info-propagation
Such comments focus more on sharing the author's beliefs (not performing them)[2] and explaining how they reached them, e.g. "this is what I think happened, this is why I think that" and inferences they're making, and what makes sense. They tally uncertainty, and they leave open room for the chance they're mistaken.
To me, the ideal spirit is "let me add my cognition to the collective so we all arrive at true beliefs" rather than "let me tug the collective beliefs in the direction I believe is correct" or "I need to ensure people believe the correct thing" (and especially not "I need people to believe the correct thing about me").
My ideal CEA comms strategy would conceive of itself as having the goal of causing people to have accurate beliefs foremost, even when that makes EA look bad. That is the job: not to ensure EA looks good, but to ensure EA is perceived accurately, warts and all.
(And I'm interested in attracting to EA people who can appreciate that large movements have warts, who can tolerate weirdness in beliefs, and who get that movement leaders make mistakes. I want the people who see past that to the ideas and principles that make sense, and to the many people (including you, I'd wager) who are working very hard to make the world better.)
Encouragement
I don't want to respond to a step in the right direction (a good apology) with something that feels negative, but it feels important to me that this distinction is deeply understood by CEA and EA in general, hence me writing it up for good measure. I hope this is helpful.
To me, the ideal spirit is "let me add my cognition to the collective so we all arrive at true beliefs" rather than "let me tug the collective beliefs in the direction I believe is correct" or "I need to ensure people believe the correct thing."
I like this a lot.
I'll add that you can just say out loud "I wish other people believed X" or "I think the correct collective belief here would be X", in addition to saying your personal belief Y.
(An example of a case where this might make sense: You think another person or group believes Z, and you think they rationally should believe X instead, given the evidence available to them. You yourself believe a more-extreme proposition Y, but you don't think others have enough evidence to believe Y yet; e.g., your belief may be based on technical expertise or hard-won life-experience that the other parties don't have.)
It's possible to care about the group's beliefs, and try to intervene on them, in a way that's honest and clear about what you're doing.
"absolutely horrifying", "[no] place in this community", "recklessly flawed and reprehensible"
[...]
That tone and manner of speaking as the first thing you say on a topic[fn] feels pretty out of place to me within EA, and certainly isn't what I want in the EA I would design.
Speaking locally to this point: I don't think I agree! My first-pass take is that if something's horrible, reprehensible, flawed, etc., then I think EAs should just say so. That strikes me as the default truth-seeking approach.[1]
There might be second-order reasons to be more cautious about when and how you report extreme negative evaluations (e.g., to keep forum discussions from degenerating as people emotionally trigger each other), but I would want to explicitly flag that this is us locally departing from the naive truth-seeking approach ("just say what seems true to you") in the hope that the end result will be more truth-seeky via people having an easier time keeping a cool head.
(Note that I'm explicitly responding to the "extreme language" side of this, not the "was this to some extent performative or strategic?" side of things.)
With the caveat that maybe evaluative judgments in general get in the way of truth-seeking, unless they're "owned" NVC-style, because of common confusions like "thinking my own evaluations are mind-independent properties of the world". But if we're allowing mild evaluative judgments like "OK" or "fine", then I think there's less philosophical basis for banning more extreme judgments like "awesome" or "terrible".
I think I agree with your clarification, and was in fact conflating the mere act of speaking with strong emotion with speaking in a way that felt more like a display. Yeah, I do think it's a departure from naive truth-seeking.
In practice, I think it is hard, though I do think it is hard for the second-order reasons you give and others. Perhaps an ideal is that people share strong emotion when they feel it, but in some kind of format/container/manner that doesn't shut down discussion or get things heated. "NVC" style, perhaps, as you suggest.
Fwiw, I do think "has no place in the community", without being owned as "no place in my community" or "shouldn't have a place in the community", is probably too high a simulacrum level by default (though this isn't necessarily a criticism of Shakeel; I don't remember what exactly his original comment said).
Cool. :) I think we broadly agree, and I don't feel confident about what the ideal way to do this is, though I'd be pretty sad and weirded out by a complete ban on expressing strong feelings in any form.
you want people to believe a certain thing (even if it's something you yourself sincerely believe), in this case that EA is not racist
it's about managing impressions and reputations (e.g. EA's reputation as not racist)
Your initial comment (and also the Bostrom email statement) both struck me as "performative" in how they demonstrated really harsh and absolute condemnation ("absolutely horrifying", "[no] place in this community", "recklessly flawed and reprehensible"; granted that you said "if true", but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking as the first thing you say on a topic[1] feels pretty out of place to me within EA, and certainly isn't what I want in the EA I would design.
Extreme condemnation pattern-matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen on a lot of the Internet, and feels pretty toxic. It feels like it's coming from a place of needing to demonstrate "I/we are not the bad thing".
So even if your motivation was "do your bit to make it clear that EA isn't racist", that does strike me as still political/PR (even if you sincerely believe it)
(And I don't mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to. Upsetness is real, and caring about reputation can come from a really good place.)
I could write more on my feelings about PR/political stuff, because my view is not that it's outright "bad/evil" or anything, more that caution is required.
IMO, this is an area EA needs to be way better in. For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn't matter that much for social reality, and EA should ideally be persuasive and control social reality.
For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn't matter that much for social reality, and EA should ideally be persuasive and control social reality.
I think the extent to which nuanced truth does not matter to "most of the world" is overstated.
I additionally think that EA should not be optimizing for deceiving people who belong to the class "most of the world".
Both because it wouldn't be useful if it worked (realistically most of the world has very little they are offering) and because it wouldn't work.
I additionally think that trying to play nitwit political games at or around each hecking other would kill EA as a community and a movement dead, dead, dead.
Thanks for this, Shakeel. This seems like a particularly rough time to be running comms for CEA. I'm grateful that in addition to having that on your plate, in your personal capacity you're helping to make the community feel more supportive for non-white EAs feeling the alienation you point to. Also for doing that despite the emotional labour involved, which typically makes me shy away from internet discussions.
Responding swiftly to things seems helpful in service of that support. One of the risks of that is that you can end up taking a particular stance immediately and then finding it hard to back down from it. But in fact you were able to respond swiftly, and then also quickly update and clearly apologise. Really appreciate your hard work!
(Flag that Shakeel and I both work for EV, though for different orgs under that umbrella)
Hey Shakeel, thanks for your apology and update (and I hope you've apologized to FLI). Even though call-out culture may be popular or expected in other contexts, it is not professional or appropriate for the Comms Head of CEA to initiate an interaction with an EA org by publicly putting them on blast and seemingly seconding what could be very damaging accusations (as well as inventing others by speculating about financial misconduct). Did you try to contact FLI before publicly commenting, to get an idea of what happened (perhaps before they could prepare their statement)?
I appreciate that you apologized for this incident, but I don't think you understand how deep a problem this behavior is. Get an anonymous account if you want to shoot from the hip. When you do it while your bio says "Head of Communications at CEA", it comes with a certain weight. Multiplying unfounded accusations, toward another EA org no less, is frankly acting in bad faith in a communications role.
Even though call-out culture may be popular or expected in other contexts, it is not professional or appropriate for the Comms Head of CEA to initiate an interaction with an EA org by publicly putting them on blast and seemingly seconding what could be very damaging accusations (as well as inventing others by speculating about financial misconduct). Did you try to contact FLI before publicly commenting, to get an idea of what happened (perhaps before they could prepare their statement)?
For what it's worth, this seems like the wrong way around to me. I don't know exactly about the role and responsibilities of the "Head of Comms", but in general I would like people in EA to be more comfortable criticizing each other, and to feel less constrained to first air all criticism privately and resolve things behind closed doors.
I think the key thing that went wrong here was the absence of a concrete logical argument or probabilities about why the thing that was happening was actually quite bad, plus the time pressure, which made the context of the conversation much worse. Another big thing was jumping to conclusions about FLI's character in a way that felt like it was trying to apply direct political pressure instead of focusing on propagating accurate information.
it is not professional or appropriate for the Comms Head of CEA to initiate an interaction with an EA org by publicly putting them on blast and seemingly seconding what could be very damaging accusations
Maybe there are special rules that EA comms people (or the CEA comms person in particular) should follow; I possibly shouldn't weigh in on that, since I'm another EA comms person (working at MIRI) and might be biased.
My initial thought, however, is that it's good for full-time EAs on the current margin to speak more from their personal views, and to do less "speaking for the organizations". E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen.
My criticism of Shakeel's post is very different from yours, and is about how truth-seeking the contents are and how well they incentivize truth-seeking from others, not about whether it's inherently unprofessional for particular EAs to strongly criticize other EAs.
Get an anonymous account if you want to shoot from the hip.
This seems ~strictly worse to me than making a "Shakeel-Personal" account separate from "Shakeel-CEA". It might be useful to have personal takes indexed separately (though I'd guess this is just not necessary, and would add friction and discourage people from sharing their real takes, which I want them to do more). But regardless, I don't think it's better to add even more of a fog of anonymity to EA Forum discussions if someone's willing to just say their stuff under their own name.
I'm glad anonymity is an option, but the number of anons in these discussions already makes it hard to know how much I might be double-counting views, makes it hard to contextualize comments by knowing what worldview, expertise, or experience they reflect, makes it hard to have sustained multi-month discussions with a specific person where we gradually converge on things, etc.
Idk, I think it might be pretty hard to have a role like Head of Communications at CEA and then separately communicate your personal views about the same topics. Your position is rather unique in allowing that. I don't see CEA becoming like MIRI in this respect. It comes across as though he's saying this in his professional capacity when you hover over his account name and it says "Head of Communications at CEA".
But the thing I think is most important about Shakeel's job is that it means he should know better than to throw around and amplify allegations. A marked personal account would satisfy me, but I would still hold it to a higher standard re: gossip, since he's supposed to know what's appropriate. And I expect him to want EA orgs to succeed! I don't think premature callouts for racism and demands to have already apologized are good-faith criticism to strengthen the community.
I mean, I want employees at EA orgs to try to make EA orgs succeed insofar as that does the most good, and try to make EA orgs fail insofar as that does the most good instead. Likewise, I want them to try to strengthen the EA community if their model says this is good, and to try to weaken it (or just ignore it) otherwise.
(Obviously, in each case I'd want them to be open and honest about what they're trying to do; you can oppose an org you think is bad without doing anything unethical or deceptive.)
I'm not sure what I think CEA's role should be in EA. I do feel more optimistic about EA succeeding if major EA orgs in general focus more on developing a model of the world and trying to do the most good under their idiosyncratic world-view, rather than trying to represent or reflect EA-at-large; and I feel more optimistic about EA if sending our best and brightest to work at EA orgs doesn't mean that they have to do massively more self-censoring now.
Maybe CEA or CEA-comms is an exception, but I'm not sold yet. I do think it's good to have high epistemic standards, but I see that as compatible with expressing personal feelings, criticizing other orgs, wanting specific EA orgs to fail, etc.
For what it's worth, speaking as a non-comms person, I'm a big fan of Rob Bensinger-style comms people. I like seeing him get into random twitter scraps with e/acc weirdos, or turning obnoxious memes into FAQs, or doing informal abstract-level research on the state of bioethics writing. I may be biased specifically because I like Rob's contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI's preferred public image was poured, but, look, I also just find that type of job description obviously offputting. In general I have liked getting to know the EAs I've gotten to know, and I don't know Shakeel that well, but I would like to get to know him better. I certainly am averse to the idea of wrist-slapping him back into this empty vessel, to the extent that we are blaming him for carelessness even when he specifies very clearly that he isn't speaking for his organization. I do think that his statement was hasty, but I also think we need to be forgiving of EAs whose emotions are running a bit hot right now, especially when they circle back to self-correct afterwards.
I like Rob's contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI's preferred public image was poured
I think this would also just be logically inconsistent; MIRI's preferred public image is that we not be the sort of org that turns people into vessels of perfect public emptiness into which the disembodied spirit of our preferred public image is poured.
"My initial thought, however, is that it's good for full-time EAs on the current margin to speak more from their personal views, and to do less 'speaking for the organizations'. E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen."
This seems a little naive. "We were all getting millions of dollars from this guy with billions to come, he's personal friends with all the movement leaders, but if we had had more open discussions we would not have taken the millions... really??"
Also, if you're in line to get millions of $$$ from someone, of course you are never going to share your candid thoughts about them publicly under your real name!
This seems a little naive. "We were all getting millions of dollars from this guy with billions to come, he's personal friends with all the movement leaders, but if we had had more open discussions we would not have taken the millions... really??"
I didn't make a specific prediction about what would have happened differently if EAs had discussed their misgivings about SBF more openly. What I'd say is that if you took a hundred SBF-like cases with lots of the variables randomized, outcomes would be a lot better if people discussed early serious warning signs and serious misgivings in public.
That will sometimes look like "turning down money", sometimes like "more people poke around to learn more", sometimes like "this person is less able to win others' trust via their EA associations", sometimes like "fewer EAs go work for this guy".
Sometimes it won't do anything at all, or will be actively counterproductive, because the world is complicated and messy. But I think talking about this stuff and voicing criticisms is the best general policy, if we're picking a policy to apply across many different cases and not just using hindsight to ask what an omniscient person would do differently in the specific case of FTX.
Also, if you're in line to get millions of $$$ from someone, of course you are never going to share your candid thoughts about them publicly under your real name!
I mean, Open Philanthropy is MIRI's largest financial supporter, and
Makes sense to me! I appreciate knowing your perspective better, Shakeel. :)
On reflection, I think the thing I care about in situations like this is much more "mutual understanding of where people were coming from and where they're at now", whether or not anyone technically "apologizes".
Apologizing is one way of communicating information about that (because it suggests we're on the same page that there was a nontrivial, foreseeable-in-advance fuck-up), but IMO a comment along those lines could be awesome without ever saying the words "I'm sorry".
One of my concerns about "I'm sorry" is that I think some people think you can only owe apologies to Good Guys, not to Bad Guys. So if there's a disagreement about who the Good Guys are, communities can get stuck arguing about whether X should apologize for Y, when it would be more productive to discuss upstream disagreements about facts and values.
I think some people are still uncertain about exactly how OK or bad FLI's actions here were, but whether or not FLI fucked up badly here, and whether or not FLI is bad as an org, I think the EA Forum's response was bad given the evidence we had at the time. I want our culture to be such that it's maximally easy for us to acknowledge that sort of thing and course-correct so we do better next time. And my intuition is that a sufficiently honest explanation of where you were coming from, that's sufficiently curious about and open to understanding others' perspectives, and sufficiently lacking in soldier-mindset-style defensiveness, can do even more than an apology to contribute to a healthy culture.
(In this case the apology is to FLI/Max, not to me, so it's mostly none of my business. :) But since I called for "apologies" earlier, I wanted to consider the general question of whether that's the thing that matters most.)
I find myself disliking this comment, and I think it's mostly because it sounds like you 1) agree with many of the blunders Rob points out, yet 2) don't seem to have learned anything from your mistake here. I don't think many do or should blame you, and I'm personally concerned about repeated similar blunders on your part costing EA much loss of outside reputation and internal trust.
Like, do you think that the issue was that you were responding in heat, and if so, will you make a future policy of not responding in heat in future similar situations?
I feel like there are deeper problems here that wonāt be corrected by such a policy, and your lack of concreteness is an impedance to communicating such concerns about your approach to CEA comms (and is itself a repeated issue that wonāt be corrected by such a policy).
FWIW, I don't really want Shakeel to rush into making public promises about his future behavior right now, or big public statements about long-term changes to his policies and heuristics, unless he finds that useful for some reason. I appreciated hearing his thoughts, and would rather leave him space to chew on things and figure out what makes sense for himself. If he or CEA make the wrong updates by my lights, then I expect that to be visible in future CEA/Shakeel actions, and I can just wait and criticize those when they happen.
FTX collapsed on November 8th; all the key facts were known by the 10th; CEA put out their statement on November 12th. That is a totally reasonable timeframe in which to respond. I would have hoped that this experience would make CEA sympathetic to a fellow EA org (with far fewer resources than CEA) experiencing a media crisis, rather than being so quick to condemn.
I'm also not convinced that a Head of Communications, working for an organization with a very restrictive media policy for employees, commenting on a matter of importance for that organization, can really be said to be operating in a personal capacity. Despite claims to the contrary, I think it's pretty reasonable to interpret these as official CEA communications. Skill in a PR role is as much about what you do not say as what you do.
The eagerness with which people rushed to condemn is frankly a warning sign for involution. We have to stop the pointless infighting, or it's all we will end up doing.
Just a quick note to say I don't think everything in your comment above is an entirely fair characterisation of the comments.
Two specific points (I haven't checked everything you say above, so I don't claim this is exhaustive):
I think you're mischaracterising Shakeel's 9:18pm response quite significantly. You paraphrased him as saying he sees no reason FLI wouldn't have released a public statement, but that is, I think, neither the text nor the spirit of that comment. He specifically acknowledged he might be missing some reasons. He said he thinks the lack of response is "very weird", which seems pretty different to me from "I see no reason for this". Here's some quoting, but it's so short people can just read the comment :P "Hi Jack - reasonable question! When I wrote this post I just didn't see what the legal problems might be for FLI… Jason's comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense. I do still think it's very weird that FLI hasn't condemned Nya Dagbladet though"
You also left out that Shakeel did already apologise to Max Tegmark for, in his words, "jumping to conclusions" when Max explained a reason for the delay, which I think is relevant to the timeline you're setting out here.
I think both those things are relevant to how reasonable some of these comments were, and to what extent apologies might be owed.
I think you're mischaracterising Shakeel's 9:18pm response quite significantly.
The comments are short enough that I should probably just quote them here:
Comment 1: "The following is my personal opinion, not CEA's. If this is true it's absolutely horrifying. FLI needs to give a full explanation of what exactly happened here and I don't understand why they haven't. If FLI did knowingly agree to give money to a neo-Nazi group, that's despicable. I don't think people who would do something like that ought to have any place in this community."
Comment 2: "Hi Jack - reasonable question! When I wrote this post I just didn't see what the legal problems might be for FLI. With FTX, there are a ton of complications, most notably with regards to bankruptcy/clawbacks, and the fact that actual crimes were (seemingly) committed. This FLI situation, on face value, didn't seem to have any similar complications - it seemed that something deeply immoral was done, but nothing more than that. Jason's comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense. I do still think it's very weird that FLI hasn't condemned Nya Dagbladet though - CEA did, after all, make it very clear very quickly what our stance on SBF was."
My summary of comment 2: "Shakeel follows up, repeating that he sees no reason why FLI wouldn't have already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that's why they haven't spoken up."
I think this is a fine summary of the gist of Shakeel's comment - obviously there isn't literally "no reason" here (that would contradict the very next part of my sentence, "and raises the possibility that FLI has maybe done sinister questionably-legal things and that's why they haven't spoken up"), but there's no good reason Shakeel can see, and Shakeel reiterates that he thinks "it's very weird that FLI hasn't condemned Nya Dagbladet".
The main thing I was trying to point at is that Shakeel's first comment says "I don't understand" why FLI hasn't given "a full explanation of exactly what happened here" (the implication being that there's something really weird and suspicious about FLI not having already released a public statement), and Shakeel's second comment doubles down on that basic perspective (it's still weird and suspicious / he can't think of an innocent explanation, though he acknowledges a non-innocent explanation).
That said, I think this is a great context to be a stickler about saying everything precisely (rather than relying on "gists"), and I'm generally a fan of the ethos that cares about precision and literalness. :) Being completely literal, "he sees no reason" is flatly false (at least if "seeing no reason" means "you haven't thought of a remotely plausible motivation that might have caused this behavior").
I'll edit the comment to say "repeating that it's really weird that FLI hasn't already made a public statement", since that's closer to being a specific sentiment he expresses in both comments.
You also left out that Shakeel did already apologise to Max Tegmark for, in his words, "jumping to conclusions" when Max explained a reason for the delay, which I think is relevant to the timeline you're setting out here.
I think this is a different thing, but it's useful context anyway, so thanks for adding it. :)
I upvoted this, but disagreed. I think the timeline would be better if it included:
November 2022: FLI inform Nya Dagbladet Foundation (NDF) that they will not be funding them
15 December 2022: FLI learn of media interest in the story
I therefore don't think it's "absurd" to have expected FLI to repudiate NDF sooner. You could argue that apologising for their mistake before the media interest arose would have done more harm than good by drawing attention to it (and, by association, to NDF), but once they became aware of the media attention, I think they should have issued something more like their current statement.
I also agreed with the thrust of titotal's comment that their first statement was woefully inadequate (it was more like "nothing to see here" than "oh damn, we seriously considered supporting an odious publication and we're sorry"). I don't think lack of time gets them off the hook here, given they should have expected Expo to publish at some point.
I don't think anyone owes an apology for expecting FLI to do better than this.
(Note: I appreciate Max Tegmark was dealing with a personal tragedy (for which, my condolences) at the time of this becoming "a thing" on the EA Forum, so I of course wouldn't expect him to be making quick-but-considered replies to everything posted on here at that time. But I think there's a difference between that and the speed of the proper statement.)
***
FWIW I also had a different interpretation of Shakeel's 9:18pm comment than what you write here:
"Jan 13, 9:18pm: Shakeel follows up, repeating that he sees no reason why FLI wouldn't have already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that's why they haven't spoken up."
Shakeel said "Jason's comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense." - this suggested to me that Shakeel was trying to be charitable, and to understand the reasons FLI hadn't replied more quickly.
Only a subtle difference, but I wanted to point that out.
November 2022: FLI inform Nya Dagbladet Foundation (NDF) that they will not be funding them
15 December 2022: FLI learn of media interest in the story
Yeah, if the early EA Forum comments had explicitly said "FLI should have said something public about this as soon as they discovered that NDF was bad", "FLI should have said something public about this as soon as Expo contacted them", or "FLI should have been way more responsive to Expo's inquiries" - and if we'd generally expressed a lot more uncertainty and been more measured in what we said in the first few days - then I might still have disagreed, but I wouldn't have seen this as an embarrassingly bad response in the same way.
I, as a casual reader who wasn't trying to carefully track all the timestamps, had no idea when I first skimmed these threads on Jan. 13-14 that the article had come out only a few hours earlier, and I didn't track timestamps carefully enough to register just how fast the EA Forum went from "a top-level post exists about this at all" to "wow, FLI is stonewalling us" and "wow, there must be something really sinister here given that FLI still hasn't responded". I feel like I was misled by these comments, because I just took for granted (to some degree) that the people writing these highly upvoted comments were probably not saying something transparently silly.
If a commenter like Jason thought that FLI was "stonewalling" because they didn't release a public statement about this in December, then it's important to be explicit about that, so casual readers don't come away from the comment section thinking that FLI is displaying some amazing level of unresponsiveness to the forum post or to the news article.
once they became aware of the media attention, I think they should have issued something more like their current statement.
This is less obvious to me, if they didn't owe a public response before Expo reached out to them. A lot of press inquiries don't end up turning into articles, and if the goal is to respond to press coverage, it's often better to wait and see what's in the actual article, since you might end up surprised by the article's contents.
I don't think anyone owes an apology for expecting FLI to do better than this.
"Do better than this", notably, swaps out concrete actions for a much more general question, one that's closer to "What's the correct overall level of affect we should have about FLI right now?".
If we're going to have "apologize when you mess up enough" norms, I think they should be more about evaluating local process, and less about evaluating the overall character of the person you're apologizing to. (Or even their character-in-this-particular-case, since it's possible to owe someone an apology even if that person owes an apology too.) "Did I fuck up when I did X?" should be a referendum on whether the local action was OK, not a referendum on the people you fucked up at.
More thoughts about apology norms in my comment here.
Thanks for this comment and timeline, I found it very useful.
I agree that "respond within two hours of a 7am forum post" seems like an unreasonable standard, and I also agree that some folks rushed too quickly to condemn FLI or make assumptions about Tegmark's character/choices.
I do want to illustrate a related point: when the Bostrom news hit, many folks jumped to defend Bostrom's apology as reasonable because it consisted of statements that Bostrom believed to be true, arguing that this reflects truth-seeking and good epistemics, and that this is something the forum and community should uphold.
But if I look at Jason's comment: "So, unless there is a satisfactory explanation forthcoming, the stonewalling strongly points to a more sinister one."
There is actually nothing technically untrue about this statement? There WAS a satisfactory explanation that eventuated.
Similarly, if I look at Shakeel's comment, the condemnation is conditional on whether the events happened: "If this is true it's absolutely horrifying", "If FLI did knowingly agree to give money to a neo-Nazi group, that's despicable", "I don't think people who would do something like that ought to have any place in this community".
The sentence about FLI speaking up sooner reflects Shakeel expressing his desire that FLI give a full explanation, and his confusion about why this has not yet happened; but reading the text of that statement, there's actually no explicit condemnation of FLI for not speaking up sooner.
Now, I raise these points not because I'm interested in defending Shakeel or Jason - the subtext does matter, and it's somewhat reasonable to read those statements, interpret them as explicit condemnation of FLI for not speaking up sooner, and push back accordingly.
But I'm just noting that there are a lot of upvotes on Rob's comment, and quite a few voices (I think rightfully!) saying that some commenters were too quick to jump to conclusions about Tegmark or FLI. And I don't see any commenters defending Jason's or Shakeel's statements with the "truth-seeking" and "good epistemics" argument that was used to defend Bostrom's apology.
Do you have any thoughts on what explains this seemingly inconsistent application of these standards? It might not even be accurately characterized as an inconsistency; I'm likely missing something here.
I expect this comment will just get reflexively downvoted given how tribal the commentary on the forum is these days, but I am curious about what drives this perceived difference, especially from those who self-identify as high decouplers, truth-seekers, or those who place themselves in the "prioritize epistemics" camp.
There is actually nothing technically untrue about this statement?
[...]
Do you have any thoughts on what explains this seemingly inconsistent application of these standards? It might not even be accurately characterized as an inconsistency; I'm likely missing something here.
"Technically not saying anything untrue" isn't the same as "exhibiting a truth-seeking attitude."
I'd say a truth-seeking attitude would have been more like "Before we condemn FLI, let's make sure we understand their perspective and can assess what really happened." Perhaps accompanied by "I agree we should condemn them harshly if the reporting is roughly as it looks right now." Similar statement, different emphasis. Shakeel's comment did appropriate hedging, but its main content was sharing a (hedged) judgment/condemnation.
Edit: I still upvoted your comment for highlighting that Shakeel (and Jason) hedged their comments. I think that's mostly fine! In hindsight, though, I agree with the sentiment that the community discussion was tending towards judgment a bit too quickly.
I agree with the sentiment that the community discussion was tending towards judgment a bit too quickly.
Yeah, I agree! I think my main point is that the impression you got of the community discussion "tending towards judgement a bit too quickly" is pretty reasonable despite the technically true statements they made, because it rests on reading the subtext - including what they didn't say or chose to focus on - rather than the literal text alone. That, I felt, was a major crux between those who thought Bostrom's apology was largely terrible vs. those who thought it was largely acceptable.
"Technically not saying anything untrue" isn't the same as "exhibiting a truth-seeking attitude."
Likewise, I also agree with this! I think what I'm most interested in here is what you (or others) think separates the two in general, because my guess is that those who were upset with Bostrom's apology would also agree with this statement. I think the crux is more likely that they would also think this statement applies to Bostrom's comments (i.e. they were closer to "technically not saying anything untrue" than to "exhibiting a truth-seeking attitude"), while those who disagree would think "Bostrom is actually exhibiting a truth-seeking attitude".
For example, if I apply your statement to Bostrom's apology: "I'd say a truth-seeking attitude would have been more like: 'Before I make a comment that's strongly suggestive of a genetic difference between races, or easily misinterpreted as a racist dogwhistle, let me make sure I understand their perspective and can assess how this apology might actually be interpreted', perhaps accompanied by 'I think I should make true statements if I can make sure they will be interpreted to mean what my actual views are, and I know they are the true statements that are most relevant and important for the people I am apologizing to.'
Similar statement, different emphasis. Bostrom's comment was 'technically true', but its main content was less an apology and more a raising of questions around a genetic component of intelligence, an expression of support for some definition of eugenics, and some use of provocative communication."
I think my point is less that "Shakeel's and Jason's comments are fine because they were hedged", and less about pointing out the empirical fact that they were hedged, and more that "Shakeel's and Jason's comments were not fine just because they contained true statements - and this standard should be applied similarly to Bostrom's apology, which was also not fine just because it contained true statements".
More speculatively: part of me gets the impression this is partly modulated by a dislike of typical SJW cancel culture (which I can resonate with), and therefore the truth-seeking defence is applied more strongly against condemnation of any kind, as opposed to just truth-seeking for truth's sake. But I'm not sure that this, if true, is actually optimizing for truth, nor that it's necessarily the best approach on consequentialist grounds, unless there's good reason to think that a heuristic of erring on the side of anti-condemnation in every situation is preferable to evaluating each case on its own merits.
That makes sense - I get why you feel like there are double standards.
I don't agree that there necessarily are.
Regarding Bostrom's apology, I guess you could say that it's part of "truth-seeking" to dive into any mistakes you might have made and acknowledge everything there is to acknowledge. (Whether we call it "truth-seeking" or not, that's certainly how apologies should be, in an ideal world.) On this point, Bostrom's apology was clearly suboptimal. It didn't acknowledge that there was more bad stuff to the initial email than just the racial slur.
Namely, in my view, it's not really defensible to say "technically true" things without some qualifying context, if those true things are easily interpreted in a misleadingly-negative or harmful-belief-promoting way on their own, or even interpreted as, as you say, "racist dogwhistles." (I think that phrase is sometimes thrown around so lightly that it seems a bit hysterical, but it does seem appropriate for the specific example of the sentence Bostrom claimed he "likes.")
Take, for example, a newspaper reporting on a person with autism who committed a school shooting. Given the widespread stigma against autism, it would be inappropriate to imply that autism is linked to these types of crimes without some sort of very careful discussion that doesn't make readers prejudiced against people on the spectrum. (I don't actually know if there's any such link.)
What I considered bad about Bostrom's apology was that he didn't say more about why his entire stance on "controversial communication" was a bad take.
Context matters: the initial email was never intended to be seen by anyone who wasn't in that early group of transhumanists. In a small, closed group, communication functions very differently. For instance, among EA friends, I've recently (after the FTX situation) made a joke about how we should run a scam to make money. The joke works because my friends have enough context to know I don't mean it. I wouldn't make the same joke in a group where it isn't common knowledge that I'm joking. Similarly, while I don't know much about the transhumanist reading list, it's probably safe to say that "we're all high-decouplers and care about all of humanity" was common knowledge in that group. Given that context, it's sort of defensible to think that there's not that much wrong with the initial email (apart from cringiness) other than the use of the racial slur. Bostrom did apologize for the latter (even viscerally, and unambiguously).
I thought there was some ambiguity in the apology about whether he was just apologizing for the racial slur, or whether he also meant the email in general when he described how he hated re-reading it. When I said that the apology was "reasonable," I interpreted him to mean the email in general. I agree he could have made this clearer.
In any case, that's one way to interpret "truth-seeking" - trying to get to the bottom of any mistakes that were made when apologizing.
That said, I think almost all the mentions of "truth-seeking is important" in the Bostrom discussion were about something else.
There was one faction of people who thought that people should be socially shunned for holding specific views on the underlying causes of group differences, and another faction that was like "it should be okay to say 'I don't know' if you actually don't know."
While a few people criticized Bostrom's apology for reasons similar to the ones I mentioned above (which I obviously think is reasonable!), my impression is that the people who were most critical of it did so for the "social shunning for not completely renouncing a specific view" reason.
For what it's worth, I agree that emphasis on truth-seeking can go too far. While I appreciated this part of EA culture in the discussion around Bostrom, I've several times found myself accusing individual rationalists of fetishizing "truth-seeking." :)
So, I certainly don't disagree with your impression that there can be biases on both sides.
I wanted to say a bit about the "vibe" / thrust of this comment when it comes to community discourse norms...
(This is somewhat informed by your comments on Twitter / Facebook, which are themselves phrased more strongly than this and are less specific in scope.)
I suspect you and I agree that we should generally encourage posters to be charitable in their takes and reasonable in their requests - and that it would be bad overall for discussions in general were this not the case. Being angry on the internet is often not at all constructive!
However, I think that being angry or upset where it seems like an organisation has done something egregious is very often an appropriate emotional response. I think that the ideal amount of expressing that anger / upset that community norms endorse is non-zero! And yes, when people are hurt they may go somewhat too far in what they request / suggest / speculate. But again, the optimal amount of "too strong requests" is non-zero.
I think that expressing those feelings of hurt / anger / upset explicitly (or implicitly, through the kinds of requests one is making) has many uses, and there are costs to restricting it too much.
Some uses of expressing it:
Conveying the sheer seriousness or importance of the question to the poster. That can be useful information for the organisation under scrutiny about whether / how much people think they messed up (which is itself information about whether / how much they actually messed up). It will lead to better outcomes if organisations in fact get the information that some people are deeply hurt by their actions. If the people who are deeply hurt cannot or do not express this, the organisation will not know.
Individuals within a community expressing the values they hold dear (and which of those are strong enough to provoke the strongest emotional reactions) is part of how a community develops and maintains norms about what behaviour is and isn't acceptable.
Some costs of restricting it:
People who have stronger emotional reactions are often closer to the issue. It is very hard, when you feel really hurt by something, to have to reformulate that in terms acceptable to people who are not at all affected by the thing.
If people who are really hurt by something get the impression from community norms that expressing their hurt is not welcome, they may well not feel welcome in the community at all. This seems extra bad if you care about diversity in the community and certain issues affect certain groups more (e.g. antisemitism, racism, sexism, etc.).
If people who are really hurt by something do not post, the discourse will be selected towards people who aren't hurt / don't care as strongly. That will systematically skew the discussion towards a specific set of reactions and lead you further away from understanding what people across the community actually think about something.
I think that approaching online discussions on difficult topics is really, really hard! I do not think I know what the ideal balance is. I have almost never before participated in such discussions and I'm personally finding my feet here. I am not arguing in favour of carte blanche for people making unreasonable angry demands.
But I want to push back pretty strongly against the idea that people should never be able to post hurt / upset comments, or that the comments above are very badly wrong. (Or that they warrant the things you said on Facebook / Twitter about EA discourse norms.)
P.S. I'm wondering whether you would agree with me on all of the above if the organisational behaviour were egregious enough by your / anyone's lights? [Insert thought experiment here about shockingly beyond-the-pale behaviour by an organisation that people on the forum express angry comments about.] If yes, then we just disagree on where / how to draw the line, not on whether there is a line at all. If not, then I think we have a more fundamental disagreement about how humans can be expected to communicate online.
I see "clearly expressing anger" and "posting when angry" as quite different things.
I endorse the former, but I rarely endorse the latter, especially in contexts like the EA Forum.
Let's distinguish different stages of anger:
The "hot" kind - when one is not really thinking straight, prone to exaggeration and uncharitable interpretations, etc.
The "cool" kind - where one can think roughly as clearly about the topic as about any other.
We could think of "hot" and "cool" anger as a spectrum.
Most people experience hot anger from time to time. But I think EA figures - especially senior figures - should model a norm of only posting on the EA Forum when fairly cool.
My impression is that, during the Bostrom and FLI incidents, several people posted with considerably more hot anger than I would endorse. In these cases, I think the mistakes were quite harmful, and may warrant public and private apologies.
As a positive example: Peter Hurford's blog post, which he described as "angry", showed a level of reasonableness and clarity that made it, in my mind, "above the bar" to publish. The text suggests a relatively cool anger. I disagree with some parts of the post, but I am glad he published it. At the meta level, my impression is that Peter was well within the range of "appropriate states of mind" for a leadership figure publishing a message like that in public.
I'm not sure how I feel about this proposed norm. I probably think that senior EA figures should at least sometimes post when they're feeling some version of "hot anger", as opposed to literally never doing so.
The way you defined "cool vs. hot" here is that it's about thinking straight vs. not thinking straight. Under that framing, I agree that you shouldn't post comments when you have reason to suspect you might temporarily not be thinking straight. (Or you should find a way to flag this concern in the comment itself, e.g., with an epistemic-status disclaimer or NVC-style language.)
But you also call these "different stages of anger", which suggests a temporal interpretation: hot anger comes first, followed by cool. And the words "hot" and "cool", to my ear, also suggest something about the character of the feeling itself.
I feel comfortable suggesting that EAs self-censor under the "thinking straight?" interpretation. But if you're feeling really intense emotion and it's very close in time to the triggering event, yet you think you're nonetheless thinking straight - or you think you can add appropriate caveats and context so people can correct for the ways in which you're not thinking straight - then I'm a lot more wary about adding a strong "don't say what's on your mind" norm here.
I suspect you and I agree that we should generally encourage posters to be charitable in their takes and reasonable in their requests
I think "charity" isn't quite the right framing here, but I think we should encourage posters to really try to understand each other; to ask themselves "what does this other person think the physical world is like, and what evidence do I have that it's not like that?"; to not exaggerate how negative their takes are; and to be mindful of biases and social dynamics that often cause people to have unrealistically negative beliefs about The Other Side.
However, I think that being angry or upset where it seems like an organisation has done something egregious is very often an appropriate emotional response. I think that the ideal amount of expressing that anger / upset that community norms endorse is non-zero!
I 100% agree! I happened to write something similar here just before reading your comment. :)
From my perspective, the goal is more "have accurate models" and "be honest about what your models are". In interpersonal contexts, the gold standard is often that you're able to pass someone else's ideological Turing Test.
Sometimes, your model really is that something is terrible! In cases like that, I think we should be pretty cautious about discouraging people from sharing what they really think about the terrible thing. (Like, I think "be civil all the time", "don't rock the boat", "be very cautious about criticizing other EAs" is one of the main processes that got in the way of people like me hearing earlier about SBF's bad track record - I think EAs in the know kept waaay too quiet about this information.)
It's true that there are real costs to encouraging EAs to routinely speak up about their criticisms - it can make the space feel more negative and aversive to a lot of people, which I'd expect to contribute to burnout and to some people feeling less comfortable honestly expressing their thoughts and feelings.
I don't know what the best solution is (though I think that tech like NVC can help a whole lot), but I'd be very surprised if the best solution involved EAs never expressing actually intense feelings in any format, no matter how much the context cries out for it.
Sometimes shit's actually just fucked up, and I'd rather have a community where people can say as much (even if not everyone agrees) than one where we're all performatively friendly and smiley all the time.
If people who are really hurt by something do not post, the discourse will be selected towards people who aren't hurt / don't care as strongly. That will systematically skew the discussion towards a specific set of reactions and lead you further away from understanding what people across the community actually think about something.
Seems right. Digging a bit deeper, I suspect weād disagree about what the right tradeoff to make is in some cases, based on different background beliefs about the world and about how to do the most good.
Like, we can hopefully agree that itās sometimes OK to pick the ātalk in a way that hurts some people and thereby makes those people less likely to engage with EAā side of the tradeoff. An example of this is that some people find discussion of food or veg*nism triggering (e.g., because they have an ED).
We could choose to hide discussion of animal products from the EA Forum in order to be more inclusive to those people; but given the importance of this topic to a lot of what EA does today, it seems more reasonable to just accept that weāre going to exclude a few people (at least from spaces like the EA Forum and EA Global, where all the different cause areas are rubbing elbows and itās important to keep the friction on starting animal-related topics very low).
If we agree that itās ever OK to pick the ātalk in way X even though it hurts some peopleā side of the tradeoff, then I think we have enough common ground that the remaining disagreements can be resolved (given enough time) by going back and forth about what sort of EA community we think has the best chance of helping the world (and about how questions of interpersonal ethics, integrity, etc. bear on what we should do in practice).
(Or that they warrant the things you said on facebook /ā twitter about EA discourse norms)
Oh, did I say something wrong? I was imagining that all the stuff I said above is compatible with what Iāve said on social media. Iād be curious which things you disagree with that I said elsewhere, since that might point at other background disagreements Iām not tracking.
Just a quick note to say thanks for such a thoughtful response! <3
I think you're doing a great job here modelling discourse norms and I appreciate the substance of your points!
Ngl I was kinda trepidatious opening the forum... but the reasonableness of your reply and warmth of your tone is legit making me smile! (It probably doesn't hurt that happily we agree more than I realised. :P )
I may well write a little more substantial response at some point but will likely take a weekend break :)
P.S. Real quick re social media... Things I was thinking about were phrases from fb like "EAs f'd up" and the "fairly shameful initial response" – which I wondered if were stronger than you were expressing here, but probably just you saying the same thing. And in this twitter thread you talk about the "cancel mob" – but I think you're talking there about a general case. You don't have to justify yourself on those; I'm happy to read it all via the lens of the comments you've written on this post.
Aw, that makes me really happy to hear. I'm surprised that it made such a positive difference, and I update that I should do it more!
(The warmth part, not the agreement part. I can't really control the agreement part; if we disagree then we're just fucked.)
Re the social media things: yeah, I stand by that stuff, though I basically always expect reasonable people to disagree a lot about exactly how big a fuck-up is, since natural language is so imprecise and there are so many background variables we could disagree on.
I feel a bit weird about the fact that I use such a different tone in different venues, but I think I like this practice for how my brain works, and plan to keep doing it. I definitely talk differently with different friends, and in private vs. public, so I like the idea of making this fact about me relatively obvious in public too.
I don't want to have such a perfect and consistent public mask/persona that people think my public self exactly matches my private self, since then they might come away deceived about how much to trust (for example) that my tone in a tweet exactly matches the emotions I was feeling when I wrote it.
I want to be honest in my private and public communications, but (even more than that) I want to be meta-honest, in the sense of trying to make it easy for people to model what kind of person I am and what kinds of things I tend to be more candid about, what it might mean if I steer clear of a topic, etc.
Trying too hard to look like I'm an open book who always says what's on his mind, never self-censors in order to look more polite on the EA Forum, etc. would systematically cause people to have falser beliefs about the delta between "what Rob B said" and "what Rob B is really thinking and feeling right now". And while I don't think I owe everyone a full print-out of my stream of consciousness, I do sorta feel like I owe it to people to not deliberately make it sound like I'm more transparent than I am.
This is maybe more of a problem for me than for other people: I'm constantly going on about what a big fan of candor and blurting I am, so I think there's more risk of people thinking I'm a 100% open book, compared to the risk a typical EA faces.
So, to be clear: I don't advocate that EAs be 100% open books. And separately, I don't perfectly live up to my own stated ideals.
Like, I think an early comment like this would have been awesome (with apologies to Shakeel for using his comments as an example, and keeping in mind that this is me cobbling something together rather than something Shakeel endorses):
Note: The following is me expressing my own feelings and beliefs. Other people at CEA may feel differently or have different models, and I don't mean to speak for them.
If this is true then I feel absolutely horrified. Supporting neo-Nazi groups is despicable, and I don't think people who would do something like that ought to have any place in this community. [mention my priors about how reliable this sort of journalism tends to be] [mention my priors about FLI's moral character, epistemics, and/or political views, or mention that I don't know much about FLI and haven't thought about them before] Given that, [rough description of how confident I feel that FLI would financially support a group that they knew had views like Holocaust-denialism].
But it's hard to be confident about what happened based on a single news article, in advance of hearing FLI's side of things; and there are many good reasons it can take time to craft a complete and accurate public statement that expresses the proper amount of empathy, properly weighs the PR and optics concerns, etc. So I commit to upvoting FLI's official response when it releases one (even if I don't like the response), to make it likelier that people see the follow-up and not just the initial claims.
I also want to encourage others to speak up if they disagree on any of this, including chiming in with views contrary to mine (which I'll try to upvote at least enough to make it obviously socially accepted to express uncertainty or disagreement on this topic, while the facts are still coming in). But for myself, my immediate response to this is that I feel extremely upset.
For context: Coming on the heels of the Bostrom situation, I feel seriously concerned that some people in the EA community think of non-white people as inherently low-status, and I feel surprised and deeply hurt at the lack of empathy to non-white people many EAs have shown in their public comments. I feel profoundly disgusted at the thought of racist ideas and attitudes finding acceptance within EA, and though I'll need to hear more about the case of FLI before I reach any confident conclusions about this case, my emotional reaction is one of anger at the possibility that FLI knowingly funded neo-Nazis, and a strong desire to tell EAs and non-EAs alike that this is not who we are.
The above hypothetical, not-Shakeel-authored comment meets a higher bar than what I think was required in this context – I think it's fine for EAs to be a bit sloppier than that, even if they work at CEA – but hopefully it directionally points at what I mean when I say that there are epistemically good ways to express strong feelings. (Though I don't think it's easy, and I think there are hard tradeoffs here: demanding more rigor will always cause some number of comments to just not get written at all, which will cause some good ideas and perspectives to never be considered. In this case, I think a fair bit more rigor is worth the cost.)
The concreteness is helpful because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time-consuming!) – especially so if you've got skin in the game, and across your life you often come up against things like this to respond to, and you keep having the pressure to force your feelings into a more acceptable format.
I reckon that crafting a message like that if I were upset about something could well take half a work day. And I'd have in my head all the being upset / being angry / being scared people on the forum would find me unreasonable / resentful that people might find me unreasonable / doubting myself the whole time. (Though I know plausibly that I'm in part just describing the human condition there. Trying to do things is hard...!)
Overall, I think I'm just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we're willing to make.
(It makes me think it'd be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention etc., and then see where we'd draw the line of acceptable / not!)
There's an angry top-level post about evaporative cooling of group beliefs in EA that I haven't written yet, and won't until it would no longer be an angry one. That might mean that the best moment has passed, which will make me sad for not being strong enough to have competently written it earlier. You could describe this as my having been chilled out of the discourse, but I would instead describe it as my politely waiting until I am able and ready to explain my concerns in a collected and rational manner.
I am doing this because I care about carefully articulating what I'm worried about, because I think it's important that I communicate it clearly. I don't want to cause people to feel ambushed and embattled; I don't want to draw battle lines between me and the people who agree with me on 99% of everything. I don't want to engender offense that could fester into real and lasting animosity, in the very same people who, if approached collaboratively, would pull with me to solve our mutual problem out of mutual respect and love for the people who do good.
I don't want to contribute to the internal divisions growing in EA. To the extent that it is happening, we should all prefer to nip the involution in the bud – if one has ever been on team Everyone Who Logically Tries To Do The Most Good, there's nowhere to go but down.
I think that if I wrote an angry top-level post, it would deserve to be downvoted into oblivion, though I'm not sure it would be.
I think on the margin I'm fine with posts that will start fights being chilled. Angry infighting and polarization are poisonous to what we're trying to do.
I barely give a gosh-guldarn about FLI or Tegmark outside of their (now reduced) capacity to reduce existential risk.
Obviously I'd rather bad things not happen to people, and not happen to good people in particular, but I don't specifically know anyone from FLI, and they are a feather on the scales next to the full set of strangers who I care about.
If Tegmark or FLI was wronged in the way your comments and others imply, you are correct and justified in your beliefs. But if the apology or the current facts do not make that status clear, there's an object-level problem, and it's bad to be angry that they are wronged, or to build further arguments on that belief.
I think it's pretty obvious at this point that Tegmark and FLI were seriously wronged, but I barely care about any wrong done to them and am largely uninterested in the question of whether it was wildly disproportionate or merely sickeningly disproportionate.
I care about the consequences of what weāve done to them.
I care about how, in order to protect themselves from this community, the FLI is
working hard to continue improving the structure and process of our grantmaking processes, including more internal and (in appropriate cases) external review. For starters, for organizations not already well-known to FLI or clearly unexceptionable (e.g. major universities), we will request and evaluate more information about the organization, its personnel, and its history before moving on to additional stages.
I care about how everyone who watched this happen will also realize the need to protect themselves from us by shuffling along and taking their own pulses. I care about the new but promising EAs who no one will take a chance on, the moonshots that won't be funded even though they'd save lives in expectation, the good ideas with "bad optics" that won't be acted on because of fear of backdraft on this forum. I care about the lives we can save if we don't rush to conclusions, rush to anger; if we can give each other the benefit of the doubt for five freaking minutes and consider whether it'd make any sense whatsoever for the accusation du jour to be what it looks like.
If what happened was that Max Tegmark or FLI gets many dubious grant applications, and this particular application made it a few steps through FLI's processes before it was caught, then expo.se's story and the negative response you object to on the EA Forum would be bad, destructive, and false. If this was what happened, it would absolutely deserve your disapproval and alarm.
I don't think this is true. What we know is:
An established (though hostile) newspaper gave an account with actual quotes from Tegmark that contradict his apparent actions
The bespoke funding letter, signed by Tegmark, explicitly promising funding ("approved a grant") conditional on registration of the charity
The hiring of the lawyer by Tegmark
When Tegmark edited his comment with more content, I was surprised by how positive the reception of this edit was, given that it simply disavowed funding extremist groups.
I'm further surprised by the reaction and changing sentiment on the forum in response to this post, which simply presents an exonerating story. This story is directly contradicted by the signed statement in the letter itself.
Contrary to the top-level post, it is false that it is standard practice to hand out signed declarations of financial support, with wording like "approved a grant", if substantial vetting remains. Also, it's extremely unusual for any non-profit to hire a lawyer to explain that a prospective grantee failed vetting in the application process. We also haven't seen any evidence that FLI actually communicated a rejection. Expo.se seems to have a positive track record – even accepting the aesthetic here that newspapers or journalists are untrustworthy, it's costly for an outlet to outright lie or misrepresent facts.
There are other issues with Tegmark's/FLI's statements (e.g. deflections about the lack of direct financial benefit to his brother; not addressing the material support the letter provided for registration, or the reasonable suspicion this was a ploy to produce the letter).
There's much more that is problematic underpinning this. If I had more time, I would start a long thread explaining how funding and family relationships could interact really badly in EA/longtermism for several reasons, and another about Tegmark's insertions into geopolitical issues, which are clumsy at best.
Another comment said the EA forum reaction contributed to actual harm to Tegmark/FLI by amplifying the false narrative. A look at Twitter, or at how the story continues and has been picked up in Vice, suggests to me that this isn't true. Unfortunately, I think the opposite is true.
The concreteness is helpful because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time-consuming!) – especially so if you've got skin in the game, and across your life you often come up against things like this to respond to, and you keep having the pressure to force your feelings into a more acceptable format.
Yep, I think it absolutely is.
It's also not an accident that my version of the comment is a lot longer and covers more topics (and therefore would presumably have taken way longer for someone to write and edit in a way they personally endorsed).
I don't think the minimally acceptable comment needed to be quite that long or cover quite that much ground (though I think it would be praiseworthy to do so), but directionally I'm indeed asking people to do a significantly harder thing. And I expect this to be especially hard in exactly the situations where it matters most.
I reckon that crafting a message like that if I were upset about something could well take half a work day. And I'd have in my head all the being upset / being angry / being scared people on the forum would find me unreasonable / resentful that people might find me unreasonable / doubting myself the whole time. (Though I know plausibly that I'm in part just describing the human condition there. Trying to do things is hard...!)
❤
Yeah, that sounds all too realistic!
I'm also imagining that while the author is trying to put together their comment, they might be tracking the fact that others have already rushed out their own replies (many of which probably suck from your perspective), and discussion is continuing, and the clock is ticking before the EA Forum buries this discussion entirely.
(I wonder if there's a way to tweak how the EA Forum works so that there's less incentive to go super fast?)
One reason I think it's worth trying to put in this extra effort is that it produces a virtuous cycle. If I take a bit longer to draft a comment I can more fully stand by, then other people will feel less pressure to rush out their own thoughts prematurely. Slowing down the discussion a little, and adding a bit more light relative to heat, can have a positive effect on all the other discussion that happens.
I've mentioned NVC a few times, but I do think NVC is a good example of a thing that can help a lot at relatively little time+effort cost. Quick easy hacks are very good here, exactly because this can otherwise be such a time suck.
A related hack is to put your immediate emotional reaction inside a "this is my immediate emotional reaction" frame, and then say a few words outside that frame. Like:
"Here's my immediate emotional reaction to the OP:
[indented italicized text]
And here are my first-pass thoughts about physical reality, which are more neutral but might also need to be revised after I learn more or have more time to chew on things:
[indented italicized text]"
This is kinda similar to some stuff I put in my imaginary Shakeel comment above, but being heavy-handed about it might be a lot easier and faster than trying to make it feel like an organic whole.
And I think it has very similar effects to the stuff I was going for, where you get to express the feeling at all, but it's in a container that makes it (a) a bit less likely that you'll trigger others and thereby get into a heated Internet fight, and (b) a bit less likely that your initial emotional reaction will get mistaken (by you or others) for an endorsed, carefully wordsmithed description of your factual beliefs.
Overall, I think I'm just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we're willing to make.
Yeah, this very much sounds to me like a topic where reasonable people can disagree a lot!
(It makes me think it'd be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention etc., and then see where we'd draw the line of acceptable / not!)
Ooooo, this sounds very fun. :) Especially if we can tangent off into science and philosophy debates when it turns out that there's a specific underlying disagreement that explains why we feel differently about a particular case.
To be clear, my criticism of the EA Forum's initial response to the Expo article was never "it's wrong to feel strong emotions in a context like this, and EAs should never publicly express strong emotions", and it also wasn't "it should have been obvious in advance to all EAs that this wasn't a huge deal".
If you thought I was saying either of those things, then I probably fucked up in how I expressed myself; sorry about that!
My criticism of the EA Forum's response was:
I think that EAs made factual claims about the world that weren't warranted by the evidence at the time. (Including claims about what FLI and Tegmark did, claims about their motives, and claims about how likely it is that there are good reasons for an org to want more than a few hours or days to draft a proper public response to an incident like this.) We were overconfident and following poor epistemic practices (and I'd claim this was noticeable at the time, as someone who downvoted lots of comments back then).
I think that at least some EAs deliberately leaned into bad epistemic practices here, out of a sense that prematurely and overconfidently condemning FLI would help protect EA's reputation.
The EA Forum sort of "trapped" FLI, by simultaneously demanding that FLI respond extremely quickly, but also demanding that the response be pretty exhaustive ("a full explanation of what exactly happened here", in Shakeel's words) and across-the-board excellent (zero factual errors, excellent empathizing and excellent displays of empathy, good PR both for reaching EAs and for satisfying the larger non-EA public, etc.). This sort of trap is not a good way to treat anyone, including non-EAs.
I think that many EAs' words and upvote patterns at the time created a social space in which expressing uncertainty, moderation, or counter-narrative beliefs and evidence was strongly discouraged. Basically, we did the classic cancel-culture echo-chamber thing, where groups update more and more extremely toward a negative view of X because they keep egging each other on with new negative opinions and data points, while the people with alternative views stay quiet for fear of the social repercussions.
The more general version of this phenomenon is discussed in the Death Spirals sequence, and in videos like ContraPoints' Canceling: there's a general tendency for many different kinds of social network to push themselves toward more and more negative (or more and more positive) views of a thing, when groups don't exert lots of deliberate and unusual effort to encourage dissent, voice moderation, explicitly acknowledge alternative perspectives or counter-narrative points, etc.
I think this is a special risk for EA discussions of heavily politicized topics, so if we want to reliably navigate to true beliefs on such topics – many of which will be a lot messier than the Tegmark case – we'll need to try to be unusually allowing of dissent, disagreement, "but what if X?", etc. on topics that are more emotionally charged. (Hard as that sounds!)
And Jason accusing FLI of "stonewalling" one day after the article's release.
Minor point: I read Jason talking about "stonewalling" as referring to FLI's communications with Expo.se, not to the communications (or lack thereof) with EAs on this Forum.
I think it very likely that FLI would have made a statement here if there were an innocent or merely negligent explanation (e.g., the document is a forgery, or they got duped somehow into believing the grantee was related to FLI's stated charitable purposes and not pro-Nazi). So, unless there is a satisfactory explanation forthcoming, the stonewalling strongly points to a more sinister one.
The context is "FLI would have made a statement here", and the rest of the comment doesn't make me think he's talking about Expo either. And it's in reply to Jack and Shakeel's comments, which both seem to be about FLI saying something publicly, not about FLI's interactions with Expo specifically.
And Jeff Kaufman replied to Jason to say "one thing to keep in mind is that organizations can take weirdly long times to make even super obvious public statements", and Jason responded "Good point." The whole context is very "wow, why has FLI not made a public statement", not "wow, why did FLI stonewall Expo".
Still, I appreciate you raising the possibility, since there now seems to be inertia in this comment section against the people who were criticizing FLI, and the same good processes that would have helped people avoid rushing to conclusions in that case, should also encourage some amount of curiosity, patience, and uncertainty in this case.
As should be clear from the follow-up comment posted shortly after that one, I was referring to the nearly one month that had passed between Expo reaching out to FLI and the publication of the article. When Jeff responded by noting reasons an organization might delay in making a statement, I wrote in reply: "A decision was made to send a response – that sounds vaguely threatening/intimidating to my ears – through FLI's lawyer within days." [1] Expo did allege a number of facts that I think can be fairly characterized as stonewalling.
It's plausible that Expo is wildly misrepresenting the substance of its communications with FLI, but the article seems fairly well-sourced to me. If Expo's characterization of the correspondence was unfair, I would expect FLI's initial January 13 statement to have disclosed significant facts that FLI told Expo but that Expo omitted from its article.
Of course, drawing adverse inferences because an organization hasn't provided a response within two hours of a forum discussion starting would be ridiculous (over a holiday weekend in the US, no less!). I wouldn't have thought it was necessary to say that. However, based on the feedback I am getting here, it would have been much better for me to have said something like "I view FLI's reported responses to Expo as stonewalling, and if FLI continues to offer the same responses . . . ." I apologize to FLI and everyone else here that my lack of clarity on that point contributed to a Forum environment on that morning that was too ready to settle on conclusions without giving FLI the opportunity to make a statement.
The line that sounded vaguely threatening/intimidating was "Any implication to the contrary would be false" – that is how I would expect a lawyer to vaguely allude to a possible defamation claim when they knew they would never file one. If you've already said X didn't happen, what's the point of that sentence?
I think if we had refrained from criticizing their initial statement, their final, formal statement would be a lot worse, so if anything, we did them a favour.
I don't think you have internalized the point: there was no misconduct. If their initial statement was insufficient to convince us of this, that is on us, not on them. Their job as a charity is not to manage a public persona so that you or me continue to look good by affiliation; it's to actually do good. Accusing them of secretly financing Nazis because we're weak and afraid of being tarred by association is the exact reverse polar opposite of doing them a "favor".
First, I'll state that allowing the grant to get past the vetting stage may not have been malicious, but it was incompetent. Tegmark has admitted as much, and proposed changes to remedy this. Finding out at least some of the insidious nature of the newspaper would only have taken half an hour of googling.
The initial responses suggested either incompetence or malice on the part of FLI. I think assuming it was malice was uncalled for and wrong, but it was at the very least a possibility.
Their job as a charity is not to manage a public persona so that you or me continue to look good by affiliation
Charities rely on donors. Donors do not like being associated with neo-Nazis, however unfairly. Doing basic research on your funding partners is part of a charity's job, to avoid exactly this situation.
I'd like to ask people not to downvote titotal's comment below zero, because that also hides RobBensinger's timeline. I had to strong-upvote the parent comment to make the timeline visible again.
This letter is to confirm that the Future of Life Institute (FLI) has approved a grant in the amount of $100,000 to the Swedish foundation "Stiftelsen Nya Dagbladet" that is currently under registration. Because we are a non-profit organization under US law, we are only allowed to make grants to non-profit organizations; we hereby declare our intent to transfer the grant amount promptly once "Stiftelsen Nya Dagbladet" has been registered. If you have any questions, please do not hesitate to contact me at [redacted].
[emphasis added]
And here is what they say at the end of the current FAQ:
This was just covered by vice.com. Note that their article is inconsistent about whether we issued a grant agreement or not, first saying "the grant agreement was immediately revoked" and then saying we "would not be moving forward with a grant agreement". As clarified in (4) above, our process made it to the intent stage but never proceeded to the stage of issuing a grant agreement, which is what we mean by "offering a grant".
Do you not feel lied to? There's something wrong here. There's more to this story.
Tegmark's brother published in this place. Expo says, reasonably: "Whether this connection is significant with regards to the promise of funding from Max Tegmark and the Future of Life Institute to Nya Dagbladet is one of the questions we have been trying to put to them, but neither Max Tegmark nor his brother Per Shapiro have commented."
Yet this does not make it to the FAQ, somehow? Like, FLI just refuses to address the suspicious connection here, except to say that Max Tegmark wouldn't have been paid.
You can apologize if you want, but I personally still feel lied to.
I think some of us owe FLI an apology for assuming heinous intentions where a simple (albeit dumb) mistake was made.
I can imagine this must have been a very stressful period for the entire team, and I hope we as a community become better at waiting for the entire picture instead of immediately reacting and demanding things left and right.
I just wanted to chip in to say that this does indeed seem to have been a very stressful period for the team.
I cannot read their minds but it certainly seems possible to me that part of the reason some folks could find a situation like this stressful is precisely because they felt that some of the objections and critical comments were reasonable.
The statement says in point 8 of the FAQ (my emphasis)
Maybe this is a super weird thing to say, but were I a staff member at a place affected by this kind of thing, my distress would have been because: I was shocked / upset myself that the grant seemed to have nearly been given, and I would have been really hurt and shaken by that; confused about what had happened, given senior leadership were not able to respond, and frustrated I couldn't get a sooner reply; really disappointed / angry about the initial response from Max Tegmark, which seemed poor and didn't represent the values of an organisation I wanted to work for; etc. etc.
I'm absolutely not claiming that anyone at FLI feels like this! But I just wanted to say that just because something was hard for staff doesn't necessarily mean it was hard because the critical comments were wrong/misguided.
Strong agree with "just because something was hard for staff, doesn't necessarily mean it was hard because the critical comments were wrong/misguided", though I think "part of the reason some folks could find a situation like this stressful is precisely because they felt that some of the objections and critical comments were reasonable" doesn't differentiate between different worlds – I think there would be a lot of flurry and frenzy and stress basically independently of the reasonableness of the critique (within some bounds).
To be fair, the initial statement was incredibly bad, and I do not regret condemning it. They were extremely defensive in response to very obvious and reasonable questions, and were very ignorant about the nature of the newspaper in question. I think if we had refrained from criticizing their initial statement, their final, formal statement would be a lot worse, so if anything, we did them a favour. But I do agree that speculation that Max was a Nazi or something was unwarranted. Edit: After reading some of the comments below, I think my initial statement here was unnecessarily glib. There were definitely comments attacking Max that jumped to conclusions and were far too quick to assume extreme malice, and I don't think he should be grateful for those. I still maintain that the original statement was poor and that criticism and questions were warranted.
The timeline (in PT time zone) seems to be:
Jan 13, 12:46am: Expo article published.
Jan 13, 4:20am: First mention of this on the EA Forum.
Jan 13, 6:46am: Shakeel Hashim (speaking for himself and not for CEA; +110 karma, +109 net agreement as of the 15th) writes, "If this is true it's absolutely horrifying. FLI needs to give a full explanation of what exactly happened here and I don't understand why they haven't. If FLI did knowingly agree to give money to a neo-Nazi group, that's despicable. I don't think people who would do something like that ought to have any place in this community."
Jan 13, 9:18pm: Shakeel follows up, repeating that it's really weird that FLI hasn't already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that's why they haven't spoken up.
Jan 14, 3:43am: You (titotal) comment, "If the letter is genuine (and they have never denied that it is), then someone at FLI is either grossly incompetent or malicious. They need to address this ASAP."
Jan 14, 8:16am: Jason comments (+15 karma, +13 net agreement as of the 15th): "I think it very likely that FLI would have made a statement here if there were an innocent or merely negligent explanation (e.g., the document is a forgery, or they got duped somehow into believing the grantee was related to FLI's stated charitable purposes and not pro-Nazi). So, unless there is a satisfactory explanation forthcoming, the stonewalling strongly points to a more sinister one."
Jan 14, 6:39pm: Tegmark's initial response.
To be clear, this is Shakeel saying "I don't understand why [FLI hasn't given a full explanation]" six hours after the article came out / two hours after EAs started discussing it, at 9:46am Boston time. (FLI is based in Boston.) And Jason accusing FLI of "stonewalling" one day after the article's release.
[Update 1/21: Jason says that he was actually thinking of FLI stonewalling Expo, not FLI stonewalling the EA Forum. That makes a big difference, though I wish Jason had been clear about this in his comments, since I think the aggregate effect of a bunch of comments like this on the EA Forum was to cause myself and others to think that Tegmark was taking a weirdly long time to reply to the article or to the EA Forum discussion.]
(And I'm only mentioning the explicit condemnation of FLI for not speaking up sooner here. The many highly upvoted and agreevoted EA Forum comments roasting FLI and making confident claims about what happened prior to Tegmark's comment, with language like "the squalid character of Tegmark's choices", are obviously a further reason Tegmark / FLI might have wanted to rush out a response.)
The level of speed-in-replying demanded by EAs in this case (and endorsed by the larger EA Forum community, insofar as we strongly upvoted and up-agreevoted those comments) is frankly absurd, and I do think several apologies are owed here.
(Like, "respond within two hours of a 7am forum post" is wildly absurd even if we're adopting a norm of expecting people to just blurt out their initial thoughts in real time, warts and errors and all. But it's even more absurd if we're demanding carefully crafted Public Statements that make no missteps and have no PR defects.)
Thanks for calling me out on this – I agree that I was too hasty to call for a response.
I'm glad that FLI has shared more information, and that they are rethinking their procedures as a result of this. This FAQ hasn't completely alleviated my concerns about what happened here – I think it's worrying that something like this can get to the stage it did without it being flagged (though again, I'm glad FLI seems to agree with this). And I also think that it would have been better if FLI had shared some more of the FAQ info with Expo too.
I do regret calling for FLI to speak up sooner, and I should have had more empathy for the situation they were in. I posted my comments not because I wanted to throw FLI under the bus for PR reasons, but because I was feeling upset; coming on the heels of the Bostrom situation I was worried that some people in the EA community were racist or at least not very sensitive about how discussions of race-related things can make people feel. At the time, I wanted to do my bit to make it clear – in particular to other non-white people who felt similarly to me – that EA isn't racist. But I could and should have done that in a much better way. I'm sorry.
Hey Shakeel,
Thank you for making the apology, you have my approval for that! I also like your apology on the other thread – your words are hopeful for CEA going in a good direction.
Some feedback/reaction from me that I hope is helpful. In describing your motivation for the FLI comment, you say that it was not to throw FLI under the bus, but because of your fear that some people would think EA is racist, and you wanted to correct that. To me, that is a political motivation, not much different from a PR motivation.
To gesture at the difference (in my ontology) between PR/political motivations and truth-seeking motivations:
PR/political
you want people to believe a certain thing (even if it's something you yourself sincerely believe), in this case, that EA is not racist
it's about managing impressions and reputations (e.g. EA's reputation as not racist)
Your initial comment (and also the Bostrom email statement) both struck me as "performative" in how they demonstrated really harsh and absolute condemnation ("absolutely horrifying", "[no] place in this community", "recklessly flawed and reprehensible" – granted that you said "if true", but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking as the first thing you say on a topic[1] feels pretty out of place to me within EA, and certainly isn't what I want in the EA I would design.
Extreme condemnation pattern matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen on a lot of the Internet, and it feels pretty toxic. It feels like it's coming from a place of needing to demonstrate "I/we are not the bad thing".
So even if your motivation was "do your bit to make it clear that EA isn't racist", that does strike me as still political/PR (even if you sincerely believe it).
(And I don't mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to, and harm to your own reputation through association. Upsetness is real and caring about reputation can come from a really good place.)
I could write more on my feelings about PR/political stuff, because my view is not that it's outright "bad/evil" or anything, more that caution is required.
Truth-seeking / info-propagation
Such comments focus more on sharing the author's beliefs (not performing them)[2] and explaining how they reached them, e.g. "this is what I think happened, this is why I think that" and inferences they're making, and what makes sense. They tally uncertainty, and they leave open room for the chance they're mistaken.
To me, the ideal spirit is "let me add my cognition to the collective so we all arrive at true beliefs" rather than "let me tug the collective beliefs in the direction I believe is correct" or "I need to ensure people believe the correct thing" (and especially not "I need people to believe the correct thing about me").
My ideal CEA comms strategy would conceive of itself as having the goal of causing people to have accurate beliefs foremost, even when that makes EA look bad. That is the job: not to ensure EA looks good, but to ensure EA is perceived accurately, warts and all.
(And I'm interested in attracting to EA people who can appreciate that large movements have warts, who can tolerate weirdness in beliefs, and who get that movement leaders make mistakes. I want the people who see past that to the ideas and principles that make sense, and to the many people (including you, I'd wager) who are working very hard to make the world better.)
Encouragement
I don't want to respond to a step in the right direction (a good apology) with something that feels negative, but it feels important to me that this distinction is deeply understood by CEA and EA in general, hence me writing it up for good measure. I hope this is helpful.
ETA: Happy to clarify more here or chat sometime.
I think that after things have been clarified and the picture is looking pretty clear, then indeed, such condemnation might be appropriate.
The LessWrong frontpage commenting guidelines are "aim to explain, not persuade".
I like this a lot.
I'll add that you can just say out loud "I wish other people believed X" or "I think the correct collective belief here would be X", in addition to saying your personal belief Y.
(An example of a case where this might make sense: You think another person or group believes Z, and you think they rationally should believe X instead, given the evidence available to them. You yourself believe a more-extreme proposition Y, but you don't think others have enough evidence to believe Y yet – e.g., your belief may be based on technical expertise or hard-won life-experience that the other parties don't have.)
It's possible to care about the group's beliefs, and try to intervene on them, in a way that's honest and clear about what you're doing.
Speaking locally to this point: I don't think I agree! My first-pass take is that if something's horrible, reprehensible, flawed, etc., then I think EAs should just say so. That strikes me as the default truth-seeking approach.[1]
There might be second-order reasons to be more cautious about when and how you report extreme negative evaluations (e.g., to keep forum discussions from degenerating as people emotionally trigger each other), but I would want to explicitly flag that this is us locally departing from the naive truth-seeking approach ("just say what seems true to you") in the hope that the end result will be more truth-seeky via people having an easier time keeping a cool head.
(Note that I'm explicitly responding to the "extreme language" side of this, not the "was this to some extent performative or strategic?" side of things.)
With the caveat that maybe evaluative judgments in general get in the way of truth-seeking, unless they're "owned" NVC-style, because of common confusions like "thinking my own evaluations are mind-independent properties of the world". But if we're allowing mild evaluative judgments like "OK" or "fine", then I think there's less philosophical basis for banning more extreme judgments like "awesome" or "terrible".
I think I agree with your clarification and was in fact conflating the mere act of speaking with strong emotion with speaking in a way that felt more like a display. Yeah, I do think it's a departure from naive truth-seeking.
In practice, I think it is hard, though I do think it is hard for the second-order reasons you give and others. Perhaps an ideal is that people share strong emotion when they feel it, but in some kind of format/container/manner that doesn't shut down discussion or get things heated. "NVC" style, perhaps, as you suggest.
Fwiw, I do think "has no place in the community" without being owned as "no place in my community" or "shouldn't have a place in the community" is probably too high a simulacrum level by default (though this isn't necessarily a criticism of Shakeel; I don't remember what exactly his original comment said).
Cool. :) I think we broadly agree, and I don't feel confident about what the ideal way to do this is, though I'd be pretty sad and weirded out by a complete ban on expressing strong feelings in any form.
Really appreciated a bunch about this comment. I think it's that it:
flags where it comes from clearly, both emotionally and cognitively
expresses a pragmatism around PR and appreciation for where it comes from that to my mind has been underplayed
Does a lot of "my ideal EA", "I" language in a way that seems good for conversation
Adds good thoughts to the "what is politics" discussion
IMO, this is an area EA needs to be way better in. For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn't matter that much for social reality, and EA should ideally be persuasive and control social reality.
I think the extent to which nuanced truth does not matter to "most of the world" is overstated.
I additionally think that EA should not be optimizing for deceiving people who belong to the class "most of the world".
Both because it wouldn't be useful if it worked (realistically most of the world has very little they are offering) and because it wouldn't work.
I additionally think that trying to play nitwit political games at or around each hecking other would kill EA as a community and a movement dead, dead, dead.
Thanks for this Shakeel. This seems like a particularly rough time to be running comms for CEA. I'm grateful that, in addition to having that on your plate, in your personal capacity you're helping to make the community feel more supportive for non-white EAs feeling the alienation you point to. Also for doing that despite the emotional labour involved in that, which typically makes me shy away from internet discussions.
Responding swiftly to things seems helpful in service of that support. One of the risks from that is that you can end up taking a particular stance immediately and then it feeling hard to back down from that. But in fact you were able to respond swiftly, and then also quickly update and clearly apologise. Really appreciate your hard work!
(Flag that Shakeel and I both work for EV, though for different orgs under that umbrella)
I liked this apology.
Hey Shakeel, thanks for your apology and update (and I hope you've apologized to FLI). Even though call-out culture may be popular or expected in other contexts, it is not professional or appropriate for the Comms Head of CEA to initiate an interaction with an EA org by publicly putting them on blast and seemingly seconding what could be very damaging accusations (as well as inventing others by speculating about financial misconduct). Did you try to contact FLI before publicly commenting, to get an idea of what happened (perhaps before they could prepare their statement)?
I appreciate that you apologized for this incident but I don't think you understand how deep of a problem this behavior is. Get an anonymous account if you want to shoot from the hip. When you do it while your bio says "Head of Communications at CEA" it comes with a certain weight. Multiplying unfounded accusations, toward another EA org no less, is frankly acting in bad faith in a communications role.
For what it's worth, this seems like the wrong way around to me. I don't know exactly about the role and responsibilities of the "Head of Comm", but in general I would like people in EA to be more comfortable criticizing each other, and to feel less constrained to first air all criticism privately and resolve things behind closed doors.
I think the key thing that went wrong here was the absence of a concrete logical argument or probabilities about why the thing that was happening was actually quite bad, and also the time pressure, which made the context of the conversation much worse. Another big thing was also jumping to conclusions about FLI's character in a way that felt like it was trying to apply direct political pressure instead of focusing on propagating accurate information.
Maybe there are special rules that EA comms people (or the CEA comms person in particular) should follow; I possibly shouldn't weigh in on that, since I'm another EA comms person (working at MIRI) and might be biased.
My initial thought, however, is that it's good for full-time EAs on the current margin to speak more from their personal views, and to do less "speaking for the organizations". E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen.
My criticism of Shakeel's post is very different from yours, and is about how truth-seeking the contents are and how well they incentivize truth-seeking from others, not about whether it's inherently unprofessional for particular EAs to strongly criticize other EAs.
This seems ~strictly worse to me than making a "Shakeel-Personal" account separate from "Shakeel-CEA". It might be useful to have personal takes indexed separately (though I'd guess this is just not necessary, and would add friction and discourage people from sharing their real takes, which I want them to do more). But regardless, I don't think it's better to add even more of a fog of anonymity to EA Forum discussions, if someone's willing to just say their stuff under their own name.
I'm glad anonymity is an option, but the number of anons in these discussions already makes it hard to know how much I might be double-counting views, makes it hard to contextualize comments by knowing what world-view or expertise or experience they reflect, makes it hard to have sustained multi-month discussions with a specific person where we gradually converge on things, etc.
Idk, I think it might be pretty hard to have a role like Head of Communications at CEA and then separately communicate your personal views about the same topics. Your position is rather unique for allowing that. I don't see CEA becoming like MIRI in this respect. It comes across as though he's saying this in his professional capacity when you hover over his account name and it says "Head of Communications at CEA".
But the thing I think is most important about Shakeel's job is that it means he should know better than to throw around and amplify allegations. A marked personal account would satisfy me, but I would still hold it to a higher standard re: gossip, since he's supposed to know what's appropriate. And I expect him to want EA orgs to succeed! I don't think premature callouts for racism and demands to have already apologized are good-faith criticism to strengthen the community.
I mean, I want employees at EA orgs to try to make EA orgs succeed insofar as that does the most good, and try to make EA orgs fail insofar as that does the most good instead. Likewise, I want them to try to strengthen the EA community if their model says this is good, and to try to weaken it (or just ignore it) otherwise.
(Obviously, in each case I'd want them to be open and honest about what they're trying to do; you can oppose an org you think is bad without doing anything unethical or deceptive.)
I'm not sure what I think CEA's role should be in EA. I do feel more optimistic about EA succeeding if major EA orgs in general focus more on developing a model of the world and trying to do the most good under their idiosyncratic world-view, rather than trying to represent or reflect EA-at-large; and I feel more optimistic about EA if sending our best and brightest to work at EA orgs doesn't mean that they have to do massively more self-censoring now.
Maybe CEA or CEA-comms is an exception, but I'm not sold yet. I do think it's good to have high epistemic standards, but I see that as compatible with expressing personal feelings, criticizing other orgs, wanting specific EA orgs to fail, etc.
For what it's worth, speaking as a non-comms person, I'm a big fan of Rob Bensinger-style comms people. I like seeing him get into random Twitter scraps with e/acc weirdos, or turning obnoxious memes into FAQs, or doing informal abstract-level research on the state of bioethics writing. I may be biased specifically because I like Rob's contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI's preferred public image was poured, but, look, I also just find that type of job description obviously offputting. In general I liked getting to know the EAs I've gotten to know, and I don't know Shakeel that well, but I would like to get to know him better. I certainly am averse to the idea of wrist-slapping him back into this empty vessel to the extent that we are blaming him for carelessness even when he specifies very clearly that he isn't speaking for his organization. I do think that his statement was hasty, but I also think we need to be forgiving of EAs whose emotions are running a bit hot right now, especially when they circle back to self-correct afterwards.
I think this would also just be logically inconsistent; MIRI's preferred public image is that we not be the sort of org that turns people into vessels of perfect public emptiness into which the disembodied spirit of our preferred public image is poured.
I don't agree with MIRI on everything, but yes, this is one of the things I like most about it.
"My initial thought, however, is that it's good for full-time EAs on the current margin to speak more from their personal views, and to do less 'speaking for the organizations'. E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen."
This seems a little naive. "We were all getting millions of dollars from this guy with billions to come, he's personal friends with all the movement leaders, but if we had had more open discussions we would not have taken the millions... really??"
Also, if you're in line to get millions of $$$ from someone, of course you are never going to share your candid thoughts about them publicly under your real name!
I didn't make a specific prediction about what would have happened differently if EAs had discussed their misgivings about SBF more openly. What I'd say is that if you took a hundred SBF-like cases with lots of the variables randomized, outcomes will be a lot better if people discuss early serious warning signs and serious misgivings in public.
That will sometimes look like "turning down money", sometimes like "more people poke around to learn more", sometimes like "this person is less able to win others' trust via their EA associations", sometimes like "fewer EAs go work for this guy".
Sometimes it won't do anything at all, or will be actively counterproductive, because the world is complicated and messy. But I think talking about this stuff and voicing criticisms is the best general policy, if we're picking a policy to apply across many different cases and not just using hindsight to ask what an omniscient person would do differently in the specific case of FTX.
I mean, Open Philanthropy is MIRI's largest financial supporter, and
Makes sense to me! I appreciate knowing your perspective better, Shakeel. :)
On reflection, I think the thing I care about in situations like this is much more "mutual understanding of where people were coming from and where they're at now", whether or not anyone technically "apologizes".
Apologizing is one way of communicating information about that (because it suggests we're on the same page that there was a nontrivial foreseeable-in-advance fuck-up), but IMO a comment along those lines could be awesome without ever saying the words "I'm sorry".
One of my concerns about "I'm sorry" is that I think some people think you can only owe apologies to Good Guys, not to Bad Guys. So if there's a disagreement about who the Good Guys are, communities can get stuck arguing about whether X should apologize for Y, when it would be more productive to discuss upstream disagreements about facts and values.
I think some people are still uncertain about exactly how OK or bad FLI's actions here were, but whether or not FLI fucked up badly here and whether or not FLI is bad as an org, I think the EA Forum's response was bad given the evidence we had at the time. I want our culture to be such that it's maximally easy for us to acknowledge that sort of thing and course-correct so we do better next time. And my intuition is that a sufficiently honest explanation of where you were coming from, that's sufficiently curious about and open to understanding others' perspectives, and sufficiently lacking in soldier-mindset-style defensiveness, can do even more than an apology to contribute to a healthy culture.
(In this case the apology is to FLI/Max, not to me, so it's mostly none of my business. But since I called for "apologies" earlier, I wanted to consider the general question of whether that's the thing that matters most.)
I find myself disliking this comment, and I think it's mostly because it sounds like you 1) agree with many of the blunders Rob points out, yet 2) don't seem to have learned anything from your mistake here? I don't think many do or should blame you, and I'm personally concerned about repeated similar blunders on your part costing EA much loss of outside reputation and internal trust.
Like, do you think that the issue was that you were responding in heat, and if so, will you make a future policy of not responding in heat in future similar situations?
I feel like there are deeper problems here that won't be corrected by such a policy, and your lack of concreteness is an impediment to communicating such concerns about your approach to CEA comms (and is itself a repeated issue that won't be corrected by such a policy).
FWIW, I don't really want Shakeel to rush into making public promises about his future behavior right now, or big public statements about long-term changes to his policies and heuristics, unless he finds that useful for some reason. I appreciated hearing his thoughts, and would rather leave him space to chew on things and figure out what makes sense for himself. If he or CEA make the wrong updates by my lights, then I expect that to be visible in future CEA/Shakeel actions, and I can just wait and criticize those when they happen.
FTX collapsed on November 8th; all the key facts were known by the 10th; CEA put out their statement on November 12th. This is a totally reasonable timeframe to respond. I would have hoped that this experience would make CEA sympathetic to a fellow EA org (with much less resources than CEA) experiencing a media crisis rather than being so quick to condemn.
I'm also not convinced that a Head of Communications, working for an organization with a very restrictive media policy for employees, commenting on a matter of importance for that organization, can really be said to be operating in a personal capacity. Despite claims to the contrary, I think it's pretty reasonable to interpret these as official CEA communications. Skill at a PR role is as much about what you do not say as what you do.
The eagerness with which people rushed to condemn is frankly a warning sign for involution. We have to stop it with the pointless infighting or it's all we will end up doing.
Hi Rob!
Just a quick note to say I don't think everything in your comment above is an entirely fair characterisation of the comments.
Two specific points (I haven't checked everything you say above, so I don't claim this is exhaustive):
I think you're mischaracterising Shakeel's 9.18pm response quite significantly. You paraphrased him as saying he sees no reason FLI wouldn't have released a public statement, but that is, I think, neither the text nor the spirit of that comment. He specifically acknowledged he might be missing some reasons. He said he thinks the lack of response is "very weird", which seems pretty different to me to "I see no reason for this". Here's some quoting, but it's so short people can just read the comment :P "Hi Jack – reasonable question! When I wrote this post I just didn't see what the legal problems might be for FLI... Jason's comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense. I do still think it's very weird that FLI hasn't condemned Nya Dagbladet though"
You also left out that Shakeel did already apologise to Max Tegmark for, in his words, "jumping to conclusions" when Max explained a reason for the delay, which I think is relevant to the timeline you're setting out here.
I think both those things are relevant to how reasonable some of these comments were and to what extent apologies might be owed.
Thanks for the response, Habiba. :)
The comments are short enough that I should probably just quote them here:
Comment 1: "The following is my personal opinion, not CEA's. If this is true it's absolutely horrifying. FLI needs to give a full explanation of what exactly happened here and I don't understand why they haven't. If FLI did knowingly agree to give money to a neo-Nazi group, that's despicable. I don't think people who would do something like that ought to have any place in this community."
Comment 2: "Hi Jack – reasonable question! When I wrote this post I just didn't see what the legal problems might be for FLI. With FTX, there are a ton of complications, most notably with regards to bankruptcy/clawbacks, and the fact that actual crimes were (seemingly) committed. This FLI situation, on face value, didn't seem to have any similar complications – it seemed that something deeply immoral was done, but nothing more than that. Jason's comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense. I do still think it's very weird that FLI hasn't condemned Nya Dagbladet though – CEA did, after all, make it very clear very quickly what our stance on SBF was."
My summary of comment 2: "Shakeel follows up, repeating that he sees no reason why FLI wouldn't have already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that's why they haven't spoken up."
I think this is a fine summary of the gist of Shakeel's comment – obviously there isn't literally "no reason" here (that would contradict the very next part of my sentence, "and raises the possibility that FLI has maybe done sinister questionably-legal things and that's why they haven't spoken up"), but there's no good reason Shakeel can see, and Shakeel reiterates that he thinks "it's very weird that FLI hasn't condemned Nya Dagbladet".
The main thing I was trying to point at is that Shakeel's first comment says "I don't understand" why FLI hasn't given "a full explanation of exactly what happened here" (the implication being that there's something really weird and suspicious about FLI not having already released a public statement), and Shakeel's second comment doubles down on that basic perspective (it's still weird and suspicious / he can't think of an innocent explanation, though he acknowledges a non-innocent explanation).
That said, I think this is a great context to be a stickler about saying everything precisely (rather than relying on "gists"), and I'm generally a fan of the ethos that cares about precision and literalness. :) Being completely literal, "he sees no reason" is flatly false (at least if "seeing no reason" means "you haven't thought of a remotely plausible motivation that might have caused this behavior").
I'll edit the comment to say "repeating that it's really weird that FLI hasn't already made a public statement", since that's closer to being a specific sentiment he expresses in both comments.
I think this is a different thing, but itās useful context anyway, so thanks for adding it. :)
Agree. Should have added those to my own comment, but felt like Iād already spent too much time on it!
I also spent too much time on comments :P
I upvoted this, but disagreed. I think the timeline would be better if it included:
November 2022: FLI inform Nya Dagbladet Foundation (NDF) that they will not be funding them
15 December 2022: FLI learn of media interest in the story
I therefore don't think it's "absurd" to have expected FLI to repudiate NDF sooner. You could argue that apologising for their mistake before the media interest arose would have done more harm than good by drawing attention to it (and, by association, to NDF), but once they became aware of the media attention, I think they should have issued something more like their current statement.
I also agreed with the thrust of titotal's comment that their first statement was woefully inadequate (it was more like "nothing to see here" than "oh damn, we seriously considered supporting an odious publication and we're sorry"). I don't think lack of time gets them off the hook here, given they should have expected Expo to publish at some point.
I don't think anyone owes an apology for expecting FLI to do better than this.
(Note: I appreciate Max Tegmark was dealing with a personal tragedy (for which, my condolences) at the time of this becoming "a thing" on the EA Forum, so I of course wouldn't expect him to be making quick-but-considered replies to everything posted on here at that time. But I think there's a difference between that and the speed of the proper statement.)
***
FWIW I also had a different interpretation of Shakeel's 9:18pm comment than what you write here:
"Jan 13, 9:18pm: Shakeel follows up, repeating that he sees no reason why FLI wouldn't have already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that's why they haven't spoken up."
Shakeel said "Jason's comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense" – this seemed to me like Shakeel was trying to be charitable, and to understand the reasons FLI hadn't replied quicker.
Only a subtle difference, but wanted to point that out.
Yeah, if the early EA Forum comments had explicitly said "FLI should have said something public about this as soon as they discovered that NDF was bad", "FLI should have said something public about this as soon as Expo contacted them", or "FLI should have been way more responsive to Expo's inquiries" – and if we'd generally expressed a lot more uncertainty and been more measured in what we said in the first few days – then I might still have disagreed, but I wouldn't have seen this as an embarrassingly bad response in the same way.
I, as a casual reader who wasn't trying to carefully track all the timestamps, had no idea when I first skimmed these threads on Jan. 13–14 that the article had only come out a few hours earlier, and I didn't track timestamps carefully enough to register just how fast the EA Forum went from "a top-level post exists about this at all" to "wow, FLI is stonewalling us" and "wow, there must be something really sinister here given that FLI still hasn't responded". I feel like I was misled by these comments, because I just took for granted (to some degree) that the people writing these highly upvoted comments were probably not saying something transparently silly.
If a commenter like Jason thought that FLI was "stonewalling" because they didn't release a public statement about this in December, then it's important to be explicit about that, so casual readers don't come away from the comment section thinking that FLI is displaying some amazing level of unresponsiveness to the forum post or to the news article.
This is less obvious to me, if they didn't owe a public response before Expo reached out to them. A lot of press inquiries don't end up turning into articles, and if the goal is to respond to press coverage, it's often better to wait and see what's in the actual article, since you might end up surprised by the article's contents.
"Do better than this", notably, is switching out concrete actions for a much more general question, one that's closer to "What's the correct overall level of affect we should have about FLI right now?".
If we're going to have "apologize when you mess up enough" norms, I think they should be more about evaluating local process, and less about evaluating the overall character of the person you're apologizing to. (Or even their character-in-this-particular-case, since it's possible to owe someone an apology even if that person owes an apology too.) "Did I fuck up when I did X?" should be a referendum on whether the local action was OK, not a referendum on the people you fucked up at.
More thoughts about apology norms in my comment here.
Thanks for this comment and timeline, I found it very useful.
I agree that "respond within two hours of a 7am forum post" seems like an unreasonable standard, and I also agree that some folks rushed too quickly to condemn FLI or make assumptions about Tegmark's character/choices.
I do want to illustrate a related point:
When the Bostrom news hit, many folks jumped to defend Bostrom's apology as reasonable because it consisted of statements that Bostrom believed to be true, arguing that this reflects truth-seeking and good epistemics, and that this is something the forum and community should uphold.
But if I look at Jason's comment: "So, unless there is a satisfactory explanation forthcoming, the stonewalling strongly points to a more sinister one."
There is actually nothing technically untrue about this statement? There WAS a satisfactory explanation that eventuated.
Similarly, if I look at Shakeel's comment, the condemnation is conditional on whether the events happened: "If this is true it's absolutely horrifying", "If FLI did knowingly agree to give money to a neo-Nazi group, that's despicable", "I don't think people who would do something like that ought to have any place in this community".
The sentence about FLI speaking up sooner reflects Shakeel expressing his desire that FLI give a full explanation, and his confusion about why this has not yet happened; but reading the text of that statement, there's actually no explicit condemnation of FLI for not speaking up sooner.
Now, I raise these points not because I'm interested in defending Shakeel or Jason – the subtext does matter, and it's somewhat reasonable to read those statements, interpret them as explicit condemnation of FLI for not speaking up sooner, and push back accordingly.
But I'm just noting that there are a lot of upvotes on Rob's comment, and quite a few voices (I think rightfully!) saying that some commenters were too quick to jump to conclusions about Tegmark or FLI. Yet I don't see any commenters defending Jason's or Shakeel's statements with the "truth-seeking" and "good epistemics" argument that was used to defend Bostrom's apology.
Do you have any thoughts on what explains what seems like an inconsistent application of these standards? It might not even be accurately characterized as an inconsistency – I'm likely missing something here.
I expect this comment will just get reflexively downvoted given how tribal the commentary on the forum is these days, but I am curious about what drives this perceived difference, especially from those who self-identify as high decouplers, truth-seekers, or those who place themselves in the "prioritize epistemics" camp.
"Technically not saying anything untrue" isn't the same as "exhibiting a truth-seeking attitude."
I'd say a truth-seeking attitude would have been more like "Before we condemn FLI, let's make sure we understand their perspective and can assess what really happened." Perhaps accompanied by "I agree we should condemn them harshly if the reporting is roughly as it looks right now." Similar statement, different emphasis. Shakeel's comment did appropriate hedging, but its main content was sharing a (hedged) judgment/condemnation.
Edit: I still upvoted your comment for highlighting that Shakeel (and Jason) hedged their comments. I think that's mostly fine! In hindsight, though, I agree with the sentiment that the community discussion was tending towards judgment a bit too quickly.
Thanks for the engagement Lukas, have upvoted.
Yeah, I agree! I think my main point is to illustrate that the impression you got of the community discussion "tending towards judgement a bit too quickly" is pretty reasonable despite the technically true statements they made – because of a reading of the subtext, including what they didn't say or chose not to focus on, rather than the literal text alone. That felt to me like a major crux between those who thought Bostrom's apology was largely terrible vs. those who thought it was largely acceptable.
Likewise, I also agree with this! I think what I'm most interested in here is what you (or others) think separates the two in general, because my guess is that those who were upset with Bostrom's apology would also agree with this statement. I think the crux is more likely that they would think this statement applies to Bostrom's comments too (i.e. they were closer to "technically not saying anything untrue" than to "exhibiting a truth-seeking attitude"), while those who disagree would think "Bostrom is actually exhibiting a truth-seeking attitude".
For example, if I apply your statement to Bostrom's apology:
"I'd say a truth-seeking attitude would have been more like: 'Before I make a comment that's strongly suggestive of a genetic difference between races, or easily misinterpreted as a racist dogwhistle, let's make sure I understand their perspective and can assess how this apology might actually be interpreted', perhaps accompanied by 'I think I should make true statements if I can make sure they will be interpreted to mean what my actual views are, and I know they are the true statements that are most relevant and important for the people I am apologizing to.'
Similar statement, different emphasis. Bostrom's comment was 'technically true', but its main content was less about an apology and more about raising questions around a genetic component of intelligence, expressing support for some definition of eugenics, and some use of provocative communication."
I think my point is less that "Shakeel and Jason's comments are fine because they were hedged", and less about pointing out the empirical fact that they were hedged, and more that "Shakeel and Jason's comments were not fine just because they contained true statements – and this standard should be applied similarly to Bostrom's apology, which was also not fine just because it contained true statements".
More speculative:
Like, part of me gets the impression this is in part modulated by a dislike of the typical SJW cancel culture (which I can resonate with), and therefore the truth-seeking defence is applied more strongly against condemnation of any kind, as opposed to just truth-seeking for truth's sake. But I'm not sure that this, if true, is actually optimizing for truth, nor that it's necessarily the best approach on consequentialist grounds, unless there's good reason to think that a heuristic to err on the side of anti-condemnation in every situation is preferable to evaluating each case on its own.
That makes sense – I get why you feel like there are double standards.
I don't agree that there necessarily are.
Regarding Bostrom's apology, I guess you could say that it's part of "truth-seeking" to dive into any mistakes you might have made and acknowledge everything there is to acknowledge. (Whether we call it "truth-seeking" or not, that's certainly how apologies should be, in an ideal world.) On this point, Bostrom's apology was clearly suboptimal. It didn't acknowledge that there was more bad stuff to the initial email than just the racial slur.
Namely, in my view, it's not really defensible to say "technically true" things without some qualifying context, if those true things are easily interpreted in a misleadingly-negative or harmful-belief-promoting way on their own, or even interpreted as, as you say, "racist dogwhistles." (I think that phrase is sometimes thrown around so lightly that it seems a bit hysterical, but it does seem appropriate for the specific example of the sentence Bostrom claimed he "likes.")
Take for example a newspaper reporting on a person with autism who committed a school shooting. Given the widespread stigma against autism, it would be inappropriate to imply that autism is linked to these types of crimes without some sort of very careful discussion that doesn't make readers prejudiced against people on the spectrum. (I don't actually know if there's any such link.)
What I considered bad about Bostrom's apology was that he didn't say more about why his entire stance on "controversial communication" was a bad take.
Given all of the above, why did I say that I found Bostrom's apology "reasonable"?
"Reasonable" is a lower bar than "good."
Context matters: The initial email was never intended to be seen by anyone who wasn't in that early group of transhumanists. In a small, closed group, communication functions very differently. For instance, among EA friends, I've recently (after the FTX situation) made a joke about how we should run a scam to make money. The joke works because my friends have enough context to know I don't mean it. I wouldn't make the same joke in a group where it isn't common knowledge that I'm joking. Similarly, while I don't know much about the transhumanist mailing list, it's probably safe to say that "we're all high-decouplers and care about all of humanity" was common knowledge in that group. Given that context, it's sort of defensible to think that there's not that much wrong with the initial email (apart from cringiness) other than the use of the racial slur. Bostrom did apologize for the latter (even viscerally, and unambiguously).
I thought there was some ambiguity in the apology about whether he was just apologizing for the racial slur, or whether he also meant the general email when he described how he hated re-reading it. When I said that the apology was "reasonable," I interpreted him to mean the general email. I agree he could have made this more clear.
In any case, that's one way to interpret "truth-seeking" – trying to get to the bottom of any mistakes that were made when apologizing.
That said, I think almost all the mentions of "truth-seeking is important" in the Bostrom discussion were about something else.
There was a faction of people who thought that people should be socially shunned for holding specific views on the underlying causes of group differences, and another faction that was like "it should be okay to say 'I don't know' if you actually don't know."
While a few people criticized Bostrom's apology for reasons similar to the ones I mentioned above (which I obviously think is reasonable!), my impression is that the people who were most critical of it did so for the "social shunning for not completely renouncing a specific view" reason.
For what it's worth, I agree that emphasis on truth-seeking can go too far. While I appreciated this part of EA culture in the discussion around Bostrom, I've several times found myself accusing individual rationalists of fetishizing "truth-seeking." :)
So, I certainly don't disagree with your impression that there can be biases on both sides.
I found myself agreeing with a lot of this. Thanks for your nuanced take on truth-seeking ideals, I appreciated the conversation!
I wanted to say a bit about the "vibe"/thrust of this comment when it comes to community discourse norms...
(This is somewhat informed by your comments on Twitter/Facebook, which are themselves phrased more strongly than this and are less specific in scope.)
I suspect you and I agree that we should generally encourage posters to be charitable in their takes and reasonable in their requests – and it would be bad overall for discussions in general were this not the case. Being angry on the internet is often not at all constructive!
However, I think that being angry or upset where it seems like an organisation has done something egregious is very often an appropriate emotional response. I think that the ideal amount of expressing that anger/upset that community norms endorse is non-zero! And yes, when people are hurt they may go somewhat too far in what they request/suggest/speculate. But again, the optimal amount of "too strong requests" is non-zero.
I think that expressing those feelings of hurt/anger/upset explicitly (or implicitly expressing them through the kinds of requests one is making) has many uses, and there are costs to restricting it too much.
Some uses of expressing it:
Conveying the sheer seriousness or importance of the question to the poster. That can be useful information for the organisation under scrutiny about whether/how much people think they messed up (which itself is information about whether/how much they actually messed up). It will lead to better outcomes if organisations in fact get the information that some people are deeply hurt by their actions. If the people who are deeply hurt cannot or do not express this, the organisation will not know.
Individuals within a community expressing the values they hold dear (and which of those are strong enough to provoke the strongest emotional reactions) is part of how a community develops and maintains norms about behaviour that is/isn't acceptable.
Some costs to restricting it:
People who have stronger emotional reactions are often closer to the issue. It is very hard, when you feel really hurt by something, to have to reformulate that in terms acceptable to people who are not at all affected by the thing.
If people who are really hurt by something get the impression from community norms that expressing their hurt is not welcome, they may well not feel welcome in the community at all. This seems extra bad if you care about diversity in the community and certain issues affect certain groups more (e.g. antisemitism, racism, sexism, etc.).
If people who are really hurt by something do not post, the discourse will be selected towards people who aren't hurt or don't care as strongly. That will systematically skew the discussion towards a specific set of reactions and lead you further away from understanding what people across the community actually think about something.
I think that approaching online discussions on difficult topics is really, really hard! I do not think I know what the ideal balance is. I have almost never before participated in such discussions and I'm personally finding my feet here. I am not arguing in favour of carte blanche for people making unreasonable angry demands.
But I want to push back pretty strongly against the idea that people should never be able to post hurt/upset comments, or that the comments above seem very badly wrong. (Or that they warrant the things you said on Facebook/Twitter about EA discourse norms.)
P.S. I'm wondering whether you would agree with all the above if the organisational behaviour were egregious enough by your/anyone's lights? [Insert thought experiment here about shockingly beyond-the-pale behaviour by an organisation that people on the forum express angry comments about.] If yes, then we just disagree on where/how to draw the line, not on whether there is a line at all. If not, then I think we have a more fundamental disagreement about how humans can be expected to communicate online.
I see "clearly expressing anger" and "posting when angry" as quite different things.
I endorse the former, but I rarely endorse the latter, especially in contexts like the EA Forum.
Let's distinguish different stages of anger:
We could think of "hot" and "cold" anger as a spectrum.
Most people experience hot anger from time to time. But I think EA figures – especially senior figures – should model a norm of only posting on the EA Forum when fairly cool.
My impression is that, during the Bostrom and FLI incidents, several people posted with considerably more hot anger than I would endorse. In these cases, I think the mistake has been quite harmful, and may warrant public and private apologies.
As a positive example: Peter Hurford's blog post, which he described as "angry", showed a level of reasonableness and clarity that made it, in my mind, "above the bar" to publish. The text suggests a relatively cool anger. I disagree with some parts of the post, but I am glad he published it. At the meta level, my impression is that Peter was well within the range of "appropriate states of mind" for a leadership figure to publish a message like that in public.
I'm not sure how I feel about this proposed norm. I probably think that senior EA figures should at least sometimes post when they're feeling some version of "hot anger", as opposed to literally never doing this.
The way you defined "cool vs. hot" here is that it's about thinking straight vs. not thinking straight. Under that framing, I agree that you shouldn't post comments when you have reason to suspect you might temporarily not be thinking straight. (Or you should find a way to flag this concern in the comment itself, e.g., with an epistemic status disclaimer or NVC-style language.)
But you also call these "different stages of anger", which suggests a temporal interpretation: hot anger comes first, followed by cool. And the use of the words "hot" and "cool", to my ear, also suggests something about the character of the feeling itself.
I feel comfortable suggesting that EAs self-censor under the "thinking straight?" interpretation. But if you're feeling really intense emotion and it's very close in time to the triggering event, yet you think you're nonetheless thinking straight – or you think you can add appropriate caveats and context so people can correct for the ways in which you're not thinking straight – then I'm a lot more wary about adding a strong "don't say what's on your mind" norm here.
I think "charity" isn't quite the right framing here, but I do think we should encourage posters to really try to understand each other; to ask themselves "what does this other person think the physical world is like, and what evidence do I have that it's not like that?"; to not exaggerate how negative their takes are; and to be mindful of biases and social dynamics that often cause people to have unrealistically negative beliefs about The Other Side.
I 100% agree! I happened to write something similar here just before reading your comment. :)
From my perspective, the goal is more "have accurate models" and "be honest about what your models are". In interpersonal contexts, the gold standard is often that you're able to pass someone else's ideological Turing Test.
Sometimes, your model really is that something is terrible! In cases like that, I think we should be pretty cautious about discouraging people from sharing what they really think about the terrible thing. (Like, I think "be civil all the time", "don't rock the boat", "be very cautious about criticizing other EAs" is one of the main processes that got in the way of people like me hearing earlier about SBF's bad track record – I think EAs in the know kept waaay too quiet about this information.)
It's true that there are real costs to encouraging EAs to routinely speak up about their criticisms – it can make the space feel more negative and aversive to a lot of people, which I'd expect to contribute to burnout and to some people feeling less comfortable honestly expressing their thoughts and feelings.
I don't know what the best solution is (though I think that tech like NVC can help a whole lot), but I'd be very surprised if the best solution involved EAs never expressing actually intense feelings in any format, no matter how much the context cries out for it.
Sometimes shit's actually just fucked up, and I'd rather have a community where people can say as much (even if not everyone agrees) than one where we're all performatively friendly and smiley all the time.
Seems right. Digging a bit deeper, I suspect we'd disagree about what the right tradeoff is in some cases, based on different background beliefs about the world and about how to do the most good.
Like, we can hopefully agree that it's sometimes OK to pick the "talk in a way that hurts some people and thereby makes those people less likely to engage with EA" side of the tradeoff. An example of this is that some people find discussion of food or veg*nism triggering (e.g., because they have an ED).
We could choose to hide discussion of animal products from the EA Forum in order to be more inclusive of those people; but given the importance of this topic to a lot of what EA does today, it seems more reasonable to just accept that we're going to exclude a few people (at least from spaces like the EA Forum and EA Global, where all the different cause areas are rubbing elbows and it's important to keep the friction on starting animal-related topics very low).
If we agree that it's ever OK to pick the "talk in way X even though it hurts some people" side of the tradeoff, then I think we have enough common ground that the remaining disagreements can be resolved (given enough time) by going back and forth about what sort of EA community we think has the best chance of helping the world (and about how questions of interpersonal ethics, integrity, etc. bear on what we should do in practice).
Oh, did I say something wrong? I was imagining that all the stuff I said above is compatible with what I've said on social media. I'd be curious which of the things I said elsewhere you disagree with, since that might point at other background disagreements I'm not tracking.
Just a quick note to say thanks for such a thoughtful response! <3
I think you're doing a great job here modelling discourse norms, and I appreciate the substance of your points!
Ngl, I was kinda trepidatious opening the forum… but the reasonableness of your reply and the warmth of your tone is legit making me smile! (It probably doesn't hurt that, happily, we agree more than I realised. :P)
I may well write a little more substantial response at some point but will likely take a weekend break :)
P.S. Real quick re social media… The things I was thinking about were phrases from Facebook like "EAs f'd up" and "fairly shameful initial response" – which I wondered were stronger than what you were expressing here, but are probably just you saying the same thing. And in this Twitter thread you talk about the "cancel mob" – but I think you're talking there about a general case. You don't have to justify yourself on those; I'm happy to read it all via the lens of the comments you've written on this post.
Aw, that makes me really happy to hear. I'm surprised that it made such a positive difference, and I update that I should do it more!
(The warmth part, not the agreement part. I can't really control the agreement part – if we disagree then we're just fucked. :P)
Re the social media things: yeah, I stand by that stuff, though I basically always expect reasonable people to disagree a lot about exactly how big a fuck-up is, since natural language is so imprecise and there are so many background variables we could disagree on.
I feel a bit weird about the fact that I use such a different tone in different venues, but I think I like this practice given how my brain works, and I plan to keep doing it. I definitely talk differently with different friends, and in private vs. public, so I like the idea of making this fact about me relatively obvious in public too.
I don't want to have such a perfect and consistent public mask/persona that people think my public self exactly matches my private self, since then they might come away deceived about how much to trust (for example) that my tone in a tweet exactly matches the emotions I was feeling when I wrote it.
I want to be honest in my private and public communications, but (even more than that) I want to be meta-honest, in the sense of trying to make it easy for people to model what kind of person I am and what kinds of things I tend to be more candid about, what it might mean if I steer clear of a topic, etc.
Trying too hard to look like I'm an open book who always says what's on his mind, never self-censors in order to look more polite on the EA Forum, etc. would systematically cause people to have falser beliefs about the delta between "what Rob B said" and "what Rob B is really thinking and feeling right now". And while I don't think I owe everyone a full print-out of my stream of consciousness, I do sorta feel like I owe it to people not to deliberately make it sound like I'm more transparent than I am.
This is maybe more of a problem for me than for other people: I'm constantly going on about what a big fan of candor and blurting I am, so I think there's more risk of people thinking I'm a 100% open book, compared to the risk a typical EA faces.
So, to be clear: I don't advocate that EAs be 100% open books. And separately, I don't perfectly live up to my own stated ideals.
Like, I think an early comment like this would have been awesome (with apologies to Shakeel for using his comments as an example, and keeping in mind that this is me cobbling something together rather than something Shakeel endorses):
Note: The following is me expressing my own feelings and beliefs. Other people at CEA may feel differently or have different models, and I don't mean to speak for them.
If this is true then I feel absolutely horrified. Supporting neo-Nazi groups is despicable, and I don't think people who would do something like that ought to have any place in this community. [mention my priors about how reliable this sort of journalism tends to be] [mention my priors about FLI's moral character, epistemics, and/or political views, or mention that I don't know much about FLI and haven't thought about them before] Given that, [rough description of how confident I feel that FLI would financially support a group that they knew had views like Holocaust-denialism].
But it's hard to be confident about what happened based on a single news article, in advance of hearing FLI's side of things; and there are many good reasons it can take time to craft a complete and accurate public statement that expresses the proper amount of empathy, properly weighs the PR and optics concerns, etc. So I commit to upvoting FLI's official response when it releases one (even if I don't like the response), to make it likelier that people see the follow-up and not just the initial claims.
I also want to encourage others to speak up if they disagree with any of this, including chiming in with views contrary to mine (which I'll try to upvote at least enough to make it obviously socially accepted to express uncertainty or disagreement on this topic, while the facts are still coming in). But for myself, my immediate response is that I feel extremely upset.
For context: Coming on the heels of the Bostrom situation, I feel seriously concerned that some people in the EA community think of non-white people as inherently low-status, and I feel surprised and deeply hurt by the lack of empathy toward non-white people many EAs have shown in their public comments. I feel profoundly disgusted at the thought of racist ideas and attitudes finding acceptance within EA, and though I'll need to hear more about the case of FLI before I reach any confident conclusions about this case, my emotional reaction is one of anger at the possibility that FLI knowingly funded neo-Nazis, and a strong desire to tell EAs and non-EAs alike that this is not who we are.
The above hypothetical, not-Shakeel-authored comment meets a higher bar than what I think was required in this context – I think it's fine for EAs to be a bit sloppier than that, even if they work at CEA – but hopefully it directionally points at what I mean when I say that there are epistemically good ways to express strong feelings. (Though I don't think it's easy, and I think there are hard tradeoffs here: demanding more rigor will always cause some number of comments to just not get written at all, which will cause some good ideas and perspectives to never be considered. In this case, I think a fair bit more rigor is worth the cost.)
Haha this is a great hypothetical comment!
The concreteness is helpful because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time-consuming!) - especially so if you've got skin in the game, and across your life you often come up against things like this to respond to, and you keep having the pressure to force your feelings into a more acceptable format.
I reckon that crafting a message like that if I were upset about something could well take half a work day. And I'd have in my head all the being upset / being angry / being scared people on the forum would find me unreasonable / resentful that people might find me unreasonable / doubting myself the whole time. (Though I know plausibly I'm in part just describing the human condition there. Trying to do things is hard...!)
Overall, I think I'm just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we're willing to make.
(It makes me think it'd be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention etc., and then see where we'd draw the line of acceptable / not!)
There's an angry top-level post about evaporative cooling of group beliefs in EA that I haven't written yet, and won't until it would no longer be an angry one. That might mean that the best moment has passed, which will make me sad for not being strong enough to have competently written it earlier. You could describe this as my having been chilled out of the discourse, but I would instead describe it as my politely waiting until I am able and ready to explain my concerns in a collected and rational manner.
I am doing this because I care about carefully articulating what I'm worried about, because I think it's important that I communicate it clearly. I don't want to cause people to feel ambushed and embattled; I don't want to draw battle lines between me and the people who agree with me on 99% of everything. I don't want to engender offense that could fester into real and lasting animosity, in the very same people who, if approached collaboratively, would pull with me to solve our mutual problem out of mutual respect and love for the people who do good.
I don't want to contribute to the internal divisions growing in EA. To the extent that it is happening, we should all prefer to nip the involution in the bud: if one has ever been on team Everyone Who Logically Tries To Do The Most Good, there's nowhere to go but down.
I think that if I wrote an angry top-level post, it would deserve to be downvoted into oblivion, though I'm not sure it would be.
I think on the margin I'm fine with posts that will start fights being chilled. Angry infighting and polarization are poisonous to what we're trying to do.
I think you are upset because FLI or Tegmark was wronged. Would you consider hearing another perspective about this?
I barely give a gosh-guldarn about FLI or Tegmark outside of their (now reduced) capacity to reduce existential risk.
Obviously I'd rather bad things not happen to people, and not happen to good people in particular, but I don't specifically know anyone from FLI and they are a feather on the scales next to the full set of strangers who I care about.
If Tegmark or FLI was wronged in the way your comments and others imply, you are correct and justified in your beliefs. But if the apology or the current facts do not make that status clear, there's an object-level problem, and it's bad to be angry that they were wronged, or to build further arguments on that belief.
I think it's pretty obvious at this point that Tegmark and FLI were seriously wronged, but I barely care about any wrong done to them and am largely uninterested in the question of whether it was wildly disproportionate or merely sickeningly disproportionate.
I care about the consequences of what we've done to them.
I care about how, in order to protect themselves from this community, the FLI is
I care about how everyone who watched this happen will also realize the need to protect themselves from us by shuffling along and taking their own pulses. I care about the new but promising EAs who no one will take a chance on, the moonshots that won't be funded even though they'd save lives in expectation, the good ideas with "bad optics" that won't be acted on because of fear of backdraft on this forum. I care about the lives we can save if we don't rush to conclusions, rush to anger, if we can give each other the benefit of the doubt for five freaking minutes and consider whether it'd make any sense whatsoever for the accusation du jour to be what it looks like.
Getting to one object level issue:
If what happened was that Max Tegmark or FLI gets many dubious grant applications, and this particular application made it a few steps through FLI's processes before it was caught, expo.se's story and the negative response you object to on the EA forum would be bad, destructive and false. If this was what happened, it would absolutely deserve your disapproval and alarm.
I don't think this is true. What we know is:
An established (though hostile) newspaper gave an account with actual quotes from Tegmark that contradict his apparent actions
The bespoke funding letter, signed by Tegmark, explicitly promising funding, "approved a grant" conditional on registration of the charity
The hiring of the lawyer by Tegmark
When Tegmark edited his comment with more content, I was surprised by how positive the reception to this edit was, given that it simply disavowed funding extremist groups.
I'm further surprised by the reaction and changing sentiment on the forum in reaction to this post, which simply presents an exonerating story. This story is directly contradicted by the signed statement in the letter itself.
Contrary to the top-level post, it is false that it is standard practice to hand out signed declarations of financial support, with wording like "approved a grant", if substantial vetting remains. Also, it's extremely unusual for any non-profit to hire a lawyer to explain that a prospective grantee failed vetting in the application process. We also haven't seen any evidence that FLI actually communicated a rejection. Expo.se seems to have a positive record; even accepting the aesthetic here that newspapers or journalists are untrustworthy, it's costly for an outlet to outright lie or misrepresent facts.
There are other issues with Tegmark's/FLI's statements (e.g. deflections about the lack of direct financial benefit to his brother, not addressing the material support the letter provided for registration/the reasonable suspicion this was a ploy to produce the letter).
There's much more that is problematic underpinning this. If I had more time, I would start a long thread explaining how funding and family relationships could interact really badly in EA/longtermism for several reasons, and another about Tegmark's insertions into geopolitical issues, which are clumsy at best.
Another comment said the EA forum reaction contributed to actual harm to Tegmark/FLI by amplifying the false narrative. I think a look at Twitter, and at how the story has continued and been picked up by Vice, suggests to me this is true. Unfortunately, I think the opposite is true.
Yep, I think it absolutely is.
It's also not an accident that my version of the comment is a lot longer and covers more topics (and therefore would presumably have taken way longer for someone to write and edit in a way they personally endorsed).
I don't think the minimally acceptable comment needed to be quite that long or cover quite that much ground (though I think it would be praiseworthy to do so), but directionally I'm indeed asking people to do a significantly harder thing. And I expect this to be especially hard in exactly the situations where it matters most.
❤
Yeah, that sounds all too realistic!
I'm also imagining that while the author is trying to put together their comment, they might be tracking the fact that others have already rushed out their own replies (many of which probably suck from your perspective), and discussion is continuing, and the clock is ticking before the EA Forum buries this discussion entirely.
(I wonder if there's a way to tweak how the EA Forum works so that there's less incentive to go super fast?)
One reason I think it's worth trying to put in this extra effort is that it produces a virtuous cycle. If I take a bit longer to draft a comment I can more fully stand by, then other people will feel less pressure to rush out their own thoughts prematurely. Slowing down the discussion a little, and adding a bit more light relative to heat, can have a positive effect on all the other discussion that happens.
I've mentioned NVC a few times, but I do think NVC is a good example of a thing that can help a lot at relatively little time+effort cost. Quick easy hacks are very good here, exactly because this can otherwise be such a time suck.
A related hack is to put your immediate emotional reaction inside a "this is my immediate emotional reaction" frame, and then say a few words outside that frame. Like:
"Here's my immediate emotional reaction to the OP:
[indented italicized text]
And here are my first-pass thoughts about physical reality, which are more neutral but might also need to be revised after I learn more or have more time to chew on things:
[indented italicized text]"
This is kinda similar to some stuff I put in my imaginary Shakeel comment above, but being heavy-handed about it might be a lot easier and faster than trying to make it feel like an organic whole.
And I think it has very similar effects to the stuff I was going for, where you get to express the feeling at all, but it's in a container that makes it (a) a bit less likely that you'll trigger others and thereby get into a heated Internet fight, and (b) a bit less likely that your initial emotional reaction will get mistaken (by you or others) for an endorsed, carefully-wordsmithed description of your factual beliefs.
Yeah, this very much sounds to me like a topic where reasonable people can disagree a lot!
Ooooo, this sounds very fun. :) Especially if we can tangent off into science and philosophy debates when it turns out that there's a specific underlying disagreement that explains why we feel differently about a particular case.
To be clear, my criticism of the EA Forum's initial response to the Expo article was never "it's wrong to feel strong emotions in a context like this, and EAs should never publicly express strong emotions", and it also wasn't "it should have been obvious in advance to all EAs that this wasn't a huge deal".
If you thought I was saying either of those things, then I probably fucked up in how I expressed myself; sorry about that!
My criticism of the EA Forum's response was:
I think that EAs made factual claims about the world that weren't warranted by the evidence at the time. (Including claims about what FLI and Tegmark did, claims about their motives, and claims about how likely it is that there are good reasons for an org to want more than a few hours or days to draft a proper public response to an incident like this.) We were overconfident and following poor epistemic practices (and I'd claim this was noticeable at the time, as someone who downvoted lots of comments then).
Part of this is, I suspect, just some level of naiveté about the press, about the base rate of good orgs bungling something or other, etc. Hopefully this example will help people calibrate their priors slightly better.
I think that at least some EAs deliberately leaned into bad epistemic practices here, out of a sense that prematurely and overconfidently condemning FLI would help protect EA's reputation.
The EA Forum sort of "trapped" FLI, by simultaneously demanding that FLI respond extremely quickly, but also demanding that the response be pretty exhaustive ("a full explanation of what exactly happened here", in Shakeel's words) and across-the-board excellent (zero factual errors, excellent displays of empathy, good PR both for reaching EAs and for satisfying the larger non-EA public, etc.). This sort of trap is not a good way to treat anyone, including non-EAs.
I think that many EAs' words and upvote patterns at the time created a social space in which expressing uncertainty, moderation, or counter-narrative beliefs and evidence was strongly discouraged. Basically, we did the classic cancel-culture echo chamber thing, where groups update more and more extremely toward a negative view of X because they keep egging each other on with new negative opinions and data points, while the people with alternative views stay quiet for fear of the social repercussions.
The more general version of this phenomenon is discussed in the Death Spirals sequence, and in videos like ContraPoints' Canceling: there's a general tendency for many different kinds of social network to push themselves toward more and more negative (or more and more positive) views of a thing, when groups don't exert lots of deliberate and unusual effort to encourage dissent, voice moderation, explicitly acknowledge alternative perspectives or counter-narrative points, etc.
I think this is a special risk for EA discussions of heavily politicized topics, so if we want to reliably navigate to true beliefs on such topics (many of which will be a lot messier than the Tegmark case), we'll need to try to be unusually allowing of dissent, disagreement, "but what if X?", etc. on topics that are more emotionally charged. (Hard as that sounds!)
Minor point: I read Jason talking about "stonewalling" as referring to FLI's communications with Expo.se, not to the communications (or lack thereof) with EAs on this Forum.
The paragraph says:
The context is "FLI would have made a statement here", and the rest of the comment doesn't make me think he's talking about Expo either. And it's in reply to Jack and Shakeel's comments, which both seem to be about FLI saying something publicly, not about FLI's interactions with Expo specifically.
And Jeff Kaufman replied to Jason to say "one thing to keep in mind is that organizations can take weirdly long times to make even super obvious public statements", and Jason responded "Good point." The whole context is very "wow, why has FLI not made a public statement", not "wow, why did FLI stonewall Expo".
Still, I appreciate you raising the possibility, since there now seems to be inertia in this comment section against the people who were criticizing FLI, and the same good processes that would have helped people avoid rushing to conclusions in that case, should also encourage some amount of curiosity, patience, and uncertainty in this case.
As should be clear from the follow-up comment posted shortly after that one, I was referring to the nearly one month that had passed between Expo reaching out to FLI and the publication of the article. When Jeff responded by noting reasons an organization might delay in making a statement, I wrote in reply: "A decision was made to send a response (that sounds vaguely threatening/intimidating to my ears) through FLI's lawyer within days." [1] Expo did allege a number of facts that I think can be fairly characterized as stonewalling.
It's plausible that Expo is wildly misrepresenting the substance of its communications with FLI, but the article seems fairly well-sourced to me. If Expo's characterization of the correspondence was unfair, I would expect FLI's initial January 13 statement to have disclosed significant facts that FLI told Expo but that Expo omitted from its article.
Of course, drawing adverse inferences because an organization hasn't provided a response within two hours of a forum discussion starting would be ridiculous (over a holiday weekend in the US, no less!). I wouldn't have thought it was necessary to say that. However, based on the feedback I am getting here, it would have been much better for me to have said something like "I view FLI's reported responses to Expo as stonewalling, and if FLI continues to offer the same responses . . . ." I apologize to FLI and everyone else here that my lack of clarity on that point contributed to a Forum environment that morning that was too ready to settle on conclusions without giving FLI the opportunity to make a statement.
The line that sounded vaguely threatening/intimidating was "Any implication to the contrary would be false"; that is how I would expect a lawyer to vaguely allude to a possible defamation claim when they knew they would never file one. If you've already said X didn't happen, what's the point of that sentence?
My mistake! Sorry for misunderstanding your point, Jason. I really appreciate you clarifying here.
I don't think you have internalized the point: there was no misconduct. If their initial statement was insufficient to convince us of this, that is on us, not on them. Their job as a charity is not to manage a public persona so that you or I continue to look good by affiliation; it's to actually do good. Accusing them of secretly financing Nazis because we're weak and afraid of being tarred by association is the exact polar opposite of doing them a "favor".
First, I'll state that allowing the grant to get past the vetting stage may not have been malicious, but it was incompetent. Tegmark has admitted as much, and proposed changes to remedy this. Finding out at least some of the insidious nature of the newspaper would only have taken half an hour of googling.
The initial responses suggested either incompetence or malice on the part of FLI. I think assuming it was malice was uncalled for and wrong, but it was at the very least a possibility.
Charities rely on donors. Donors do not like being associated with neo-Nazis, however unfairly. Doing basic research on your funding partners is part of a charity's job, to avoid exactly this situation.
It did not make it past the vetting stage.
They did not award the grant.
FWIW, by FLI's own admission this is false, though perhaps you would call stage 5 (see below) the vetting stage.
In section 4) "What was the meaning of FLI's letter of intent?", FLI lays out 7 general stages for grant decision-making.
They say "This proposal made it through 4) in August, then was rejected in November during 5), never reaching 6) or 7)."
Where Stage 2 was: 2) Evaluation and vetting
And Stage 5 was: 5) Further due diligence on grantee
So it would be more accurate to say that it made it past initial vetting, but not further due diligence, and no grant was awarded.
Ah, I hadn't meant to use "vetting stage" as a term of art.
I'd like to ask people not to downvote titotal's comment below zero, because that also hides RobBensinger's timeline. I had to strong-upvote the parent comment to make the timeline visible again.
Sorry, I'm new here, and maybe I'm misunderstanding something, but...
it seems pretty clear that FLI is lying in this statement? Like, here's the published evidence:
https://expo.se/sites/default/files/ed-faksimil-loi.jpg
And here is what they say at the end of the current FAQ:
Do you not feel lied to? There's something wrong here. There's more to this story.
Tegmark's brother published in this place. Expo says, reasonably: "Whether this connection is significant with regards to the promise of funding from Max Tegmark and the Future of Life Institute to Nya Dagbladet is one of the questions we have been trying to put to them, but neither Max Tegmark nor his brother Per Shapiro have commented."
Yet this does not make it to the FAQ, somehow? Like, FLI just refuses to address the suspicious connection here, except to say that Max Tegmark wouldn't have been paid.
You can apologize if you want, but I personally still feel lied to.