Jan 13, 6:46am: Shakeel Hashim (speaking for himself and not for CEA; +110 karma, +109 net agreement as of the 15th) writes, “If this is true it’s absolutely horrifying. FLI needs to give a full explanation of what exactly happened here and I don’t understand why they haven’t. If FLI did knowingly agree to give money to a neo-Nazi group, that’s despicable. I don’t think people who would do something like that ought to have any place in this community.”
Jan 13, 9:18pm: Shakeel follows up, repeating that it’s really weird that FLI hasn’t already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that’s why they haven’t spoken up.
Jan 14, 3:43am: You (titotal) comment, “If the letter is genuine (and they have never denied that it is), then someone at FLI is either grossly incompetent or malicious. They need to address this ASAP. ”
Jan 14, 8:16am: Jason comments (+15 karma, +13 net agreement as of the 15th): “I think it very likely that FLI would have made a statement here if there were an innocent or merely negligent explanation (e.g., the document is a forgery, or they got duped somehow into believing the grantee was related to FLI’s stated charitable purposes and not pro-Nazi). So, unless there is a satisfactory explanation forthcoming, the stonewalling strongly points to a more sinister one.”
To be clear, this is Shakeel saying “I don’t understand why [FLI hasn’t given a full explanation]” six hours after the article came out / two hours after EAs started discussing it, at 9:46am Boston time. (FLI is based in Boston.) And Jason accusing FLI of “stonewalling” one day after the article’s release.
[Update 1⁄21: Jason says that he was actually thinking of FLI stonewalling Expo, not FLI stonewalling the EA Forum. That makes a big difference, though I wish Jason had been clear about this in his comments, since I think the aggregate effect of a bunch of comments like this on the EA Forum was to cause myself and others to think that Tegmark was taking a weirdly long time to reply to the article or to the EA Forum discussion.]
(And I’m only mentioning the explicit condemnation of FLI for not speaking up sooner here. The many highly upvoted and agreevoted EA Forum comments roasting FLI and making confident claims about what happened prior to Tegmark’s comment, with language like “the squalid character of Tegmark’s choices”, are obviously a further reason Tegmark / FLI might have wanted to rush out a response.)
The level of speed-in-replying demanded by EAs in this case (and endorsed by the larger EA Forum community, insofar as we strongly upvoted and up-agreevoted those comments) is frankly absurd, and I do think several apologies are owed here.
(Like, “respond within two hours of a 7am forum post” is wildly absurd even if we’re adopting a norm of expecting people to just blurt out their initial thoughts in real time, warts and errors and all. But it’s even more absurd if we’re demanding carefully crafted Public Statements that make no missteps and have no PR defects.)
Thanks for calling me out on this — I agree that I was too hasty to call for a response.
I’m glad that FLI has shared more information, and that they are rethinking their procedures as a result of this. This FAQ hasn’t completely alleviated my concerns about what happened here — I think it’s worrying that something like this can get to the stage it did without it being flagged (though again, I’m glad FLI seems to agree with this). And I also think that it would have been better if FLI had shared some more of the FAQ info with Expo too.
I do regret calling for FLI to speak up sooner, and I should have had more empathy for the situation they were in. I posted my comments not because I wanted to throw FLI under the bus for PR reasons, but because I was feeling upset; coming on the heels of the Bostrom situation I was worried that some people in the EA community were racist or at least not very sensitive about how discussions of race-related things can make people feel. At the time, I wanted to do my bit to make it clear — in particular to other non-white people who felt similarly to me — that EA isn’t racist. But I could and should have done that in a much better way. I’m sorry.
Thank you for making the apology; you have my approval for that! I also like your apology on the other thread – your words give me hope that CEA is going in a good direction.
Some feedback/reaction from me that I hope is helpful. In describing your motivation for the FLI comment, you say that it was not to throw FLI under the bus, but because of your fear that some people would think EA is racist, and you wanted to correct that. To me, that is a political motivation, not much different from a PR motivation.
To gesture at the difference (in my ontology) between PR/political motivations and truth-seeking motivations:
PR/political
you want people to believe a certain thing (even if it’s something you yourself sincerely believe), in this case, that EA is not racist
it’s about managing impressions and reputations (e.g. EA’s reputation as not racist)
Your initial comment (and also the Bostrom email statement) both struck me as “performative” in how they demonstrated really harsh and absolute condemnation (“absolutely horrifying”, “[no] place in this community”, “recklessly flawed and reprehensible” – granted that you said “if true”, but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking as the first thing you say on a topic[1] feels pretty out of place to me within EA, and certainly isn’t what I want in the EA I would design.
Extreme condemnation pattern matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen on a lot of the Internet, and it feels pretty toxic. It feels like it’s coming from a place of needing to demonstrate “I/we are not the bad thing”.
So even if your motivation was “do your bit to make it clear that EA isn’t racist”, that does strike me as still political/PR (even if you sincerely believe it).
(And I don’t mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to, and harm to your own reputation through association. Upsetness is real and caring about reputation can come from a really good place.)
I could write more on my feelings about PR/political stuff, because my view is not that it’s outright “bad/evil” or anything, more that caution is required.
Truth-seeking / info-propagation
Such comments focus more on sharing the author’s beliefs (not performing them)[2] and explaining how they reached them, e.g. “this is what I think happened, this is why I think that” and inferences they’re making, and what makes sense. They tally uncertainty, and they leave open room for the chance they’re mistaken.
To me, the ideal spirit is “let me add my cognition to the collective so we all arrive at true beliefs” rather than “let me tug the collective beliefs in the direction I believe is correct” or “I need to ensure people believe the correct thing” (and especially not “I need people to believe the correct thing about me”).
My ideal CEA comms strategy would conceive of itself as having the goal of causing people to have accurate beliefs foremost, even when that makes EA look bad. That is the job – not to ensure EA looks good, but to ensure EA is perceived accurately, warts and all.
(And I’m interested in attracting to EA people who can appreciate that large movements have warts, who can tolerate weirdness in beliefs, and who get that movement leaders make mistakes. I want the people who see past that to the ideas and principles that make sense, and to the many people (including you, I’d wager) who are working very hard to make the world better.)
Encouragement
I don’t want to respond to a step in the right direction (a good apology) with something that feels negative, but it feels important to me that this distinction is deeply understood by CEA and EA in general, hence me writing it up for good measure. I hope this is helpful.
To me, the ideal spirit is “let me add my cognition to the collective so we all arrive at true beliefs” rather than “let me tug the collective beliefs in the direction I believe is correct” or “I need to ensure people believe the correct thing.”
I like this a lot.
I’ll add that you can just say out loud “I wish other people believed X” or “I think the correct collective belief here would be X”, in addition to saying your personal belief Y.
(An example of a case where this might make sense: You think another person or group believes Z, and you think they rationally should believe X instead, given the evidence available to them. You yourself believe a more-extreme proposition Y, but you don’t think others have enough evidence to believe Y yet—e.g., your belief may be based on technical expertise or hard-won life-experience that the other parties don’t have.)
It’s possible to care about the group’s beliefs, and try to intervene on them, in a way that’s honest and clear about what you’re doing.
“absolutely horrifying”, “[no] place in this community”, “recklessly flawed and reprehensible”
[...]
That tone and manner of speaking as the first thing you say on a topic[fn] feels pretty out of place to me within EA, and certainly isn’t what I want in the EA I would design.
Speaking locally to this point: I don’t think I agree! My first-pass take is that if something’s horrible, reprehensible, flawed, etc., then I think EAs should just say so. That strikes me as the default truth-seeking approach.[1]
There might be second-order reasons to be more cautious about when and how you report extreme negative evaluations (e.g., to keep forum discussions from degenerating as people emotionally trigger each other), but I would want to explicitly flag that this is us locally departing from the naive truth-seeking approach (“just say what seems true to you”) in the hope that the end result will be more truth-seeky via people having an easier time keeping a cool head.
(Note that I’m explicitly responding to the ‘extreme language’ side of this, not the ‘was this to some extent performative or strategic?’ side of things.)
With the caveat that maybe evaluative judgments in general get in the way of truth-seeking, unless they’re “owned” NVC-style, because of common confusions like “thinking my own evaluations are mind-independent properties of the world”. But if we’re allowing mild evaluative judgments like “OK” or “fine”, then I think there’s less philosophical basis for banning more extreme judgments like “awesome” or “terrible”.
I think I agree with your clarification and was in fact conflating the mere act of speaking with strong emotion with speaking in a way that felt more like a display. Yeah, I do think it’s a departure from naive truth-seeking.
In practice, though, I think it is hard, partly for the second-order reasons you give and others. Perhaps an ideal is that people share strong emotion when they feel it, but in some kind of format/container/manner that doesn’t shut down discussion or get things heated. “NVC” style, perhaps, as you suggest.
Fwiw, I do think “has no place in the community” without being owned as “no place in my community” or “shouldn’t have a place in the community” is probably too high a simulacrum level by default (though this isn’t necessarily a criticism of Shakeel, I don’t remember what exactly his original comment said.)
Cool. :) I think we broadly agree, and I don’t feel confident about what the ideal way to do this is, though I’d be pretty sad and weirded out by a complete ban on expressing strong feelings in any form.
you want people to believe a certain thing (even if it’s something you yourself sincerely believe), in this case that EA is not racist
it’s about managing impressions and reputations (e.g. EA’s reputation as not racist)
Your initial comment (and also the Bostrom email statement) both struck me as “performative” in how they demonstrated really harsh and absolute condemnation (“absolutely horrifying”, “[no] place in this community”, “recklessly flawed and reprehensible” – granted that you said “if true”, but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking as the first thing you say on a topic[1] feels pretty out of place to me within EA, and certainly isn’t what I want in the EA I would design.
Extreme condemnation pattern matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen on a lot of the Internet, and feels pretty toxic. It feels like it’s coming from a place of needing to demonstrate “I/we are not the bad thing”.
So even if your motivation was “do your bit to make it clear that EA isn’t racist”, that does strike me as still political/PR (even if you sincerely believe it)
(And I don’t mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to. Upsetness is real and caring about reputation can come from a really good place.)
I could write more on my feelings about PR/political stuff, because my view is not that it’s outright “bad/evil” or anything, more that caution is required.
IMO, I think this is an area EA needs to be way better in. For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn’t matter that much for social reality, and EA should ideally be persuasive and control social reality.
For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn’t matter that much for social reality, and EA should ideally be persuasive and control social reality.
I think the extent to which nuanced truth does not matter to “most of the world” is overstated.
I additionally think that EA should not be optimizing for deceiving people who belong to the class “most of the world”.
Both because it wouldn’t be useful if it worked (realistically most of the world has very little they are offering) and because it wouldn’t work.
I additionally think that trying to play nitwit political games at or around each hecking other would kill EA as a community and a movement dead, dead, dead.
Thanks for this Shakeel. This seems like a particularly rough time to be running comms for CEA. I’m grateful that in addition to having that on your plate, in your personal capacity you’re helping to make the community feel more supportive for non-white EAs feeling the alienation you point to. Also for doing that despite the emotional labour involved in that, which typically makes me shy away from internet discussions.
Responding swiftly to things seems helpful in service of that support. One of the risks of that is that you can end up taking a particular stance immediately and then finding it hard to back down from it. But in fact you were able to respond swiftly, and then also quickly update and clearly apologise. Really appreciate your hard work!
(Flag that Shakeel and I both work for EV, though for different orgs under that umbrella)
Hey Shakeel, thanks for your apology and update (and I hope you’ve apologized to FLI). Even though call-out culture may be popular or expected in other contexts, it is not professional or appropriate for the Comms Head of CEA to initiate an interaction with an EA org by publicly putting them on blast and seemingly seconding what could be very damaging accusations (as well as inventing others by speculating about financial misconduct). Did you try to contact FLI before publicly commenting to get an idea of what happened (perhaps before they could prepare their statement)?
I appreciate that you apologized for this incident but I don’t think you understand how deep of a problem this behavior is. Get an anonymous account if you want to shoot from the hip. When you do it while your bio says “Head of Communications at CEA” it comes with a certain weight. Multiplying unfounded accusations, toward another EA org no less, is frankly acting in bad faith in a communications role.
Even though call-out culture may be popular or expected in other contexts, it is not professional or appropriate for the Comms Head of CEA to initiate an interaction with an EA org by publicly putting them on blast and seemingly seconding what could be very damaging accusations (as well as inventing others by speculating about financial misconduct). Did you try to contact FLI before publicly commenting to get an idea of what happened (perhaps before they could prepare their statement)?
For what it’s worth, this seems like the wrong way around to me. I don’t know exactly about the role and responsibilities of the “Head of Comm”, but in-general I would like people in EA to be more comfortable criticizing each other, and to feel less constrained to first air all criticism privately and resolve things behind closed doors.
I think the key thing that went wrong here was the absence of a concrete logical argument or probabilities about why the thing that was happening was actually quite bad, and also the time pressure, which made the context of the conversation much worse. Another big thing was also jumping to conclusions about FLI’s character in a way that felt like it was trying to apply direct political pressure instead of focusing on propagating accurate information.
it is not professional or appropriate for the Comms Head of CEA to initiate an interaction with an EA org by publicly putting them on blast and seemingly seconding what could be very damaging accusations
Maybe there are special rules that EA comms people (or the CEA comms person in particular) should follow; I possibly shouldn’t weigh in on that, since I’m another EA comms person (working at MIRI) and might be biased.
My initial thought, however, is that it’s good for full-time EAs on the current margin to speak more from their personal views, and to do less “speaking for the organizations”. E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen.
My criticism of Shakeel’s post is very different from yours, and is about how truth-seeking the contents are and how well they incentivize truth-seeking from others, not about whether it’s inherently unprofessional for particular EAs to strongly criticize other EAs.
Get an anonymous account if you want to shoot from the hip.
This seems ~strictly worse to me than making a “Shakeel-Personal” account separate from “Shakeel-CEA”. It might be useful to have personal takes indexed separately (though I’d guess this is just not necessary, and would add friction and discourage people from sharing their real takes, which I want them to do more). But regardless, I don’t think it’s better to add even more of a fog of anonymity to EA Forum discussions, if someone’s willing to just say their stuff under their own name.
I’m glad anonymity is an option, but the number of anons in these discussions already makes it hard to know how much I might be double-counting views, makes it hard to contextualize comments by knowing what world-view or expertise or experience they reflect, makes it hard to have sustained multi-month discussions with a specific person where we gradually converge on things, etc.
Idk I think it might be pretty hard to have a role like Head of Communications at CEA and then separately communicate your personal views about the same topics. Your position is rather unique for allowing that. I don’t see CEA becoming like MIRI in this respect. It comes across as though he’s saying this in his professional capacity when you hover over his account name and it says “Head of Communications at CEA”.
But the thing I think is most important about Shakeel’s job is that it means he should know better than to throw around and amplify allegations. A marked personal account would satisfy me but I would still hold it to a higher standard re: gossip since he’s supposed to know what’s appropriate. And I expect him to want EA orgs to succeed! I don’t think premature callouts for racism and demands to have already apologized are good-faith criticism to strengthen the community.
I mean, I want employees at EA orgs to try to make EA orgs succeed insofar as that does the most good, and try to make EA orgs fail insofar as that does the most good instead. Likewise, I want them to try to strengthen the EA community if their model says this is good, and to try to weaken it (or just ignore it) otherwise.
(Obviously, in each case I’d want them to be open and honest about what they’re trying to do; you can oppose an org you think is bad without doing anything unethical or deceptive.)
I’m not sure what I think CEA’s role should be in EA. I do feel more optimistic about EA succeeding if major EA orgs in general focus more on developing a model of the world and trying to do the most good under their idiosyncratic world-view, rather than trying to represent or reflect EA-at-large; and I feel more optimistic about EA if sending our best and brightest to work at EA orgs doesn’t mean that they have to do massively more self-censoring now.
Maybe CEA or CEA-comms is an exception, but I’m not sold yet. I do think it’s good to have high epistemic standards, but I see that as compatible with expressing personal feelings, criticizing other orgs, wanting specific EA orgs to fail, etc.
For what it’s worth, speaking as a non-comms person, I’m a big fan of Rob Bensinger style comms people. I like seeing him get into random twitter scraps with e/acc weirdos, or turning obnoxious memes into FAQs, or doing informal abstract-level research on the state of bioethics writing. I may be biased specifically because I like Rob’s contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI’s preferred public image was poured, but, look, I also just find that type of job description obviously offputting. In general I liked getting to know the EAs I’ve gotten to know, and I don’t know Shakeel that well, but I would like to get to know him better. I certainly am averse to the idea of wrist slapping him back into this empty vessel to the extent that we are blaming him for carelessness even when he specifies very clearly that he isn’t speaking for his organization. I do think that his statement was hasty, but I also think we need to be forgiving of EAs whose emotions are running a bit hot right now, especially when they circle back to self-correct afterwards.
I like Rob’s contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI’s preferred public image was poured
I think this would also just be logically inconsistent; MIRI’s preferred public image is that we not be the sort of org that turns people into vessels of perfect public emptiness into which the disembodied spirit of our preferred public image is poured.
“My initial thought, however, is that it’s good for full-time EAs on the current margin to speak more from their personal views, and to do less “speaking for the organizations”. E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen.”
This seems a little naive. “We were all getting millions of dollars from this guy with billions to come, he’s personal friends with all the movement leaders, but if we had had more open discussions we would not have taken the millions...really??”
also if you’re in line to get millions of $$$ from someone of course you are never going to share your candid thoughts about them publicly under your real name!
This seems a little naive. “We were all getting millions of dollars from this guy with billions to come, he’s personal friends with all the movement leaders, but if we had had more open discussions we would not have taken the millions...really??”
I didn’t say a specific prediction about what would have happened differently if EAs had discussed their misgivings about SBF more openly. What I’d say is that if you took a hundred SBF-like cases with lots of the variables randomized, outcomes will be a lot better if people discuss early serious warning signs and serious misgivings in public.
That will sometimes look like “turning down money”, sometimes like “more people poke around to learn more”, sometimes like “this person is less able to win others’ trust via their EA associations”, sometimes like “fewer EAs go work for this guy”.
Sometimes it won’t do anything at all, or will be actively counterproductive, because the world is complicated and messy. But I think talking about this stuff and voicing criticisms is the best general policy, if we’re picking a policy to apply across many different cases and not just using hindsight to ask what an omniscient person would do differently in the specific case of FTX.
also if you’re in line to get millions of $$$ from someone of course you are never going to share your candid thoughts about them publicly under your real name!
I mean, Open Philanthropy is MIRI’s largest financial supporter, and
Makes sense to me! I appreciate knowing your perspective better, Shakeel. :)
On reflection, I think the thing I care about in situations like this is much more “mutual understanding of where people were coming from and where they’re at now”, whether or not anyone technically “apologizes”.
Apologizing is one way of communicating information about that (because it suggests we’re on the same page that there was a nontrivial foreseeable-in-advance fuck-up), but IMO a comment along those lines could be awesome without ever saying the words “I’m sorry”.
One of my concerns about “I’m sorry” is that I think some people think you can only owe apologies to Good Guys, not to Bad Guys. So if there’s a disagreement about who the Good Guys are, communities can get stuck arguing about whether X should apologize for Y, when it would be more productive to discuss upstream disagreements about facts and values.
I think some people are still uncertain about exactly how OK or bad FLI’s actions here were, but whether or not FLI fucked up badly here and whether or not FLI is bad as an org, I think the EA Forum’s response was bad given the evidence we had at the time. I want our culture to be such that it’s maximally easy for us to acknowledge that sort of thing and course-correct so we do better next time. And my intuition is that a sufficiently honest explanation of where you were coming from, that’s sufficiently curious about and open to understanding others’ perspectives, and sufficiently lacking in soldier-mindset-style defensiveness, can do even more than an apology to contribute to a healthy culture.
(In this case the apology is to FLI/Max, not to me, so it’s mostly none of my business. 😛 But since I called for “apologies” earlier, I wanted to consider the general question of whether that’s the thing that matters most.)
I find myself disliking this comment, and I think it’s mostly because it sounds like you 1) agree with many of the blunders Rob points out, yet 2) don’t seem to have learned anything from your mistake here? I don’t think many do or should blame you, and I’m personally concerned about repeated similar blunders on your part costing EA a great deal of outside reputation and internal trust.
Like, do you think that the issue was that you were responding in heat, and if so, will you make a future policy of not responding in heat in future similar situations?
I feel like there are deeper problems here that won’t be corrected by such a policy, and your lack of concreteness is an impediment to communicating such concerns about your approach to CEA comms (and is itself a repeated issue that won’t be corrected by such a policy).
FWIW, I don’t really want Shakeel to rush into making public promises about his future behavior right now, or big public statements about long-term changes to his policies and heuristics, unless he finds that useful for some reason. I appreciated hearing his thoughts, and would rather leave him space to chew on things and figure out what makes sense for himself. If he or CEA make the wrong updates by my lights, then I expect that to be visible in future CEA/Shakeel actions, and I can just wait and criticize those when they happen.
FTX collapsed on November 8th; all the key facts were known by the 10th; CEA put out their statement on November 12th. This is a totally reasonable timeframe to respond. I would have hoped that this experience would make CEA sympathetic to a fellow EA org (with far fewer resources than CEA) experiencing a media crisis, rather than being so quick to condemn.
I’m also not convinced that a Head of Communications, working for an organization with a very restrictive media policy for employees, commenting on a matter of importance for that organization, can really be said to be operating in a personal capacity. Despite claims to the contrary, I think it’s pretty reasonable to interpret these as official CEA communications. Skill at a PR role is as much about what you do not say as what you do.
The eagerness with which people rushed to condemn is frankly a warning sign for involution. We have to stop it with the pointless infighting or it’s all we will end up doing.
Just a quick note to say I don’t think everything in your comment above is an entirely fair characterisation of the comments.
Two specific points (I haven’t checked everything you say above, so I don’t claim this is exhaustive):
I think you’re mischaracterising Shakeel’s 9.18pm response quite significantly. You paraphrased him as saying he sees no reason FLI wouldn’t have released a public statement, but that is, I think, neither the text nor the spirit of that comment. He specifically acknowledged he might be missing some reasons. He said he thinks the lack of response is “very weird”, which seems pretty different to me from “I see no reason for this”. Here’s some quoting, but it’s so short people can just read the comment :P “Hi Jack — reasonable question! When I wrote this post I just didn’t see what the legal problems might be for FLI… Jason’s comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense. I do still think it’s very weird that FLI hasn’t condemned Nya Dagbladet though”
You also left out that Shakeel did already apologise to Max Tegmark for, in his words, “jumping to conclusions” when Max explained a reason for the delay, which I think is relevant to the timeline you’re setting out here.
I think both those things are relevant to how reasonable some of these comments were and to what extent apologies might be owed.
I think you’re mischaracterising Shakeel’s 9.18pm response quite significantly.
The comments are short enough that I should probably just quote them here:
Comment 1: “The following is my personal opinion, not CEA’s. If this is true it’s absolutely horrifying. FLI needs to give a full explanation of what exactly happened here and I don’t understand why they haven’t. If FLI did knowingly agree to give money to a neo-Nazi group, that’s despicable. I don’t think people who would do something like that ought to have any place in this community.”
Comment 2: “Hi Jack — reasonable question! When I wrote this post I just didn’t see what the legal problems might be for FLI. With FTX, there are a ton of complications, most notably with regards to bankruptcy/clawbacks, and the fact that actual crimes were (seemingly) committed. This FLI situation, on face value, didn’t seem to have any similar complications — it seemed that something deeply immoral was done, but nothing more than that. Jason’s comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense. I do still think it’s very weird that FLI hasn’t condemned Nya Dagbladet though — CEA did, after all, make it very clear very quickly what our stance on SBF was.”
My summary of comment 2: “Shakeel follows up, repeating that he sees no reason why FLI wouldn’t have already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that’s why they haven’t spoken up.”
I think this is a fine summary of the gist of Shakeel’s comment — obviously there isn’t literally “no reason” here (that would contradict the very next part of my sentence, “and raises the possibility that FLI has maybe done sinister questionably-legal things and that’s why they haven’t spoken up”), but there’s no good reason Shakeel can see, and Shakeel reiterates that he thinks “it’s very weird that FLI hasn’t condemned Nya Dagbladet”.
The main thing I was trying to point at is that Shakeel’s first comment says “I don’t understand” why FLI hasn’t given “a full explanation of exactly what happened here” (the implication being that there’s something really weird and suspicious about FLI not having already released a public statement), and Shakeel’s second comment doubles down on that basic perspective (it’s still weird and suspicious / he can’t think of an innocent explanation, though he acknowledges a non-innocent explanation).
That said, I think this is a great context to be a stickler about saying everything precisely (rather than relying on “gists”), and I’m generally a fan of the ethos that cares about precision and literalness. 🙂 Being completely literal, “he sees no reason” is flatly false (at least if ‘seeing no reason’ means ‘you haven’t thought of a remotely plausible motivation that might have caused this behavior’).
I’ll edit the comment to say “repeating that it’s really weird that FLI hasn’t already made a public statement”, since that’s closer to being a specific sentiment he expresses in both comments.
You also left out that Shakeel did already apologise to Max Tegmark for, in his words, “jumping to conclusions” when Max explained a reason for the delay, which I think is relevant to the timeline you’re setting out here.
I think this is a different thing, but it’s useful context anyway, so thanks for adding it. :)
I upvoted this, but disagreed. I think the timeline would be better if it included:
November 2022: FLI inform Nya Dagbladet Foundation (NDF) that they will not be funding them
15 December 2022: FLI learn of media interest in the story
I therefore don’t think it’s “absurd” to have expected FLI to have repudiated NDF sooner. You could argue that apologising for their mistake before there was media interest would have done more harm than good by drawing attention to it (and, by association, to NDF), but once they became aware of the media attention, I think they should have issued something more like their current statement.
I also agreed with the thrust of titotal’s comment that their first statement was woefully inadequate (it was more like “nothing to see here” than “oh damn, we seriously considered supporting an odious publication and we’re sorry”). I don’t think lack of time gets them off the hook here, given they should have expected Expo to publish at some point.
I don’t think anyone owes an apology for expecting FLI to do better than this.
(Note: I appreciate Max Tegmark was dealing with a personal tragedy (for which, my condolences) at the time of it becoming ‘a thing’ on the EA Forum, so I of course wouldn’t expect him to be making quick-but-considered replies to everything posted on here at that time. But I think there’s a difference between that and the speed of the proper statement.)
***
FWIW I also had a different interpretation of Shakeel’s 9:18pm comment than what you write here:
“Jan 13, 9:18pm: Shakeel follows up, repeating that he sees no reason why FLI wouldn’t have already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that’s why they haven’t spoken up.”
Shakeel said “Jason’s comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense.” → this seemed to me that Shakeel was trying to to be charitable, and understand the reasons FLI hadn’t replied quicker.
Only a subtle difference, but wanted to point that out.
November 2022: FLI inform Nya Dagbladet Foundation (NDF) that they will not be funding them
15 December 2022: FLI learn of media interest in the story
Yeah, if the early EA Forum comments had explicitly said “FLI should have said something public about this as soon as they discovered that NDF was bad”, “FLI should have said something public about this as soon as Expo contacted them”, or “FLI should have been way more responsive to Expo’s inquiries”—and if we’d generally expressed a lot more uncertainty and been more measured in what we said in the first few days—then I might still have disagreed, but I wouldn’t have seen this as an embarrassingly bad response in the same way.
I, as a casual reader who wasn’t trying to carefully track all the timestamps, had no idea when I first skimmed these threads on Jan. 13-14 that the article had only come out a few hours earlier, and I didn’t track timestamps carefully enough to register just how fast the EA Forum went from “a top-level post exists about this at all” to “wow, FLI is stonewalling us” and “wow, there must be something really sinister here given that FLI still hasn’t responded”. I feel like I was misled by these comments, because I just took for granted (to some degree) that the people writing these highly upvoted comments were probably not saying something transparently silly.
If a commenter like Jason thought that FLI was “stonewalling” because they didn’t release a public statement about this in December, then it’s important to be explicit about that, so casual readers don’t come away from the comment section thinking that FLI is displaying some amazing level of unresponsiveness to the forum post or to the news article.
once they became aware of the media attention, I think they should have issued something more like their current statement.
This is less obvious to me, if they didn’t owe a public response before Expo reached out to them. A lot of press inquiries don’t end up turning into articles, and if the goal is to respond to press coverage, it’s often better to wait and see what’s in the actual article, since you might end up surprised about the article’s contents.
I don’t think anyone owes an apology for expecting FLI to do better than this.
“Do better than this”, notably, is switching out concrete actions for a much more general question, one that’s closer to “What’s the correct overall level of affect we should have about FLI right now?”.
If we’re going to have “apologize when you mess up enough” norms, I think they should be more about evaluating local process, and less about evaluating the overall character of the person you’re apologizing to. (Or even the character-in-this-particular-case, since it’s possible to owe someone an apology even if that person owes an apology too.) “Did I fuck-up when I did X?” should be a referendum on whether the local action was OK, not a referendum on the people you fucked up at.
More thoughts about apology norms in my comment here.
Thanks for this comment and timeline, I found it very useful.
I agree that “respond within two hours of a 7am forum post” seems like an unreasonable standard, and I also agree that some folks rushed too quickly to condemn FLI or make assumptions about Tegmark’s character/choices.
I do want to illustrate a related point: When the Bostrom news hit, many folks jumped to defend Bostrom’s apology as reasonable because it consisted of statements that Bostrom believed to be true, and that this reflects truth-seeking and good epistemics, and this should be something that the forum and community should uphold.
But if I look at Jason’s comment, “So, unless there is a satisfactory explanation forthcoming, the stonewalling strongly points to a more sinister one.”
There is actually nothing technically untrue about this statement? There WAS a satisfactory explanation that eventuated.
Similarly, if I look at Shakeel’s comment, the condemnation is conditional on if the events happened: “If this is true it’s absolutely horrifying”, “If FLI did knowingly agree to give money to a neo-Nazi group, that’s despicable”, “I don’t think people who would do something like that ought to have any place in this community”.
The sentence about FLI speaking up sooner reflects Shakeel expressing his desire for FLI to give a full explanation, and his confusion about why this has not yet happened; but reading the text of that statement, there’s actually no “explicit condemnation of FLI for not speaking up sooner”.
Now, I raise these points not because I’m interested in defending Shakeel or Jason; the subtext does matter, and it’s somewhat reasonable to read those statements and interpret them as explicit condemnation of FLI for not speaking up sooner, and to push back accordingly.
But I’m just noting that there are a lot of upvotes on Rob’s comment, and quite a few voices (I think rightfully!) saying that some commentors were too quick to jump to conclusions about Tegmark or FLI. But I don’t see any commentors defending Jason or Shakeel’s statements with the “truth-seeking” and “good epistemics” argument that was being used to defend Bostrom’s apology.
Do you have any thoughts on the explanations for what seem like an inconsistent application of upholding these standards? It might not even be accurately characterized as an inconsistency, I’m likely missing something here.
I expect this comment will just get reflexively downvoted given how tribal the commentary on the forum is these days, but I am curious about what drives this perceived difference, especially from those who self-identify as high decouplers, truth-seekers, or those who place themselves in the “prioritize epistemics” camp.
There is actually nothing technically untrue about this statement?
[...]
Do you have any thoughts on the explanations for what seem like an inconsistent application of upholding these standards? It might not even be accurately characterized as an inconsistency, I’m likely missing something here.
“Technically not saying anything untrue” isn’t the same as “exhibiting a truth-seeking attitude.”
I’d say truth-seeking attitude would have been more like “Before we condemn FLI, let’s make sure we understand their perspective and can assess what really happened.” Perhaps accompanied by “I agree we should condemn them harshly if the reporting is roughly as it looks like right now.” Similar statement, different emphasis. Shakeel’s comment did appropriate hedging, but its main content was sharing a (hedged) judgment/condemnation.
Edit: I still upvoted your comment for highlighting that Shakeel (and Jason) hedged their comments. I think that’s mostly fine! In hindsight, though, I agree with the sentiment that the community discussion was tending towards judgment a bit too quickly.
I agree with the sentiment that the community discussion was tending towards judgment a bit too quickly.
Yeah, I agree! I think my main point is to illustrate that the impression you got of the community discussion “tending towards judgement a bit too quickly” is pretty reasonable despite the technically true statements that they made, because it rests on a reading of the subtext (including what they didn’t say or chose to focus on) rather than the literal text alone. That difference felt like a major crux between those who thought Bostrom’s apology was largely terrible vs. those who thought it was largely acceptable.
“Technically not saying anything untrue” isn’t the same as “exhibiting a truth-seeking attitude.”
Likewise, I also agree with this! I think what I’m most interested in here is like, what you (or others) think separates the two in general, because my guess is those who were upset with Bostrom’s apology would also agree with this statement. I think the crux is more likely that they would also think this statement applies to Bostrom’s comments (i.e. they were closer to “technically not saying anything untrue”, rather than “exhibiting a truth-seeking attitude”), while those who disagree would think “Bostrom is actually exhibiting a truth-seeking attitude”.
For example, if I apply your statement to Bostrom’s apology: “I’d say truth-seeking attitude would have been more like: “Before I make a comment that’s strongly suggestive of a genetic difference between races, or easily misinterpreted to be a racist dogwhistle, let’s make sure I understand their perspective and can assess how this apology might actually be interpreted”, perhaps accompanied by “I think I should make true statements if I can make sure they will be interpreted to mean what my actual views are, and I know they are the true statements that are most relevant and important for the people I am apologizing to.”
Similar statement, different emphasis. Bostrom’s comment was “technically true”, but its main content was less about an apology and more about raising questions around a genetic component of intelligence, expression of support for some definition of eugenics and some usage of provocative communication.”
I think my point is less that “Shakeel and Jason’s comments are fine because they were hedged”, and less about pointing out the empirical fact that they were hedged, and more that “Shakeel and Jason’s comments were not fine just because they contained true statements, but this standard should be applied similarly to Bostrom’s apology, which was also not fine just because it contained true statements”.
More speculative: Like, part of me gets the impression this is in part modulated by a dislike of the typical SJW cancel culture (which I can resonate with), and therefore the truth-seeking defence is applied more strongly against condemnation of any kind, as opposed to just truth-seeking for truth’s sake. But I’m not sure that this, if true, is actually optimizing for truth, nor that it’s necessarily the best approach on consequentialist grounds, unless there’s good reason to think that a heuristic to err on the side of anti-condemnation in every situation is preferable to evaluating each on a case-by-case basis.
That makes sense – I get why you feel like there are double standards.
I don’t agree that there necessarily are.
Regarding Bostrom’s apology, I guess you could say that it’s part of “truth-seeking” to dive into any mistakes you might have made and acknowledge everything there is to acknowledge. (Whether we call it “truth-seeking” or not, that’s certainly how apologies should be, in an ideal world.) On this point, Bostrom’s apology was clearly suboptimal. It didn’t acknowledge that there was more bad stuff to the initial email than just the racial slur.
Namely, in my view, it’s not really defensible to say “technically true” things without some qualifying context, if those true things are easily interpreted in a misleadingly-negative or harmful-belief-promoting way on their own or even interpreted as, as you say, “racist dogwhistles.” (I think that phrase is sometimes thrown around so lightly that it seems a bit hysterical, but it does seem appropriate for the specific example of the sentence Bostrom claimed he “likes.”)
Take for example a newspaper reporting on a person with autism who committed a school shooting. Given the widespread stigma against autism, it would be inappropriate to imply that autism is linked to these types of crimes without some sort of very careful discussion that doesn’t make readers prejudiced against people on the spectrum. (I don’t actually know if there’s any such link.)
What I considered bad about Bostrom’s apology was that he didn’t say more about why his entire stance on “controversial communication” was a bad take.
Context matters: The initial email was never intended to be seen by anyone who wasn’t in that early group of transhumanists. In a small, closed group, communication functions very differently. For instance, among EA friends, I’ve recently (after the FTX situation) made a joke about how we should run a scam to make money. The joke works because my friends have enough context to know I don’t mean it. I wouldn’t make the same joke in a group where it isn’t common knowledge that I’m joking. Similarly, while I don’t know much about the transhumanist reading list, it’s probably safe to say that “we’re all high-decouplers and care about all of humanity” was common knowledge in that group. Given that context, it’s sort of defensible to think that there’s not that much wrong with the initial email (apart from cringiness) other than the use of the racial slur. Bostrom did apologize for the latter (even viscerally, and unambiguously).
I thought there was some ambiguity in the apology about whether he was just apologizing for the racial slur, or whether he also meant just the general email when he described how he hated re-reading it. When I said that the apology was “reasonable,” I interpreted him to mean the general email. I agree he could have made this more clear.
In any case, that’s one way to interpret “truth-seeking” – trying to get to the bottom of any mistakes that were made when apologizing.
That said, I think almost all the mentions of “truth-seeking is important” in the Bostrom discussion were about something else.
There was a faction of people who thought that people should be socially shunned for holding specific views on the underlying causes of group differences. Another faction was like “it should be okay to say ‘I don’t know’ if you actually don’t know.”
While a few people criticized Bostrom’s apology for reasons similar to the ones I mentioned above (which I obviously think is reasonable!), my impression is that the people who were most critical of it did so for the “social shunning for not completely renouncing a specific view” reason.
For what it’s worth, I agree that emphasis on truth-seeking can go too far. While I appreciated this part of EA culture in the discussion around Bostrom, I’ve several times found myself accusing individual rationalists of fetishizing “truth-seeking.” :)
So, I certainly don’t disagree with your impression that there can be biases on both sides.
I wanted to say a bit about the “vibe” / thrust of this comment when it comes to community discourse norms...
(This is somewhat informed by your comments on twitter / facebook, which themselves are phrased more strongly than this and are less specific in scope.)
I suspect you and I agree that we should generally encourage posters to be charitable in their takes and reasonable in their requests—and it would be bad overall for discussions in general were this not the case. Being angry on the internet is often not at all constructive!
However, I think that being angry or upset where it seems like an organisation has done something egregious is very often an appropriate emotional response to feel. I think that the ideal amount of expressing that anger / upset that community norms endorse is non-zero! And yes when people are hurt they may go somewhat too far in what they request / suggest / speculate. But again the optimal amount of “too strong requests” is non-zero.
I think that expressing those feeling of hurt / anger / upset explicitly (or implicitly expressing them through the kinds of requests one is making) has many uses and there are costs to restricting it too much.
Some uses to expressing it:
Conveying the sheer seriousness or importance of the question to the poster. That can be useful information for the organisation under scrutiny about whether / how much people think they messed up (which itself is information about whether / how much they actually messed up). It will lead to better outcomes if organisations in fact get the information that some people are deeply hurt by their actions. If the people who are deeply hurt cannot / do not express this, the organisation will not know.
Individuals within a community expressing values they hold dear (and which of those are strong enough to provoke the strongest emotional reaction) is part of how a community develops and maintains norms about behaviour that is / isn’t acceptable.
Some costs to restricting it:
People who have stronger emotional reactions are often closer to the issue. It is very hard when you feel really hurt by something to have to reformulate that in terms acceptable to people who are not at all affected by the thing.
If people who are really hurt by something get the impression from community norms that expressing their hurt is not welcome they may well not feel welcome in the community at all. This seems extra bad if you care about diversity in the community and certain issues affect certain groups more. (E.g. antisemitism, racism, sexism etc.)
If people who are really hurt by something do not post, the discourse will be selected towards people who aren’t hurt / don’t care as strongly. That will systematically skew the discussion towards a specific set of reactions and lead you further away from understanding what people across the community actually think about something.
I think that approaching online discussions on difficult topics is really really hard! I do not think I know what the ideal balance is. I have almost never before participated in such discussions and I’m personally finding my feet here. I am not arguing in favour of carte blanche for people making unreasonable angry demands.
But I want to push back pretty strongly against the idea that people should never be able to post hurt / upset comments or that the comments above seem very badly wrong. (Or that they warrant the things you said on facebook / twitter about EA discourse norms)
P.S. I’m wondering whether you would agree with me for all the above if the organisational behaviour was egregious enough by your / anyone’s lights? [Insert thought experiment here about shockingly beyond the pale behaviour by an organisation that people on the forum express angry comments about]. If yes, then we just disagree on where / how to draw the line not that there is a line at all. If not, then I think we have a more fundamental disagreement about how humans can be expected to communicate online.
I see “clearly expressing anger” and “posting when angry” as quite different things.
I endorse the former, but I rarely endorse the latter, especially in contexts like the EA Forum.
Let’s distinguish different stages of anger:
The “hot” kind—when one is not really thinking straight, prone to exaggeration and uncharitable interpretations, etc.
The “cool” kind—where one can think roughly as clearly about the topic as any other.
We could think of “hot” and “cold” anger as a spectrum.
Most people experience hot anger from time to time. But I think EA figures—especially senior figures—should model a norm of only posting on the EA Forum when fairly cool.
My impression is that, during the Bostrom and FLI incidents, several people posted with considerably more hot anger than I would endorse. In these cases, I think the mistake has been quite harmful, and may warrant public and private apologies.
As a positive example: Peter Hurford’s blog post, which he described as “angry”, showed a level of reasonableness and clarity that made it, in my mind, “above the bar” to publish. The text suggests a relatively cool anger. I disagree with some parts of the post, but I am glad he published it. At the meta-level, my impression is that Peter was well within the range of “appropriate states of mind” for a leadership figure to publish a message like that in public.
I’m not sure how I feel about this proposed norm. I probably think that senior EA figures should at least sometimes post when they’re feeling some version of “hot anger”, as opposed to literally never doing this.
The way you defined “cool vs. hot” here is that it’s about thinking straight vs. not thinking straight. Under that framing, I agree that you shouldn’t post comments when you have reason to suspect you might temporarily not be thinking straight. (Or you should find a way to flag this concern in the comment itself, e.g., with an epistemic status disclaimer or NVC-style language.)
But you also call these “different stages of anger”, which suggests a temporal interpretation: hot anger comes first, followed by cool. And the use of the words “hot” and “cool”, to my ear, also suggests something about the character of the feeling itself.
I feel comfortable suggesting that EAs self-censor under the “thinking straight?” interpretation. But if you’re feeling really intense emotion and it’s very close in time to the triggering event, but you think you’re nonetheless thinking straight — or you think you can add appropriate caveats and context so people can correct for the ways in which you’re not thinking straight — then I’m a lot more wary about adding a strong “don’t say what’s on your mind” norm here.
I suspect you and I agree that we should generally encourage posters to be charitable in their takes and reasonable in their requests
I think “charity” isn’t quite the right framing here, but I think we should encourage posters to really try to understand each other; to ask themselves “what does this other person think the physical world is like, and what evidence do I have that it’s not like that?”; to not exaggerate how negative their takes are; and to be mindful of biases and social dynamics that often cause people to have unrealistically negative beliefs about The Other Side.
However, I think that being angry or upset where it seems like an organisation has done something egregious is very often an appropriate emotional response to feel. I think that the ideal amount of expressing that anger / upset that community norms endorse is non-zero!
I 100% agree! I happened to write something similar here just before reading your comment. :)
From my perspective, the goal is more “have accurate models” and “be honest about what your models are”. In interpersonal contexts, the gold standard is often that you’re able to pass someone else’s ideological Turing Test.
Sometimes, your model really is that something is terrible! In cases like that, I think we should be pretty cautious about discouraging people from sharing what they really think about the terrible thing. (Like, I think “be civil all the time”, “don’t rock the boat”, “be very cautious about criticizing other EAs” is one of the main processes that got in the way of people like me hearing earlier about SBF’s bad track record — I think EAs in the know kept waaay too quiet about this information.)
It’s true that there are real costs to encouraging EAs to routinely speak up about their criticisms — it can make the space feel more negative and aversive to a lot of people, which I’d expect to contribute to burnout and to some people feeling less comfortable honestly expressing their thoughts and feelings.
I don’t know what the best solution is (though I think that tech like NVC can help a whole lot), but I’d be very surprised if the best solution involved EAs never expressing actually intense feelings in any format, no matter how much the context cries for it.
Sometimes shit’s actually just fucked up, and I’d rather a community where people can say as much (even if not everyone agrees) than one where we’re all performatively friendly and smiley all the time.
If people who are really hurt by something do not post, the discourse will be selected towards people who aren’t hurt / don’t care as strongly. That will systematically skew the discussion towards a specific set of reactions and lead you further away from understanding what people across the community actually think about something.
Seems right. Digging a bit deeper, I suspect we’d disagree about what the right tradeoff to make is in some cases, based on different background beliefs about the world and about how to do the most good.
Like, we can hopefully agree that it’s sometimes OK to pick the “talk in a way that hurts some people and thereby makes those people less likely to engage with EA” side of the tradeoff. An example of this is that some people find discussion of food or veg*nism triggering (e.g., because they have an ED).
We could choose to hide discussion of animal products from the EA Forum in order to be more inclusive to those people; but given the importance of this topic to a lot of what EA does today, it seems more reasonable to just accept that we’re going to exclude a few people (at least from spaces like the EA Forum and EA Global, where all the different cause areas are rubbing elbows and it’s important to keep the friction on starting animal-related topics very low).
If we agree that it’s ever OK to pick the “talk in way X even though it hurts some people” side of the tradeoff, then I think we have enough common ground that the remaining disagreements can be resolved (given enough time) by going back and forth about what sort of EA community we think has the best chance of helping the world (and about how questions of interpersonal ethics, integrity, etc. bear on what we should do in practice).
(Or that they warrant the things you said on facebook / twitter about EA discourse norms)
Oh, did I say something wrong? I was imagining that all the stuff I said above is compatible with what I’ve said on social media. I’d be curious which things you disagree with that I said elsewhere, since that might point at other background disagreements I’m not tracking.
Just a quick note to say thanks for such a thoughtful response! <3
I think you’re doing a great job here modelling discourse norms and I appreciate the substance of your points!
Ngl I was kinda trepidatious opening the forum… but the reasonableness of your reply and warmth of your tone is legit making me smile! (It probably doesn’t hurt that happily we agree more than I realised. :P )
I may well write a little more substantial response at some point but will likely take a weekend break :)
P.S. Real quick re social media… Things I was thinking about were phrases from fb like “EAs f’d up” and the “fairly shameful initial response” — which I wondered were stronger than what you were expressing here, but probably it’s just you saying the same thing. And in this twitter thread you talk about the “cancel mob” — but I think you’re talking there about a general case. You don’t have to justify yourself on those; I’m happy to read it all via the lens of the comments you’ve written on this post.
Aw, that makes me really happy to hear. I’m surprised that it made such a positive difference, and I update that I should do it more!
(The warmth part, not the agreement part. I can’t really control the agreement part, if we disagree then we’re just fucked. 🙃😛)
Re the social media things: yeah, I stand by that stuff, though I basically always expect reasonable people to disagree a lot about exactly how big a fuck-up is, since natural language is so imprecise and there are so many background variables we could disagree on.
I feel a bit weird about the fact that I use such a different tone in different venues, but I think I like this practice for how my brain works, and plan to keep doing it. I definitely talk differently with different friends, and in private vs. public, so I like the idea of making this fact about me relatively obvious in public too.
I don’t want to have such a perfect and consistent public mask/persona that people think my public self exactly matches my private self, since then they might come away deceived about how much to trust (for example) that my tone in a tweet exactly matches the emotions I was feeling when I wrote it.
I want to be honest in my private and public communications, but (even more than that) I want to be meta-honest, in the sense of trying to make it easy for people to model what kind of person I am and what kinds of things I tend to be more candid about, what it might mean if I steer clear of a topic, etc.
Trying too hard to look like I’m an open book who always says what’s on his mind, never self-censors in order to look more polite on the EA Forum, etc. would systematically cause people to have falser beliefs about the delta between “what Rob B said” and “what Rob B is really thinking and feeling right now”. And while I don’t think I owe everyone a full print-out of my stream of consciousness, I do sorta feel like I owe it to people to not deliberately make it sound like I’m more transparent than I am.
This is maybe more of a problem for me than for other people: I’m constantly going on about what a big fan of candor and blurting I am, so I think there’s more risk of people thinking I’m a 100% open book, compared to the risk a typical EA faces.
So, to be clear: I don’t advocate that EAs be 100% open books. And separately, I don’t perfectly live up to my own stated ideals.
Like, I think an early comment like this would have been awesome (with apologies to Shakeel for using his comments as an example, and keeping in mind that this is me cobbling something together rather than something Shakeel endorses):
Note: The following is me expressing my own feelings and beliefs. Other people at CEA may feel differently or have different models, and I don’t mean to speak for them.
If this is true then I feel absolutely horrified. Supporting neo-Nazi groups is despicable, and I don’t think people who would do something like that ought to have any place in this community. [mention my priors about how reliable this sort of journalism tends to be] [mention my priors about FLI’s moral character, epistemics, and/or political views, or mention that I don’t know much about FLI and haven’t thought about them before] Given that, [rough description of how confident I feel that FLI would financially support a group that they knew had views like Holocaust-denialism].
But it’s hard to be confident about what happened based on a single news article, in advance of hearing FLI’s side of things; and there are many good reasons it can take time to craft a complete and accurate public statement that expresses the proper amount of empathy, properly weighs the PR and optics concerns, etc. So I commit to upvoting FLI’s official response when it releases one (even if I don’t like the response), to make it likelier that people see the follow-up and not just the initial claims.
I also want to encourage others to speak up if they disagree on any of this, including chiming in with views contrary to mine (which I’ll try to upvote at least enough to make it obviously socially accepted to express uncertainty or disagreement on this topic, while the facts are still coming in). But for myself, my immediate response to this is that I feel extremely upset.
For context: Coming on the heels of the Bostrom situation, I feel seriously concerned that some people in the EA community think of non-white people as inherently low-status, and I feel surprised and deeply hurt at the lack of empathy to non-white people many EAs have shown in their public comments. I feel profoundly disgusted at the thought of racist ideas and attitudes finding acceptance within EA, and though I’ll need to hear more about the case of FLI before I reach any confident conclusions about this case, my emotional reaction is one of anger at the possibility that FLI knowingly funded neo-Nazis, and a strong desire to tell EAs and non-EAs alike that this is not who we are.
The above hypothetical, not-Shakeel-authored comment meets a higher bar than what I think was required in this context — I think it’s fine for EAs to be a bit sloppier than that, even if they work at CEA — but hopefully it directionally points at what I mean when I say that there are epistemically good ways to express strong feelings. (Though I don’t think it’s easy, and I think there are hard tradeoffs here: demanding more rigor will always cause some number of comments to just not get written at all, which will cause some good ideas and perspectives to never be considered. In this case, I think a fair bit more rigor is worth the cost.)
The concreteness is helpful because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time consuming!) — especially so if you’ve got skin in the game, and across your life you often come up against things like this to respond to, and you keep having the pressure to force your feelings into a more acceptable format.
I reckon that crafting a message like that if I were upset about something could well take half a work day. And I’d have in my head all the being upset / being angry / being scared people on the forum would find me unreasonable / resentful that people might find me unreasonable / doubting myself the whole time. (Though I know that plausibly I’m in part just describing the human condition there. Trying to do things is hard...!)
Overall, I think I’m just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we’re willing to make.
(It makes me think it’d be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention etc., and then see where we’d draw the line of acceptable / not!)
There’s an angry top-level post about evaporative cooling of group beliefs in EA that I haven’t written yet, and won’t until it would no longer be an angry one. That might mean that the best moment has passed, which will make me sad for not being strong enough to have competently written it earlier. You could describe this as my having been chilled out of the discourse, but I would instead describe it as my politely waiting until I am able and ready to explain my concerns in a collected and rational manner.
I am doing this because I care about carefully articulating what I’m worried about, because I think it’s important that I communicate it clearly. I don’t want to cause people to feel ambushed and embattled; I don’t want to draw battle lines between me and the people who agree with me on 99% of everything. I don’t want to engender offense that could fester into real and lasting animosity, in the very same people who if approached collaboratively would pull with me to solve our mutual problem out of mutual respect and love for the people who do good.
I don’t want to contribute to the internal divisions growing in EA. To the extent that it is happening, we should all prefer to nip the involution in the bud—if one has ever been on team Everyone Who Logically Tries To Do The Most Good, there’s nowhere to go but down.
I think that if I wrote an angry top-level post, it would deserve to be downvoted into oblivion, though I’m not sure it would be.
I think on the margin I’m fine with posts that will start fights being chilled. Angry infighting and polarization are poisonous to what we’re trying to do.
I barely give a gosh-guldarn about FLI or Tegmark outside of their (now reduced) capacity to reduce existential risk.
Obviously I’d rather bad things not happen to people and not happen to good people in particular, but I don’t specifically know anyone from FLI and they are a feather on the scales next to the full set of strangers who I care about.
If Tegmark or FLI was wronged in the way your comments and others imply, you are correct and justified in your beliefs. But if the apology or the current facts do not make that status clear, there’s an object-level problem, and it’s bad to be angry that they were wronged, or to build further arguments on that belief.
I think it’s pretty obvious at this point that Tegmark and FLI were seriously wronged, but I barely care about any wrong done to them and am largely uninterested in the question of whether it was wildly disproportionate or merely sickeningly disproportionate.
I care about the consequences of what we’ve done to them.
I care about how, in order to protect themselves from this community, the FLI is
working hard to continue improving the structure and process of our grantmaking processes, including more internal and (in appropriate cases) external review. For starters, for organizations not already well-known to FLI or clearly unexceptionable (e.g. major universities), we will request and evaluate more information about the organization, its personnel, and its history before moving on to additional stages.
I care about how everyone who watched this happen will also realize the need to protect themselves from us by shuffling along and taking their own pulses. I care about the new but promising EAs who no one will take a chance on, the moonshots that won’t be funded even though they’d save lives in expectation, the good ideas with “bad optics” that won’t be acted on because of fear of backdraft on this forum. I care about the lives we can save if we don’t rush to conclusions, rush to anger, if we can give each other the benefit of the doubt for five freaking minutes and consider whether it’d make any sense whatsoever for the accusation du jour to be what it looks like.
If what happened was that Max Tegmark or FLI gets many dubious grant applications, and this particular application made it a few steps through FLI’s processes before it was caught, expo.se’s story and the negative response you object to on the EA forum would be bad, destructive and false. If this was what happened, it would absolutely deserve your disapproval and alarm.
I don’t think this is what happened. What we know is:
An established (though hostile) newspaper gave an account with actual quotes from Tegmark that contradict his apparent actions
The bespoke funding letter, signed by Tegmark, explicitly promising funding (“approved a grant”) conditional on registration of the charity
The hiring of the lawyer by Tegmark
When Tegmark edited his comment with more content, I was surprised by how positive a reception the edit got, given that it simply disavowed funding extremist groups.
I’m further surprised by the changing sentiment on the forum in reaction to this post, which simply presents an exonerating story. That story is directly contradicted by the signed statement in the letter itself.
Contrary to the top level post, it is false that it is standard practice to hand out signed declarations of financial support with wording like “approved a grant” when substantial vetting remains. Also, it’s extremely unusual for any non-profit to hire a lawyer to explain that a prospective grantee failed vetting in the application process. We also haven’t seen any evidence that FLI actually communicated a rejection. Expo.se seems to have a positive record — even accepting the aesthetic here that newspapers or journalists are untrustworthy, it’s costly for an outlet to outright lie or misrepresent facts.
There are other issues with Tegmark’s/FLI’s statements (e.g. deflections about the lack of direct financial benefit to his brother, not addressing the material support the letter provided for registration / the reasonable suspicion this was a ploy to produce the letter).
There’s much more that’s problematic underpinning this. If I had more time, I would start a long thread explaining how funding and family relationships could interact really badly in EA/longtermism for several reasons, and another about Tegmark’s insertions into geopolitical issues, which are clumsy at best.
Another comment said the EA forum reaction contributed to actual harm to Tegmark/FLI by amplifying the false narrative. A look at Twitter, and at how the story has continued and been picked up by Vice, suggests to me this isn’t true. Unfortunately, I think the opposite is true.
The concreteness is helpful because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time consuming!) — especially so if you’ve got skin in the game, and across your life you often come up against things like this to respond to, and you keep having the pressure to force your feelings into a more acceptable format.
Yep, I think it absolutely is.
It’s also not an accident that my version of the comment is a lot longer and covers more topics (and therefore would presumably have taken way longer for someone to write and edit in a way they personally endorsed).
I don’t think the minimally acceptable comment needed to be quite that long or cover quite that much ground (though I think it would be praiseworthy to do so), but directionally I’m indeed asking people to do a significantly harder thing. And I expect this to be especially hard in exactly the situations where it matters most.
I reckon that crafting a message like that if I were upset about something could well take half a work day. And I’d have in my head all the being upset / being angry / being scared people on the forum would find me unreasonable / resentful that people might find me unreasonable / doubting myself the whole time. (Though I know that plausibly I’m in part just describing the human condition there. Trying to do things is hard...!)
❤
Yeah, that sounds all too realistic!
I’m also imagining that while the author is trying to put together their comment, they might be tracking the fact that others have already rushed out their own replies (many of which probably suck from your perspective), and discussion is continuing, and the clock is ticking before the EA Forum buries this discussion entirely.
(I wonder if there’s a way to tweak how the EA Forum works so that there’s less incentive to go super fast?)
One reason I think it’s worth trying to put in this extra effort is that it produces a virtuous cycle. If I take a bit longer to draft a comment I can more fully stand by, then other people will feel less pressure to rush out their own thoughts prematurely. Slowing down the discussion a little, and adding a bit more light relative to heat, can have a positive effect on all the other discussion that happens.
I’ve mentioned NVC a few times, but I do think NVC is a good example of a thing that can help a lot at relatively little time+effort cost. Quick easy hacks are very good here, exactly because this can otherwise be such a time suck.
A related hack is to put your immediate emotional reaction inside a ‘this is my immediate emotional reaction’ frame, and then say a few words outside that frame. Like:
“Here’s my immediate emotional reaction to the OP:
[indented italicized text]
And here are my first-pass thoughts about physical reality, which are more neutral but might also need to be revised after I learn more or have more time to chew on things:
[indented italicized text]”
This is kinda similar to some stuff I put in my imaginary Shakeel comment above, but being heavy-handed about it might be a lot easier and faster than trying to make it feel like an organic whole.
And I think it has very similar effects to the stuff I was going for, where you get to express the feeling at all, but it’s in a container that makes it (a) a bit less likely that you’ll trigger others and thereby get into a heated Internet fight, and (b) a bit less likely that your initial emotional reaction will get mistaken (by you or others) for an endorsed carefully-wordsmithed description of your factual beliefs.
Overall, I think I’m just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we’re willing to make.
Yeah, this very much sounds to me like a topic where reasonable people can disagree a lot!
(It makes me think it’d be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention etc., and then see where we’d draw the line of acceptable / not!)
Ooooo, this sounds very fun. :) Especially if we can tangent off into science and philosophy debates when it turns out that there’s a specific underlying disagreement that explains why we feel differently about a particular case. 😛
To be clear, my criticism of the EA Forum’s initial response to the Expo article was never “it’s wrong to feel strong emotions in a context like this, and EAs should never publicly express strong emotions”, and it also wasn’t “it should have been obviously in advance to all EAs that this wasn’t a huge deal”.
If you thought I was saying either of those things, then I probably fucked up in how I expressed myself; sorry about that!
My criticism of the EA Forum’s response was:
I think that EAs made factual claims about the world that weren’t warranted by the evidence at the time. (Including claims about what FLI and Tegmark did, claims about their motives, and claims about how likely it is that there are good reasons for an org to want more than a few hours or days to draft a proper public response to an incident like this.) We were overconfident and following poor epistemic practices (and I’d claim this was noticeable at the time, as someone who downvoted lots of comments at the time).
Part of this is, I suspect, just some level of naiveté about the press, about the base rate of good orgs bungling something or other, etc. Hopefully this example will help people calibrate their priors slightly better.
I think that at least some EAs deliberately leaned into bad epistemic practices here, out of a sense that prematurely and overconfidently condemning FLI would help protect EA’s reputation.
The EA Forum sort of “trapped” FLI, by simultaneously demanding that FLI respond extremely quickly, but also demanding that the response be pretty exhaustive (“a full explanation of what exactly happened here”, in Shakeel’s words) and across-the-board excellent (zero factual errors, excellent displays of empathy, good PR both for reaching EAs and for satisfying the larger non-EA public, etc.). This sort of trap is not a good way to treat anyone, including non-EAs.
I think that many EAs’ words and upvote patterns at the time created a social space in which expressing uncertainty, moderation, or counter-narrative beliefs and evidence was strongly discouraged. Basically, we did the classic cancel-culture echo chamber thing, where groups update more and more extremely toward a negative view of X because they keep egging each other on with new negative opinions and data points, while the people with alternative views stay quiet for fear of the social repercussions.
The more general version of this phenomenon is discussed in the Death Spirals sequence, and in videos like ContraPoints’ Canceling: there’s a general tendency for many different kinds of social network to push themselves toward more and more negative (or more and more positive) views of a thing, when groups don’t exert lots of deliberate and unusual effort to encourage dissent, voice moderation, explicitly acknowledge alternative perspectives or counter-narrative points, etc.
I think this is a special risk for EA discussions of heavily politicized topics, so if we want to reliably navigate to true beliefs on such topics — many of which will be a lot messier than the Tegmark case — we’ll need to try to be unusually allowing of dissent, disagreement, “but what if X?”, etc. on topics that are more emotionally charged. (Hard as that sounds!)
And Jason accusing FLI of “stonewalling” one day after the article’s release.
Minor point: I read Jason talking about “stonewalling” as referring to FLI’s communications with Expo.se, not to the communications (or lack thereof) with EAs on this Forum.
I think it very likely that FLI would have made a statement here if there were an innocent or merely negligent explanation (e.g., the document is a forgery, or they got duped somehow into believing the grantee was related to FLI’s stated charitable purposes and not pro-Nazi). So, unless there is a satisfactory explanation forthcoming, the stonewalling strongly points to a more sinister one.
The context is “FLI would have made a statement here”, and the rest of the comment doesn’t make me think he’s talking about Expo either. And it’s in reply to Jack and Shakeel’s comments, which both seem to be about FLI saying something publicly, not about FLI’s interactions with Expo specifically.
And Jeff Kaufman replied to Jason to say “one thing to keep in mind is that organizations can take weirdly long times to make even super obvious public statements”, and Jason responded “Good point.” The whole context is very ‘wow why has FLI not made a public statement’, not ‘wow why did FLI stonewall Expo’.
Still, I appreciate you raising the possibility, since there now seems to be inertia in this comment section against the people who were criticizing FLI, and the same good processes that would have helped people avoid rushing to conclusions in that case, should also encourage some amount of curiosity, patience, and uncertainty in this case.
As should be clear from follow-up comment posted shortly after that one, I was referring to the nearly one month that had passed between Expo reaching out to FLI and the publication of the article. When Jeff responded by noting reasons an organization might delay in making a statement, I wrote in reply: “A decision was made to send a response—that sounds vaguely threatening/intimidating to my ears—through FLI’s lawyer within days.” [1] Expo did allege a number of facts that I think can be fairly characterized as stonewalling.
It’s plausible that Expo is wildly misrepresenting the substance of its communications with FLI, but the article seems fairly well-sourced to me. If Expo’s characterization of the correspondence was unfair, I would expect FLI’s initial January 13 statement to have disclosed significant facts that FLI told Expo but that Expo omitted from its article.
Of course, drawing adverse inferences because an organization hasn’t provided a response within two hours of a forum discussion starting would be ridiculous (over a holiday weekend in the US no less!). I wouldn’t have thought it was necessary to say that. However, based on the feedback I am getting here, it would have been much better for me to have said something like “I view FLI’s reported responses to Expo as stonewalling, and if FLI continues to offer the same responses . . . .” I apologize to FLI and everyone else here that my lack of clarity on that point contributed to a Forum environment on that morning that was too ready to settle on conclusions without giving FLI the opportunity to make a statement.
The line that sounded vaguely threatening/intimidating was “Any implication to the contrary would be false”—that sounds how I would expect a lawyer to vaguely allude to a possible defamation claim when they knew they would never file one. If you’ve already said X didn’t happen, what’s the point of that sentence?
The timeline (in PT time zone) seems to be:
Jan 13, 12:46am: Expo article published.
Jan 13, 4:20am: First mention of this on the EA Forum.
Jan 13, 6:46am: Shakeel Hashim (speaking for himself and not for CEA; +110 karma, +109 net agreement as of the 15th) writes, “If this is true it’s absolutely horrifying. FLI needs to give a full explanation of what exactly happened here and I don’t understand why they haven’t. If FLI did knowingly agree to give money to a neo-Nazi group, that’s despicable. I don’t think people who would do something like that ought to have any place in this community.”
Jan 13, 9:18pm: Shakeel follows up, repeating that he sees no reason why FLI wouldn’t have already made a public statement that it’s really weird that FLI hasn’t already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that’s why they haven’t spoken up.
Jan 14, 3:43am: You (titotal) comment, “If the letter is genuine (and they have never denied that it is), then someone at FLI is either grossly incompetent or malicious. They need to address this ASAP. ”
Jan 14, 8:16am: Jason comments (+15 karma, +13 net agreement as of the 15th): “I think it very likely that FLI would have made a statement here if there were an innocent or merely negligent explanation (e.g., the document is a forgery, or they got duped somehow into believing the grantee was related to FLI’s stated charitable purposes and not pro-Nazi). So, unless there is a satisfactory explanation forthcoming, the stonewalling strongly points to a more sinister one.”
Jan 14, 6:39pm: Tegmark’s initial response.
To be clear, this is Shakeel saying “I don’t understand why [FLI hasn’t given a full explanation]” six hours after the article came out / two hours after EAs started discussing it, at 9:46am Boston time. (FLI is based in Boston.) And Jason accusing FLI of “stonewalling” one day after the article’s release.
[Update 1⁄21: Jason says that he was actually thinking of FLI stonewalling Expo, not FLI stonewalling the EA Forum. That makes a big difference, though I wish Jason had been clear about this in his comments, since I think the aggregate effect of a bunch of comments like this on the EA Forum was to cause myself and others to think that Tegmark was taking a weirdly long time to reply to the article or to the EA Forum discussion.]
(And I’m only mentioning the explicit condemnation of FLI for not speaking up sooner here. The many highly upvoted and agreevoted EA Forum comments roasting FLI and making confident claims about what happened prior to Tegmark’s comment, with language like “the squalid character of Tegmark’s choices”, are obviously a further reason Tegmark / FLI might have wanted to rush out a response.)
The level of speed-in-replying demanded by EAs in this case (and endorsed by the larger EA Forum community, insofar as we strongly upvoted and up-agreevoted those comments) is frankly absurd, and I do think several apologies are owed here.
(Like, “respond within two hours of a 7am forum post” is wildly absurd even if we’re adopting a norm of expecting people to just blurt out their initial thoughts in real time, warts and errors and all. But it’s even more absurd if we’re demanding carefully crafted Public Statements that make no missteps and have no PR defects.)
Thanks for calling me out on this — I agree that I was too hasty to call for a response.
I’m glad that FLI has shared more information, and that they are rethinking their procedures as a result of this. This FAQ hasn’t completely alleviated my concerns about what happened here — I think it’s worrying that something like this can get to the stage it did without it being flagged (though again, I’m glad FLI seems to agree with this). And I also think that it would have been better if FLI had shared some more of the FAQ info with Expo too.
I do regret calling for FLI to speak up sooner, and I should have had more empathy for the situation they were in. I posted my comments not because I wanted to throw FLI under the bus for PR reasons, but because I was feeling upset; coming on the heels of the Bostrom situation I was worried that some people in the EA community were racist or at least not very sensitive about how discussions of race-related things can make people feel. At the time, I wanted to do my bit to make it clear — in particular to other non-white people who felt similarly to me — that EA isn’t racist. But I could and should have done that in a much better way. I’m sorry.
Hey Shakeel,
Thank you for making the apology, you have my approval for that! I also like your apology on the other thread – your words are hopeful for CEA going in a good direction.
Some feedback/reaction from me that I hope is helpful. In describing your motivation for the FLI comment, you say that it was not to throw FLI under the bus, but because of your fear that some people would think EA is racist, and you wanted to correct that. To me, that is a political motivation, not much different from a PR motivation.
To gesture at the difference (in my ontology) between PR/political motivations and truth-seeking motivations:
PR/political
you want people to believe a certain thing (even if it’s something you yourself sincerely believe), in this case, that EA is not racist
it’s about managing impressions and reputations (e.g. EA’s reputation as not racist)
Your initial comment (and also the Bostrom email statement) both struck me as “performative” in how they demonstrated really harsh and absolute condemnation (“absolutely horrifying”, “[no] place in this community”, “recklessly flawed and reprehensible” – granted that you said “if true”, but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking as the first thing you say on a topic[1] feels pretty out of place to me within EA, and certainly isn’t what I want in the EA I would design.
Extreme condemnation pattern matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen on a lot of the Internet, and it feels pretty toxic. It feels like it’s coming from a place of needing to demonstrate “I/we are not the bad thing”.
So even if your motivation was “do your bit to make it clear that EA isn’t racist”, that does strike me as still political/PR (even if you sincerely believe it).
(And I don’t mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to, and harm to your own reputation through association. Upsetness is real and caring about reputation can come from a really good place.)
I could write more on my feelings about PR/political stuff, because my view is not that it’s outright “bad/evil” or anything, more that caution is required.
Truth-seeking / info-propagation
Such comments focus more on sharing the author’s beliefs (not performing them)[2] and explaining how they reached them, e.g. “this is what I think happened, this is why I think that”, the inferences they’re making, and what makes sense. They tally uncertainty, and they leave open room for the chance they’re mistaken.
To me, the ideal spirit is “let me add my cognition to the collective so we all arrive at true beliefs” rather than “let me tug the collective beliefs in the direction I believe is correct” or “I need to ensure people believe the correct thing” (and especially not “I need people to believe the correct thing about me”).
My ideal CEA comms strategy would conceive of itself as having the goal of causing people to have accurate beliefs foremost, even when that makes EA look bad. That is the job – not to ensure EA looks good, but to ensure EA is perceived accurately, warts and all.
(And I’m interested in attracting to EA people who can appreciate that large movements have warts, who can tolerate weirdness in beliefs, and who get that movement leaders make mistakes. I want the people who see past that to the ideas and principles that make sense, and to the many people (including you, I’d wager) who are working very hard to make the world better.)
Encouragement
I don’t want to respond to a step in the right direction (a good apology) with something that feels negative, but it feels important to me that this distinction is deeply understood by CEA and EA in general, hence me writing it up for good measure. I hope this is helpful.
ETA: Happy to clarify more here or chat sometime.
I think that after things have been clarified and the picture is looking pretty clear, then indeed, such condemnation might be appropriate.
The LessWrong frontpage commenting guidelines are “aim to explain, not persuade”.
I like this a lot.
I’ll add that you can just say out loud “I wish other people believed X” or “I think the correct collective belief here would be X”, in addition to saying your personal belief Y.
(An example of a case where this might make sense: You think another person or group believes Z, and you think they rationally should believe X instead, given the evidence available to them. You yourself believe a more-extreme proposition Y, but you don’t think others have enough evidence to believe Y yet—e.g., your belief may be based on technical expertise or hard-won life-experience that the other parties don’t have.)
It’s possible to care about the group’s beliefs, and try to intervene on them, in a way that’s honest and clear about what you’re doing.
Speaking locally to this point: I don’t think I agree! My first-pass take is that if something’s horrible, reprehensible, flawed, etc., then I think EAs should just say so. That strikes me as the default truth-seeking approach.[1]
There might be second-order reasons to be more cautious about when and how you report extreme negative evaluations (e.g., to keep forum discussions from degenerating as people emotionally trigger each other), but I would want to explicitly flag that this is us locally departing from the naive truth-seeking approach (“just say what seems true to you”) in the hope that the end result will be more truth-seeky via people having an easier time keeping a cool head.
(Note that I’m explicitly responding to the ‘extreme language’ side of this, not the ‘was this to some extent performative or strategic?’ side of things.)
With the caveat that maybe evaluative judgments in general get in the way of truth-seeking, unless they’re “owned” NVC-style, because of common confusions like “thinking my own evaluations are mind-independent properties of the world”. But if we’re allowing mild evaluative judgments like “OK” or “fine”, then I think there’s less philosophical basis for banning more extreme judgments like “awesome” or “terrible”.
I think I agree with your clarification and was in fact conflating the mere act of speaking with strong emotion with speaking in a way that felt more like a display. Yeah, I do think it’s a departure from naive truth-seeking.
In practice, I think it is hard — both for the second-order reasons you give and for others. Perhaps an ideal is that people share strong emotion when they feel it, but in some kind of format/container/manner that doesn’t shut down discussion or get things heated. “NVC” style, perhaps, as you suggest.
Fwiw, I do think “has no place in the community” without being owned as “no place in my community” or “shouldn’t have a place in the community” is probably too high a simulacrum level by default (though this isn’t necessarily a criticism of Shakeel, I don’t remember what exactly his original comment said.)
Cool. :) I think we broadly agree, and I don’t feel confident about what the ideal way to do this is, though I’d be pretty sad and weirded out by a complete ban on expressing strong feelings in any form.
Really appreciated a bunch about this comment. I think it’s that it:
flags where it comes from clearly, both emotionally and cognitively
expresses a pragmatism around PR and appreciation for where it comes from that to my mind has been underplayed
Does a lot of “my ideal EA”, “I” language in a way that seems good for conversation
Adds good thoughts to the “what is politics” discussion
IMO, I think this is an area EA needs to be way better in. For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn’t matter that much for social reality, and EA should ideally be persuasive and control social reality.
I think the extent to which nuanced truth does not matter to “most of the world” is overstated.
I additionally think that EA should not be optimizing for deceiving people who belong to the class “most of the world”.
Both because it wouldn’t be useful if it worked (realistically most of the world has very little they are offering) and because it wouldn’t work.
I additionally think that trying to play nitwit political games at or around each hecking other would kill EA as a community and a movement dead, dead, dead.
Thanks for this Shakeel. This seems like a particularly rough time to be running comms for CEA. I’m grateful that in addition to having that on your plate, in your personal capacity you’re helping to make the community feel more supportive for non-white EAs feeling the alienation you point to. Also for doing that despite the emotional labour involved in that, which typically makes me shy away from internet discussions.
Responding swiftly to things seems helpful in service of that support. One of the risks from that is that you can end up taking a particular stance immediately and then it feeling hard to back down from that. But in fact you were able to respond swiftly, and then also quickly update and clearly apologise. Really appreciate your hard work!
(Flag that Shakeel and I both work for EV, though for different orgs under that umbrella)
I liked this apology.
Hey Shakeel, thanks for your apology and update (and I hope you’ve apologized to FLI). Even though call-out culture may be popular or expected in other contexts, it is not professional or appropriate for the Comms Head of CEA to initiate an interaction with an EA org by publicly putting them on blast and seemingly seconding what could be very damaging accusations (as well as inventing others by speculating about financial misconduct). Did you try to contact FLI before publicly commenting to get an idea of what happened (perhaps before they could prepare their statement)?
I appreciate that you apologized for this incident but I don’t think you understand how deep of a problem this behavior is. Get an anonymous account if you want to shoot from the hip. When you do it while your bio says “Head of Communications at CEA” it comes with a certain weight. Multiplying unfounded accusations, toward another EA org no less, is frankly acting in bad faith in a communications role.
For what it’s worth, this seems like the wrong way around to me. I don’t know exactly about the role and responsibilities of the “Head of Comm”, but in-general I would like people in EA to be more comfortable criticizing each other, and to feel less constrained to first air all criticism privately and resolve things behind closed doors.
I think the key thing that went wrong here was the absence of a concrete logical argument or probabilities about why the thing that was happening was actually quite bad, and also the time pressure, which made the context of the conversation much worse. Another big thing was also jumping to conclusions about FLI’s character in a way that felt like it was trying to apply direct political pressure instead of focusing on propagating accurate information.
Maybe there are special rules that EA comms people (or the CEA comms person in particular) should follow; I possibly shouldn’t weigh in on that, since I’m another EA comms person (working at MIRI) and might be biased.
My initial thought, however, is that it’s good for full-time EAs on the current margin to speak more from their personal views, and to do less “speaking for the organizations”. E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen.
My criticism of Shakeel’s post is very different from yours, and is about how truth-seeking the contents are and how well they incentivize truth-seeking from others, not about whether it’s inherently unprofessional for particular EAs to strongly criticize other EAs.
This seems ~strictly worse to me than making a “Shakeel-Personal” account separate from “Shakeel-CEA”. It might be useful to have personal takes indexed separately (though I’d guess this is just not necessary, and would add friction and discourage people from sharing their real takes, which I want them to do more). But regardless, I don’t think it’s better to add even more of a fog of anonymity to EA Forum discussions, if someone’s willing to just say their stuff under their own name.
I’m glad anonymity is an option, but the number of anons in these discussions already makes it hard to know how much I might be double-counting views, makes it hard to contextualize comments by knowing what world-view or expertise or experience they reflect, makes it hard to have sustained multi-month discussions with a specific person where we gradually converge on things, etc.
Idk I think it might be pretty hard to have a role like Head of Communications at CEA and then separately communicate your personal views about the same topics. Your position is rather unique for allowing that. I don’t see CEA becoming like MIRI in this respect. It comes across as though he’s saying this in his professional capacity when you hover over his account name and it says “Head of Communications at CEA”.
But the thing I think is most important about Shakeel’s job is that it means he should know better than to throw around and amplify allegations. A marked personal account would satisfy me, but I would still hold it to a higher standard re: gossip, since he’s supposed to know what’s appropriate. And I expect him to want EA orgs to succeed! I don’t think premature callouts for racism and demands to have already apologized are good-faith criticism to strengthen the community.
I mean, I want employees at EA orgs to try to make EA orgs succeed insofar as that does the most good, and try to make EA orgs fail insofar as that does the most good instead. Likewise, I want them to try to strengthen the EA community if their model says this is good, and to try to weaken it (or just ignore it) otherwise.
(Obviously, in each case I’d want them to be open and honest about what they’re trying to do; you can oppose an org you think is bad without doing anything unethical or deceptive.)
I’m not sure what I think CEA’s role should be in EA. I do feel more optimistic about EA succeeding if major EA orgs in general focus more on developing a model of the world and trying to do the most good under their idiosyncratic world-view, rather than trying to represent or reflect EA-at-large; and I feel more optimistic about EA if sending our best and brightest to work at EA orgs doesn’t mean that they have to do massively more self-censoring now.
Maybe CEA or CEA-comms is an exception, but I’m not sold yet. I do think it’s good to have high epistemic standards, but I see that as compatible with expressing personal feelings, criticizing other orgs, wanting specific EA orgs to fail, etc.
For what it’s worth, speaking as a non-comms person, I’m a big fan of Rob Bensinger style comms people. I like seeing him get into random twitter scraps with e/acc weirdos, or turning obnoxious memes into FAQs, or doing informal abstract-level research on the state of bioethics writing. I may be biased specifically because I like Rob’s contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI’s preferred public image was poured, but, look, I also just find that type of job description obviously offputting. In general I liked getting to know the EAs I’ve gotten to know, and I don’t know Shakeel that well, but I would like to get to know him better. I certainly am averse to the idea of wrist slapping him back into this empty vessel to the extent that we are blaming him for carelessness even when he specifies very clearly that he isn’t speaking for his organization. I do think that his statement was hasty, but I also think we need to be forgiving of EAs whose emotions are running a bit hot right now, especially when they circle back to self-correct afterwards.
I think this would also just be logically inconsistent; MIRI’s preferred public image is that we not be the sort of org that turns people into vessels of perfect public emptiness into which the disembodied spirit of our preferred public image is poured.
I don’t agree with MIRI on everything, but yes, this is one of the things I like most about it
“My initial thought, however, is that it’s good for full-time EAs on the current margin to speak more from their personal views, and to do less “speaking for the organizations”. E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen.”
This seems a little naive. “We were all getting millions of dollars from this guy with billions to come, he’s personal friends with all the movement leaders, but if we had had more open discussions we would not have taken the millions...really??”
also if you’re in line to get millions of $$$ from someone of course you are never going to share your candid thoughts about them publicly under your real name!
I didn’t say a specific prediction about what would have happened differently if EAs had discussed their misgivings about SBF more openly. What I’d say is that if you took a hundred SBF-like cases with lots of the variables randomized, outcomes will be a lot better if people discuss early serious warning signs and serious misgivings in public.
That will sometimes look like “turning down money”, sometimes like “more people poke around to learn more”, sometimes like “this person is less able to win others’ trust via their EA associations”, sometimes like “fewer EAs go work for this guy”.
Sometimes it won’t do anything at all, or will be actively counterproductive, because the world is complicated and messy. But I think talking about this stuff and voicing criticisms is the best general policy, if we’re picking a policy to apply across many different cases and not just using hindsight to ask what an omniscient person would do differently in the specific case of FTX.
I mean, Open Philanthropy is MIRI’s largest financial supporter, and
Makes sense to me! I appreciate knowing your perspective better, Shakeel. :)
On reflection, I think the thing I care about in situations like this is much more “mutual understanding of where people were coming from and where they’re at now”, whether or not anyone technically “apologizes”.
Apologizing is one way of communicating information about that (because it suggests we’re on the same page that there was a nontrivial foreseeable-in-advance fuck-up), but IMO a comment along those lines could be awesome without ever saying the words “I’m sorry”.
One of my concerns about “I’m sorry” is that I think some people think you can only owe apologies to Good Guys, not to Bad Guys. So if there’s a disagreement about who the Good Guys are, communities can get stuck arguing about whether X should apologize for Y, when it would be more productive to discuss upstream disagreements about facts and values.
I think some people are still uncertain about exactly how OK or bad FLI’s actions here were, but whether or not FLI fucked up badly here and whether or not FLI is bad as an org, I think the EA Forum’s response was bad given the evidence we had at the time. I want our culture to be such that it’s maximally easy for us to acknowledge that sort of thing and course-correct so we do better next time. And my intuition is that a sufficiently honest explanation of where you were coming from, that’s sufficiently curious about and open to understanding others’ perspectives, and sufficiently lacking in soldier-mindset-style defensiveness, can do even more than an apology to contribute to a healthy culture.
(In this case the apology is to FLI/Max, not to me, so it’s mostly none of my business. 😛 But since I called for “apologies” earlier, I wanted to consider the general question of whether that’s the thing that matters most.)
I find myself disliking this comment, and I think it’s mostly because it sounds like you 1) agree with many of the blunders Rob points out, yet 2) don’t seem to have learned anything from your mistake here? I don’t think many do or should blame you, and I’m personally concerned about repeated similar blunders on your part costing EA a great deal of outside reputation and internal trust.
Like, do you think that the issue was that you were responding in heat, and if so, will you make a future policy of not responding in heat in future similar situations?
I feel like there are deeper problems here that won’t be corrected by such a policy, and your lack of concreteness is an impediment to communicating such concerns about your approach to CEA comms (and is itself a repeated issue that won’t be corrected by such a policy).
FWIW, I don’t really want Shakeel to rush into making public promises about his future behavior right now, or big public statements about long-term changes to his policies and heuristics, unless he finds that useful for some reason. I appreciated hearing his thoughts, and would rather leave him space to chew on things and figure out what makes sense for himself. If he or CEA make the wrong updates by my lights, then I expect that to be visible in future CEA/Shakeel actions, and I can just wait and criticize those when they happen.
FTX collapsed on November 8th; all the key facts were known by the 10th; CEA put out their statement on November 12th. This is a totally reasonable timeframe to respond. I would have hoped that this experience would make CEA sympathetic to a fellow EA org (with far fewer resources than CEA) experiencing a media crisis, rather than so quick to condemn.
I’m also not convinced that a Head of Communications, working for an organization with a very restrictive media policy for employees, commenting on a matter of importance for that organization, can really be said to be operating in a personal capacity. Despite claims to the contrary, I think it’s pretty reasonable to interpret these as official CEA communications. Skill at a PR role is as much about what you do not say as what you do.
The eagerness with which people rushed to condemn is frankly a warning sign for involution. We have to stop it with the pointless infighting or it’s all we will end up doing.
Hi Rob!
Just a quick note to say I don’t think everything in your comment above is an entirely fair characterisation of the comments.
Two specific points (I haven’t checked everything you say above, so I don’t claim this is exhaustive):
I think you’re mischaracterising Shakeel’s 9.18pm response quite significantly. You paraphrased him as saying he sees no reason FLI wouldn’t have released a public statement, but that is, I think, neither the text nor the spirit of that comment. He specifically acknowledged he might be missing some reasons. He said he thinks the lack of response is “very weird”, which seems pretty different to me from “I see no reason for this”. Here’s some quoting, but it’s so short people can just read the comment :P “Hi Jack — reasonable question! When I wrote this post I just didn’t see what the legal problems might be for FLI… Jason’s comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense. I do still think it’s very weird that FLI hasn’t condemned Nya Dagbladet though”
You also left out that Shakeel did already apologise to Max Tegmark for, in his words, “jumping to conclusions” when Max explained a reason for the delay, which I think is relevant to the timeline you’re setting out here.
I think both those things are relevant to how reasonable some of these comments were and to what extent apologies might be owed.
Thanks for the response, Habiba. :)
The comments are short enough that I should probably just quote them here:
Comment 1: “The following is my personal opinion, not CEA’s. If this is true it’s absolutely horrifying. FLI needs to give a full explanation of what exactly happened here and I don’t understand why they haven’t. If FLI did knowingly agree to give money to a neo-Nazi group, that’s despicable. I don’t think people who would do something like that ought to have any place in this community.”
Comment 2: “Hi Jack — reasonable question! When I wrote this post I just didn’t see what the legal problems might be for FLI. With FTX, there are a ton of complications, most notably with regards to bankruptcy/clawbacks, and the fact that actual crimes were (seemingly) committed. This FLI situation, on face value, didn’t seem to have any similar complications — it seemed that something deeply immoral was done, but nothing more than that. Jason’s comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense. I do still think it’s very weird that FLI hasn’t condemned Nya Dagbladet though — CEA did, after all, make it very clear very quickly what our stance on SBF was.”
My summary of comment 2: “Shakeel follows up, repeating that he sees no reason why FLI wouldn’t have already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that’s why they haven’t spoken up.”
I think this is a fine summary of the gist of Shakeel’s comment — obviously there isn’t literally “no reason” here (that would contradict the very next part of my sentence, “and raises the possibility that FLI has maybe done sinister questionably-legal things and that’s why they haven’t spoken up”), but there’s no good reason Shakeel can see, and Shakeel reiterates that he thinks “it’s very weird that FLI hasn’t condemned Nya Dagbladet”.
The main thing I was trying to point at is that Shakeel’s first comment says “I don’t understand” why FLI hasn’t given “a full explanation of exactly what happened here” (the implication being that there’s something really weird and suspicious about FLI not having already released a public statement), and Shakeel’s second comment doubles down on that basic perspective (it’s still weird and suspicious / he can’t think of an innocent explanation, though he acknowledges a non-innocent explanation).
That said, I think this is a great context to be a stickler about saying everything precisely (rather than relying on “gists”), and I’m generally a fan of the ethos that cares about precision and literalness. 🙂 Being completely literal, “he sees no reason” is flatly false (at least if ‘seeing no reason’ means ‘you haven’t thought of a remotely plausible motivation that might have caused this behavior’).
I’ll edit the comment to say “repeating that it’s really weird that FLI hasn’t already made a public statement”, since that’s closer to being a specific sentiment he expresses in both comments.
I think this is a different thing, but it’s useful context anyway, so thanks for adding it. :)
Agree. Should have added those to my own comment, but felt like I’d already spent too much time on it!
I also spent too much time on comments :P
I upvoted this, but disagreed. I think the timeline would be better if it included:
November 2022: FLI inform Nya Dagbladet Foundation (NDF) that they will not be funding them
15 December 2022: FLI learn of media interest in the story
I therefore don’t think it’s “absurd” to have expected FLI to repudiate NDF sooner. You could argue that apologising for their mistake before there was any media interest would have done more harm than good by drawing attention to it (and by association, to NDF), but once they became aware of the media attention, I think they should have issued something more like their current statement.
I also agreed with the thrust of titotal’s comment that their first statement was woefully inadequate (it was more like “nothing to see here” than “oh damn, we seriously considered supporting an odious publication and we’re sorry”). I don’t think lack of time gets them off the hook here, given they should have expected Expo to publish at some point.
I don’t think anyone owes an apology for expecting FLI to do better than this.
(Note: I appreciate Max Tegmark was dealing with a personal tragedy (for which, my condolences) at the time of it becoming ‘a thing’ on the EA Forum, so I of course wouldn’t expect him to be making quick-but-considered replies to everything posted on here at that time. But I think there’s a difference between that and the speed of the proper statement.)
***
FWIW I also had a different interpretation of Shakeel’s 9:18pm comment than what you write here:
“Jan 13, 9:18pm: Shakeel follows up, repeating that he sees no reason why FLI wouldn’t have already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that’s why they haven’t spoken up.”
Shakeel said “Jason’s comment has made me realise there might be something else going on here, though; if that is the case then that would make the silence make more sense.” → this seemed to me like Shakeel was trying to be charitable, and to understand the reasons FLI hadn’t replied quicker.
Only a subtle difference, but wanted to point that out.
Yeah, if the early EA Forum comments had explicitly said “FLI should have said something public about this as soon as they discovered that NDF was bad”, “FLI should have said something public about this as soon as Expo contacted them”, or “FLI should have been way more responsive to Expo’s inquiries”—and if we’d generally expressed a lot more uncertainty and been more measured in what we said in the first few days—then I might still have disagreed, but I wouldn’t have seen this as an embarrassingly bad response in the same way.
I, as a casual reader who wasn’t trying to carefully track all the timestamps, had no idea when I first skimmed these threads on Jan. 13-14 that the article had only come out a few hours earlier, and I didn’t track timestamps carefully enough to register just how fast the EA Forum went from “a top-level post exists about this at all” to “wow, FLI is stonewalling us” and “wow, there must be something really sinister here given that FLI still hasn’t responded”. I feel like I was misled by these comments, because I just took for granted (to some degree) that the people writing these highly upvoted comments were probably not saying something transparently silly.
If a commenter like Jason thought that FLI was “stonewalling” because they didn’t release a public statement about this in December, then it’s important to be explicit about that, so casual readers don’t come away from the comment section thinking that FLI is displaying some amazing level of unresponsiveness to the forum post or to the news article.
This is less obvious to me, if they didn’t owe a public response before Expo reached out to them. A lot of press inquiries don’t end up turning into articles, and if the goal is to respond to press coverage, it’s often better to wait and see what’s in the actual article, since you might end up surprised about the article’s contents.
“Do better than this”, notably, is switching out concrete actions for a much more general question, one that’s closer to “What’s the correct overall level of affect we should have about FLI right now?”.
If we’re going to have “apologize when you mess up enough” norms, I think they should be more about evaluating local process, and less about evaluating the overall character of the person you’re apologizing to. (Or even the character-in-this-particular-case, since it’s possible to owe someone an apology even if that person owes an apology too.) “Did I fuck-up when I did X?” should be a referendum on whether the local action was OK, not a referendum on the people you fucked up at.
More thoughts about apology norms in my comment here.
Thanks for this comment and timeline, I found it very useful.
I agree that “respond within two hours of a 7am forum post” seems like an unreasonable standard, and I also agree that some folks rushed too quickly to condemn FLI or make assumptions about Tegmark’s character/choices.
I do want to illustrate a related point:
When the Bostrom news hit, many folks jumped to defend Bostrom’s apology as reasonable because it consisted of statements that Bostrom believed to be true, arguing that this reflects truth-seeking and good epistemics, and that this is something the forum and community should uphold.
But if I look at Jason’s comment, “So, unless there is a satisfactory explanation forthcoming, the stonewalling strongly points to a more sinister one.”
There is actually nothing technically untrue about this statement? There WAS a satisfactory explanation that eventuated.
Similarly, if I look at Shakeel’s comment, the condemnation is conditional on if the events happened: “If this is true it’s absolutely horrifying”, “If FLI did knowingly agree to give money to a neo-Nazi group, that’s despicable”, “I don’t think people who would do something like that ought to have any place in this community”.
The sentence about FLI speaking up sooner reflects Shakeel expressing his desire that FLI give a full explanation, and his confusion about why this has not yet happened; but reading the text of that statement, there’s actually no “explicit condemnation of FLI for not speaking up sooner”.
Now, I raise these points not because I’m interested in defending Shakeel or Jason; the subtext does matter, and it’s somewhat reasonable to read those statements as explicit condemnation of FLI for not speaking up sooner, and to push back accordingly.
But I’m just noting that there are a lot of upvotes on Rob’s comment, and quite a few voices (I think rightfully!) saying that some commenters were too quick to jump to conclusions about Tegmark or FLI. But I don’t see any commenters defending Jason or Shakeel’s statements with the “truth-seeking” and “good epistemics” argument that was being used to defend Bostrom’s apology.
Do you have any thoughts on what might explain this seemingly inconsistent application of these standards? It might not even be accurately characterized as an inconsistency; I’m likely missing something here.
I expect this comment will just get reflexively downvoted given how tribal the commentary on the forum is these days, but I am curious about what drives this perceived difference, especially from those who self-identify as high decouplers, truth-seekers, or those who place themselves in the “prioritize epistemics” camp.
“Technically not saying anything untrue” isn’t the same as “exhibiting a truth-seeking attitude.”
I’d say a truth-seeking attitude would have been more like “Before we condemn FLI, let’s make sure we understand their perspective and can assess what really happened.” Perhaps accompanied by “I agree we should condemn them harshly if the reporting is roughly as it looks right now.” Similar statement, different emphasis. Shakeel’s comment did appropriate hedging, but its main content was sharing a (hedged) judgment/condemnation.
Edit: I still upvoted your comment for highlighting that Shakeel (and Jason) hedged their comments. I think that’s mostly fine! In hindsight, though, I agree with the sentiment that the community discussion was tending towards judgment a bit too quickly.
Thanks for the engagement Lukas, have upvoted.
Yeah, I agree! I think my main point is to illustrate that the impression you got of the community discussion “tending towards judgement a bit too quickly” is pretty reasonable despite the technically true statements that they made, because it comes from reading the subtext, including what they didn’t say or chose not to focus on, rather than the literal text alone. That difference felt to me like a major crux between those who thought Bostrom’s apology was largely terrible vs. those who thought it was largely acceptable.
Likewise, I also agree with this! I think what I’m most interested in here is like, what you (or others) think separates the two in general, because my guess is those who were upset with Bostrom’s apology would also agree with this statement. I think the crux is more likely that they would also think this statement applies to Bostrom’s comments (i.e. they were closer to “technically not saying anything untrue”, rather than “exhibiting a truth-seeking attitude”), while those who disagree would think “Bostrom is actually exhibiting a truth-seeking attitude”.
For example, if I apply your statement to Bostrom’s apology:
“I’d say a truth-seeking attitude would have been more like: “Before I make a comment that’s strongly suggestive of a genetic difference between races, or easily misinterpreted to be a racist dogwhistle, let’s make sure I understand their perspective and can assess how this apology might actually be interpreted”, perhaps accompanied by “I think I should make true statements if I can make sure they will be interpreted to mean what my actual views are, and I know they are the true statements that are most relevant and important for the people I am apologizing to.”
Similar statement, different emphasis. Bostrom’s comment was “technically true”, but its main content was less about an apology and more about raising questions around a genetic component of intelligence, expression of support for some definition of eugenics and some usage of provocative communication.”
I think my point is less that “Shakeel and Jason’s comments are fine because they were hedged”, and less about pointing out the empirical fact that they were hedged, and more that “Shakeel and Jason’s comments were not fine just because they contained true statements, but this standard should be applied similarly to Bostrom’s apology, which was also not fine just because it contained true statements”.
More speculative:
Like, part of me gets the impression this is in part modulated by a dislike of the typical SJW cancel culture (which I can resonate with), and therefore the truth-seeking defence is applied more strongly against condemnation of any kind, as opposed to just truth-seeking for truth’s sake. But I’m not sure that this, if true, is actually optimizing for truth, nor that it’s necessarily the best approach on consequentialist grounds, unless there’s good reason to think that a heuristic to err on the side of anti-condemnation in every situation is preferable to evaluating each on a case-by-case basis.
That makes sense – I get why you feel like there are double standards.
I don’t agree that there necessarily are.
Regarding Bostrom’s apology, I guess you could say that it’s part of “truth-seeking” to dive into any mistakes you might have made and acknowledge everything there is to acknowledge. (Whether we call it “truth-seeking” or not, that’s certainly how apologies should be, in an ideal world.) On this point, Bostrom’s apology was clearly suboptimal. It didn’t acknowledge that there was more bad stuff to the initial email than just the racial slur.
Namely, in my view, it’s not really defensible to say “technically true” things without some qualifying context, if those true things are easily interpreted in a misleadingly-negative or harmful-belief-promoting way on their own or even interpreted as, as you say, “racist dogwhistles.” (I think that phrase is sometimes thrown around so lightly that it seems a bit hysterical, but it does seem appropriate for the specific example of the sentence Bostrom claimed he “likes.”)
Take for example a newspaper reporting on a person with autism who committed a school shooting. Given the widespread stigma against autism, it would be inappropriate to imply that autism is linked to these types of crimes without some sort of very careful discussion that doesn’t make readers prejudiced against people on the spectrum. (I don’t actually know if there’s any such link.)
What I considered bad about Bostrom’s apology was that he didn’t say more about why his entire stance on “controversial communication” was a bad take.
Given all of the above, why did I say that I found Bostrom’s apology “reasonable”?
“Reasonable” is a lower bar than “good.”
Context matters: The initial email was never intended to be seen by anyone who wasn’t in that early group of transhumanists. In a small, closed group, communication functions very differently. For instance, among EA friends, I’ve recently (after the FTX situation) made a joke about how we should run a scam to make money. The joke works because my friends have enough context to know I don’t mean it. I wouldn’t make the same joke in a group where it isn’t common knowledge that I’m joking. Similarly, while I don’t know much about the transhumanist reading list, it’s probably safe to say that “we’re all high-decouplers and care about all of humanity” was common knowledge in that group. Given that context, it’s sort of defensible to think that there’s not that much wrong with the initial email (apart from cringiness) other than the use of the racial slur. Bostrom did apologize for the latter (even viscerally, and unambiguously).
I thought there was some ambiguity in the apology about whether he was just apologizing for the racial slur, or whether he also meant just the general email when he described how he hated re-reading it. When I said that the apology was “reasonable,” I interpreted him to mean the general email. I agree he could have made this more clear.
In any case, that’s one way to interpret “truth-seeking” – trying to get to the bottom of any mistakes that were made when apologizing.
That said, I think almost all the mentions of “truth-seeking is important” in the Bostrom discussion were about something else.
There was a faction of people who thought that people should be socially shunned for holding specific views on the underlying causes of group differences. Another faction was like “it should be okay to say ‘I don’t know’ if you actually don’t know.”
While a few people criticized Bostrom’s apology for reasons similar to the ones I mentioned above (which I obviously think is reasonable!), my impression is that the people who were most critical of it did so for the “social shunning for not completely renouncing a specific view” reason.
For what it’s worth, I agree that emphasis on truth-seeking can go too far. While I appreciated this part of EA culture in the discussion around Bostrom, I’ve several times found myself accusing individual rationalists of fetishizing “truth-seeking.” :)
So, I certainly don’t disagree with your impression that there can be biases on both sides.
I found myself agreeing with a lot of this. Thanks for your nuanced take on truth-seeking ideals, I appreciated the conversation!
I wanted to say a bit about the “vibe” / thrust of this comment when it comes to community discourse norms...
(This is somewhat informed by your comments on twitter / facebook, which themselves are phrased more strongly than this and are less specific in scope.)
I suspect you and I agree that we should generally encourage posters to be charitable in their takes and reasonable in their requests—and it would be bad overall for discussions in general were this not the case. Being angry on the internet is often not at all constructive!
However, I think that being angry or upset where it seems like an organisation has done something egregious is very often an appropriate emotional response to feel. I think that the ideal amount of expressing that anger / upset that community norms endorse is non-zero! And yes when people are hurt they may go somewhat too far in what they request / suggest / speculate. But again the optimal amount of “too strong requests” is non-zero.
I think that expressing those feelings of hurt / anger / upset explicitly (or implicitly expressing them through the kinds of requests one is making) has many uses, and there are costs to restricting it too much.
Some uses to expressing it:
Conveying the sheer seriousness or importance of the question to the poster. That can be useful information for the organisation under scrutiny about whether / how much people think they messed up (which itself is information about whether / how much they actually messed up). It will lead to better outcomes if organisations in fact get the information that some people are deeply hurt by their actions. If the people who are deeply hurt cannot / do not express this, the organisation will not know.
Individuals within a community expressing values they hold dear (and which of those are strong enough to provoke the strongest emotional reaction) is part of how a community develops and maintains norms about behaviour that is / isn’t acceptable.
Some costs to restricting it:
People who have stronger emotional reactions are often closer to the issue. It is very hard when you feel really hurt by something to have to reformulate that in terms acceptable to people who are not at all affected by the thing.
If people who are really hurt by something get the impression from community norms that expressing their hurt is not welcome they may well not feel welcome in the community at all. This seems extra bad if you care about diversity in the community and certain issues affect certain groups more. (E.g. antisemitism, racism, sexism etc.)
If people who are really hurt by something do not post, the discourse will be selected towards people who aren’t hurt / don’t care as strongly. That will systematically skew the discussion towards a specific set of reactions and lead you further away from understanding what people across the community actually think about something.
I think that approaching online discussions on difficult topics is really really hard! I do not think I know what the ideal balance is. I have almost never before participated in such discussions and I’m personally finding my feet here. I am not arguing in favour of carte blanche for people making unreasonable angry demands.
But I want to push back pretty strongly against the idea that people should never be able to post hurt / upset comments, or that the comments above were very badly wrong. (Or that they warrant the things you said on facebook / twitter about EA discourse norms.)
P.S. I’m wondering whether you would agree with me for all the above if the organisational behaviour was egregious enough by your / anyone’s lights? [Insert thought experiment here about shockingly beyond the pale behaviour by an organisation that people on the forum express angry comments about]. If yes, then we just disagree on where / how to draw the line not that there is a line at all. If not, then I think we have a more fundamental disagreement about how humans can be expected to communicate online.
I see “clearly expressing anger” and “posting when angry” as quite different things.
I endorse the former, but I rarely endorse the latter, especially in contexts like the EA Forum.
Let’s distinguish different stages of anger: “hot” anger, the initial surge when you’re not thinking straight, and “cool” anger, when the feeling remains but you’re able to think clearly again.
We could think of “hot” and “cold” anger as a spectrum.
Most people experience hot anger from time to time. But I think EA figures—especially senior figures—should model a norm of only posting on the EA Forum when fairly cool.
My impression is that, during the Bostrom and FLI incidents, several people posted with considerably more hot anger than I would endorse. In these cases, I think the mistake has been quite harmful, and may warrant public and private apologies.
As a positive example: Peter Hurford’s blog post, which he described as “angry”, showed a level of reasonableness and clarity that made it, in my mind, “above the bar” to publish. The text suggests a relatively cool anger. I disagree with some parts of the post, but I am glad he published it. At the meta-level, my impression is that Peter was well within the range of “appropriate states of mind” for a leadership figure to publish a message like that in public.
I’m not sure how I feel about this proposed norm. I probably think that senior EA figures should at least sometimes post when they’re feeling some version of “hot anger”, as opposed to literally never doing this.
The way you defined “cool vs. hot” here is that it’s about thinking straight vs. not thinking straight. Under that framing, I agree that you shouldn’t post comments when you have reason to suspect you might temporarily not be thinking straight. (Or you should find a way to flag this concern in the comment itself, e.g., with an epistemic status disclaimer or NVC-style language.)
But you also call these “different stages of anger”, which suggests a temporal interpretation: hot anger comes first, followed by cool. And the use of the words “hot” and “cool”, to my ear, also suggests something about the character of the feeling itself.
I feel comfortable suggesting that EAs self-censor under the “thinking straight?” interpretation. But if you’re feeling really intense emotion and it’s very close in time to the triggering event, but you think you’re nonetheless thinking straight — or you think you can add appropriate caveats and context so people can correct for the ways in which you’re not thinking straight — then I’m a lot more wary about adding a strong “don’t say what’s on your mind” norm here.
I think “charity” isn’t quite the right framing here, but I think we should encourage posters to really try to understand each other; to ask themselves “what does this other person think the physical world is like, and what evidence do I have that it’s not like that?”; to not exaggerate how negative their takes are; and to be mindful of biases and social dynamics that often cause people to have unrealistically negative beliefs about The Other Side.
I 100% agree! I happened to write something similar here just before reading your comment. :)
From my perspective, the goal is more “have accurate models” and “be honest about what your models are”. In interpersonal contexts, the gold standard is often that you’re able to pass someone else’s ideological Turing Test.
Sometimes, your model really is that something is terrible! In cases like that, I think we should be pretty cautious about discouraging people from sharing what they really think about the terrible thing. (Like, I think “be civil all the time”, “don’t rock the boat”, “be very cautious about criticizing other EAs” is one of the main processes that got in the way of people like me hearing earlier about SBF’s bad track record — I think EAs in the know kept waaay too quiet about this information.)
It’s true that there are real costs to encouraging EAs to routinely speak up about their criticisms — it can make the space feel more negative and aversive to a lot of people, which I’d expect to contribute to burnout and to some people feeling less comfortable honestly expressing their thoughts and feelings.
I don’t know what the best solution is (though I think that tech like NVC can help a whole lot), but I’d be very surprised if the best solution involved EAs never expressing actually intense feelings in any format, no matter how much the context cries for it.
Sometimes shit’s actually just fucked up, and I’d rather a community where people can say as much (even if not everyone agrees) than one where we’re all performatively friendly and smiley all the time.
Seems right. Digging a bit deeper, I suspect we’d disagree about what the right tradeoff to make is in some cases, based on different background beliefs about the world and about how to do the most good.
Like, we can hopefully agree that it’s sometimes OK to pick the “talk in a way that hurts some people and thereby makes those people less likely to engage with EA” side of the tradeoff. An example of this is that some people find discussion of food or veg*nism triggering (e.g., because they have an ED).
We could choose to hide discussion of animal products from the EA Forum in order to be more inclusive to those people; but given the importance of this topic to a lot of what EA does today, it seems more reasonable to just accept that we’re going to exclude a few people (at least from spaces like the EA Forum and EA Global, where all the different cause areas are rubbing elbows and it’s important to keep the friction on starting animal-related topics very low).
If we agree that it’s ever OK to pick the “talk in way X even though it hurts some people” side of the tradeoff, then I think we have enough common ground that the remaining disagreements can be resolved (given enough time) by going back and forth about what sort of EA community we think has the best chance of helping the world (and about how questions of interpersonal ethics, integrity, etc. bear on what we should do in practice).
Oh, did I say something wrong? I was imagining that all the stuff I said above is compatible with what I’ve said on social media. I’d be curious which things you disagree with that I said elsewhere, since that might point at other background disagreements I’m not tracking.
Just a quick note to say thanks for such a thoughtful response! <3
I think you’re doing a great job here modelling discourse norms and I appreciate the substance of your points!
Ngl I was kinda trepidatious opening the forum… but the reasonableness of your reply and warmth of your tone is legit making me smile! (It probably doesn’t hurt that happily we agree more than I realised. :P )
I may well write a little more substantial response at some point but will likely take a weekend break :)
P.S. Real quick re social media… Things I was thinking about were phrases from fb like “EAs f’d up” and the “fairly shameful initial response”, which I wondered were stronger than what you were expressing here, but it’s probably just you saying the same thing. And in this twitter thread you talk about the “cancel mob”—but I think you’re talking there about a general case. You don’t have to justify yourself on those; I’m happy to read it all via the lens of the comments you’ve written on this post.
Aw, that makes me really happy to hear. I’m surprised that it made such a positive difference, and I update that I should do it more!
(The warmth part, not the agreement part. I can’t really control the agreement part, if we disagree then we’re just fucked. 🙃😛)
Re the social media things: yeah, I stand by that stuff, though I basically always expect reasonable people to disagree a lot about exactly how big a fuck-up is, since natural language is so imprecise and there are so many background variables we could disagree on.
I feel a bit weird about the fact that I use such a different tone in different venues, but I think I like this practice for how my brain works, and plan to keep doing it. I definitely talk differently with different friends, and in private vs. public, so I like the idea of making this fact about me relatively obvious in public too.
I don’t want to have such a perfect and consistent public mask/persona that people think my public self exactly matches my private self, since then they might come away deceived about how much to trust (for example) that my tone in a tweet exactly matches the emotions I was feeling when I wrote it.
I want to be honest in my private and public communications, but (even more than that) I want to be meta-honest, in the sense of trying to make it easy for people to model what kind of person I am and what kinds of things I tend to be more candid about, what it might mean if I steer clear of a topic, etc.
Trying too hard to look like I’m an open book who always says what’s on his mind, never self-censors in order to look more polite on the EA Forum, etc. would systematically cause people to have falser beliefs about the delta between “what Rob B said” and “what Rob B is really thinking and feeling right now”. And while I don’t think I owe everyone a full print-out of my stream of consciousness, I do sorta feel like I owe it to people to not deliberately make it sound like I’m more transparent than I am.
This is maybe more of a problem for me than for other people: I’m constantly going on about what a big fan of candor and blurting I am, so I think there’s more risk of people thinking I’m a 100% open book, compared to the risk a typical EA faces.
So, to be clear: I don’t advocate that EAs be 100% open books. And separately, I don’t perfectly live up to my own stated ideals.
Like, I think an early comment like this would have been awesome (with apologies to Shakeel for using his comments as an example, and keeping in mind that this is me cobbling something together rather than something Shakeel endorses):
Note: The following is me expressing my own feelings and beliefs. Other people at CEA may feel differently or have different models, and I don’t mean to speak for them.
If this is true then I feel absolutely horrified. Supporting neo-Nazi groups is despicable, and I don’t think people who would do something like that ought to have any place in this community. [mention my priors about how reliable this sort of journalism tends to be] [mention my priors about FLI’s moral character, epistemics, and/or political views, or mention that I don’t know much about FLI and haven’t thought about them before] Given that, [rough description of how confident I feel that FLI would financially support a group that they knew had views like Holocaust-denialism].
But it’s hard to be confident about what happened based on a single news article, in advance of hearing FLI’s side of things; and there are many good reasons it can take time to craft a complete and accurate public statement that expresses the proper amount of empathy, properly weighs the PR and optics concerns, etc. So I commit to upvoting FLI’s official response when it releases one (even if I don’t like the response), to make it likelier that people see the follow-up and not just the initial claims.
I also want to encourage others to speak up if they disagree on any of this, including chiming in with views contrary to mine (which I’ll try to upvote at least enough to make it obviously socially accepted to express uncertainty or disagreement on this topic, while the facts are still coming in). But for myself, my immediate response to this is that I feel extremely upset.
For context: Coming on the heels of the Bostrom situation, I feel seriously concerned that some people in the EA community think of non-white people as inherently low-status, and I feel surprised and deeply hurt at the lack of empathy to non-white people many EAs have shown in their public comments. I feel profoundly disgusted at the thought of racist ideas and attitudes finding acceptance within EA, and though I’ll need to hear more about the case of FLI before I reach any confident conclusions about this case, my emotional reaction is one of anger at the possibility that FLI knowingly funded neo-Nazis, and a strong desire to tell EAs and non-EAs alike that this is not who we are.
The above hypothetical, not-Shakeel-authored comment meets a higher bar than what I think was required in this context — I think it’s fine for EAs to be a bit sloppier than that, even if they work at CEA — but hopefully it directionally points at what I mean when I say that there are epistemically good ways to express strong feelings. (Though I don’t think it’s easy, and I think there are hard tradeoffs here: demanding more rigor will always cause some number of comments to just not get written at all, which will cause some good ideas and perspectives to never be considered. In this case, I think a fair bit more rigor is worth the cost.)
Haha this is a great hypothetical comment!
The concreteness is helpful, because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time consuming!), especially so if you’ve got skin in the game, if across your life you keep coming up against things like this to respond to, and if you keep feeling the pressure to force your feelings into a more acceptable format.
I reckon that crafting a message like that, if I were upset about something, could well take half a work day. And the whole time I’d have in my head all the being upset / being angry / being scared that people on the forum would find me unreasonable / being resentful that people might find me unreasonable / doubting myself. (Though I know that plausibly I’m in part just describing the human condition there. Trying to do things is hard...!)
Overall, I think I’m just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we’re willing to make.
(It makes me think it’d be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention etc., and then see where we’d draw the line of acceptable / not!)
There’s an angry top-level post about evaporative cooling of group beliefs in EA that I haven’t written yet, and won’t until it would no longer be an angry one. That might mean that the best moment has passed, which will make me sad for not being strong enough to have competently written it earlier. You could describe this as my having been chilled out of the discourse, but I would instead describe it as my politely waiting until I am able and ready to explain my concerns in a collected and rational manner.
I am doing this because I care about carefully articulating what I’m worried about, because I think it’s important that I communicate it clearly. I don’t want to cause people to feel ambushed and embattled; I don’t want to draw battle lines between me and the people who agree with me on 99% of everything. I don’t want to engender offense that could fester into real and lasting animosity, in the very same people who if approached collaboratively would pull with me to solve our mutual problem out of mutual respect and love for the people who do good.
I don’t want to contribute to the internal divisions growing in EA. To the extent that it is happening, we should all prefer to nip the involution in the bud—if one has ever been on team Everyone Who Logically Tries To Do The Most Good, there’s nowhere to go but down.
I think that if I wrote an angry top-level post, it would deserve to be downvoted into oblivion, though I’m not sure it would be.
I think on the margin I’m fine with posts that will start fights being chilled. Angry infighting and polarization are poisonous to what we’re trying to do.
I think you are upset because FLI or Tegmark was wronged. Would you consider hearing another perspective about this?
I barely give a gosh-guldarn about FLI or Tegmark outside of their (now reduced) capacity to reduce existential risk.
Obviously I’d rather bad things not happen to people and not happen to good people in particular, but I don’t specifically know anyone from FLI and they are a feather on the scales next to the full set of strangers who I care about.
If Tegmark or FLI was wronged in the way your comments and others imply, you are correct and justified in your beliefs. But if the apology or the current facts do not make that status clear, there’s an object-level problem, and it’s bad to be angry that they were wronged, or to build further arguments on that belief.
I think it’s pretty obvious at this point that Tegmark and FLI were seriously wronged, but I barely care about any wrong done to them and am largely uninterested in the question of whether it was wildly disproportionate or merely sickeningly disproportionate.
I care about the consequences of what we’ve done to them.
I care about how, in order to protect themselves from this community, the FLI is
I care about how everyone who watched this happen will also realize the need to protect themselves from us by shuffling along and taking their own pulses. I care about the new but promising EAs who no one will take a chance on, the moonshots that won’t be funded even though they’d save lives in expectation, the good ideas with “bad optics” that won’t be acted on because of fear of backlash on this forum. I care about the lives we can save if we don’t rush to conclusions, rush to anger, if we can give each other the benefit of the doubt for five freaking minutes and consider whether it’d make any sense whatsoever for the accusation du jour to be what it looks like.
Getting to one object level issue:
If what happened was that Max Tegmark or FLI gets many dubious grant applications, and this particular application made it a few steps through FLI’s processes before it was caught, expo.se’s story and the negative response you object to on the EA forum would be bad, destructive and false. If this was what happened, it would absolutely deserve your disapproval and alarm.
I don’t think this is true. What we know is:
An established (though hostile) newspaper gave an account with actual quotes from Tegmark that contradict his apparent actions
The bespoke funding letter, signed by Tegmark, explicitly promising funding (“approved a grant”) conditional on registration of the charity
The hiring of the lawyer by Tegmark
When Tegmark edited his comment with more content, I was surprised by how positive a reception this edit got, given that it simply disavowed funding extremist groups.
I’m further surprised by the reaction and changing sentiment on the forum in reaction to this post, which simply presents an exonerating story. This story is directly contradicted by the signed statement in the letter itself.
Contrary to the top level post, it is false that it is standard practice to hand out signed declarations of financial support, with wording like “approved a grant” if substantial vetting remains. Also, it’s extremely unusual for any non-profit to hire a lawyer to explain that a prospective grantee failed vetting in the application process. We also haven’t seen any evidence that FLI actually communicated a rejection. Expo.se seems to have a positive record—even accepting the aesthetic here that newspapers or journalists are untrustworthy, it’s costly for an outlet to outright lie or misrepresent facts.
There are other issues with Tegmark’s/FLI’s statements (e.g. deflections about the lack of direct financial benefit to his brother, not addressing the material support the letter provided for registration/the reasonable suspicion this was a ploy to produce the letter).
There’s much more that is problematic underpinning this. If I had more time, I would start a long thread explaining how funding and family relationships could interact really badly in EA/longtermism for several reasons, and another about Tegmark’s insertions into geopolitical issues, which are clumsy at best.
Another comment said the EA forum reaction contributed to actual harm to Tegmark/FLI by amplifying the false narrative. I think a look at Twitter, or at how the story has continued and been picked up by Vice, suggests to me this isn’t true. Unfortunately, I think the opposite is true.
Yep, I think it absolutely is.
It’s also not an accident that my version of the comment is a lot longer and covers more topics (and therefore would presumably have taken way longer for someone to write and edit in a way they personally endorsed).
I don’t think the minimally acceptable comment needed to be quite that long or cover quite that much ground (though I think it would be praiseworthy to do so), but directionally I’m indeed asking people to do a significantly harder thing. And I expect this to be especially hard in exactly the situations where it matters most.
❤
Yeah, that sounds all too realistic!
I’m also imagining that while the author is trying to put together their comment, they might be tracking the fact that others have already rushed out their own replies (many of which probably suck from your perspective), and discussion is continuing, and the clock is ticking before the EA Forum buries this discussion entirely.
(I wonder if there’s a way to tweak how the EA Forum works so that there’s less incentive to go super fast?)
One reason I think it’s worth trying to put in this extra effort is that it produces a virtuous cycle. If I take a bit longer to draft a comment I can more fully stand by, then other people will feel less pressure to rush out their own thoughts prematurely. Slowing down the discussion a little, and adding a bit more light relative to heat, can have a positive effect on all the other discussion that happens.
I’ve mentioned NVC a few times, but I do think NVC is a good example of a thing that can help a lot at relatively little time+effort cost. Quick easy hacks are very good here, exactly because this can otherwise be such a time suck.
A related hack is to put your immediate emotional reaction inside a ‘this is my immediate emotional reaction’ frame, and then say a few words outside that frame. Like:
“Here’s my immediate emotional reaction to the OP:
[indented italicized text]
And here are my first-pass thoughts about physical reality, which are more neutral but might also need to be revised after I learn more or have more time to chew on things:
[indented italicized text]”
This is kinda similar to some stuff I put in my imaginary Shakeel comment above, but being heavy-handed about it might be a lot easier and faster than trying to make it feel like an organic whole.
And I think it has very similar effects to the stuff I was going for, where you get to express the feeling at all, but it’s in a container that makes it (a) a bit less likely that you’ll trigger others and thereby get into a heated Internet fight, and (b) a bit less likely that your initial emotional reaction will get mistaken (by you or others) for an endorsed carefully-wordsmithed description of your factual beliefs.
Yeah, this very much sounds to me like a topic where reasonable people can disagree a lot!
Ooooo, this sounds very fun. :) Especially if we can tangent off into science and philosophy debates when it turns out that there’s a specific underlying disagreement that explains why we feel differently about a particular case. 😛
To be clear, my criticism of the EA Forum’s initial response to the Expo article was never “it’s wrong to feel strong emotions in a context like this, and EAs should never publicly express strong emotions”, and it also wasn’t “it should have been obvious in advance to all EAs that this wasn’t a huge deal”.
If you thought I was saying either of those things, then I probably fucked up in how I expressed myself; sorry about that!
My criticism of the EA Forum’s response was:
I think that EAs made factual claims about the world that weren’t warranted by the evidence at the time. (Including claims about what FLI and Tegmark did, claims about their motives, and claims about how likely it is that there are good reasons for an org to want more than a few hours or days to draft a proper public response to an incident like this.) We were overconfident and following poor epistemic practices (and I’d claim this was noticeable at the time, as someone who downvoted lots of comments at the time).
Part of this is, I suspect, just some level of naiveté about the press, about the base rate of good orgs bungling something or other, etc. Hopefully this example will help people calibrate their priors slightly better.
I think that at least some EAs deliberately leaned into bad epistemic practices here, out of a sense that prematurely and overconfidently condemning FLI would help protect EA’s reputation.
The EA Forum sort of “trapped” FLI, by simultaneously demanding that FLI respond extremely quickly, but also demanding that the response be pretty exhaustive (“a full explanation of what exactly happened here”, in Shakeel’s words) and across-the-board excellent (zero factual errors, excellent empathizing and excellent displays of empathy, good PR both for reaching EAs and for satisfying the larger non-EA public, etc.). This sort of trap is not a good way to treat anyone, including non-EAs.
I think that many EAs’ words and upvote patterns at the time created a social space in which expressing uncertainty, moderation, or counter-narrative beliefs and evidence was strongly discouraged. Basically, we did the classic cancel-culture echo chamber thing, where groups update more and more extremely toward a negative view of X because they keep egging each other on with new negative opinions and data points, while the people with alternative views stay quiet for fear of the social repercussions.
The more general version of this phenomenon is discussed in the Death Spirals sequence, and in videos like ContraPoints’ Canceling: there’s a general tendency for many different kinds of social network to push themselves toward more and more negative (or more and more positive) views of a thing, when groups don’t exert lots of deliberate and unusual effort to encourage dissent, voice moderation, explicitly acknowledge alternative perspectives or counter-narrative points, etc.
I think this is a special risk for EA discussions of heavily politicized topics, so if we want to reliably navigate to true beliefs on such topics — many of which will be a lot messier than the Tegmark case — we’ll need to try to be unusually allowing of dissent, disagreement, “but what if X?”, etc. on topics that are more emotionally charged. (Hard as that sounds!)
Minor point: I read Jason talking about “stonewalling” as referring to FLI’s communications with Expo.se, not to its communications (or lack thereof) with EAs on this Forum.
The context of that paragraph is “FLI would have made a statement here”, and the rest of the comment doesn’t make me think he’s talking about Expo either. And it’s in reply to Jack and Shakeel’s comments, which both seem to be about FLI saying something publicly, not about FLI’s interactions with Expo specifically.
And Jeff Kaufman replied to Jason to say “one thing to keep in mind is that organizations can take weirdly long times to make even super obvious public statements”, and Jason responded “Good point.” The whole context is very ‘wow why has FLI not made a public statement’, not ‘wow why did FLI stonewall Expo’.
Still, I appreciate you raising the possibility, since there now seems to be inertia in this comment section against the people who were criticizing FLI, and the same good processes that would have helped people avoid rushing to conclusions in that case, should also encourage some amount of curiosity, patience, and uncertainty in this case.
As should be clear from the follow-up comment posted shortly after that one, I was referring to the nearly one month that had passed between Expo reaching out to FLI and the publication of the article. When Jeff responded by noting reasons an organization might delay in making a statement, I wrote in reply: “A decision was made to send a response—that sounds vaguely threatening/intimidating to my ears—through FLI’s lawyer within days.” [1] Expo did allege a number of facts that I think can be fairly characterized as stonewalling.
It’s plausible that Expo is wildly misrepresenting the substance of its communications with FLI, but the article seems fairly well-sourced to me. If Expo’s characterization of the correspondence was unfair, I would expect FLI’s initial January 13 statement to have disclosed significant facts that FLI told Expo but that Expo omitted from its article.
Of course, drawing adverse inferences because an organization hasn’t provided a response within two hours of a forum discussion starting would be ridiculous (over a holiday weekend in the US no less!). I wouldn’t have thought it was necessary to say that. However, based on the feedback I am getting here, it would have been much better for me to have said something like “I view FLI’s reported responses to Expo as stonewalling, and if FLI continues to offer the same responses . . . .” I apologize to FLI and everyone else here that my lack of clarity on that point contributed to a Forum environment on that morning that was too ready to settle on conclusions without giving FLI the opportunity to make a statement.
The line that sounded vaguely threatening/intimidating was “Any implication to the contrary would be false”—that sounds how I would expect a lawyer to vaguely allude to a possible defamation claim when they knew they would never file one. If you’ve already said X didn’t happen, what’s the point of that sentence?
My mistake! Sorry for misunderstanding your point, Jason. I really appreciate you clarifying here.