Thanks for calling me out on this — I agree that I was too hasty to call for a response.
I’m glad that FLI has shared more information, and that they are rethinking their procedures as a result of this. This FAQ hasn’t completely alleviated my concerns about what happened here — I think it’s worrying that something like this can get to the stage it did without it being flagged (though again, I’m glad FLI seems to agree with this). And I also think that it would have been better if FLI had shared some more of the FAQ info with Expo too.
I do regret calling for FLI to speak up sooner, and I should have had more empathy for the situation they were in. I posted my comments not because I wanted to throw FLI under the bus for PR reasons, but because I was feeling upset; coming on the heels of the Bostrom situation I was worried that some people in the EA community were racist or at least not very sensitive about how discussions of race-related things can make people feel. At the time, I wanted to do my bit to make it clear — in particular to other non-white people who felt similarly to me — that EA isn’t racist. But I could and should have done that in a much better way. I’m sorry.
Hey Shakeel,
Thank you for making the apology; you have my approval for that! I also like your apology on the other thread – your words make me hopeful about CEA going in a good direction.
Some feedback/reaction from me that I hope is helpful. In describing your motivation for the FLI comment, you say that it was not to throw FLI under the bus, but because of your fear that some people would think EA is racist, and you wanted to correct that. To me, that is a political motivation, not much different from a PR motivation.
To gesture at the difference (in my ontology) between PR/political motivations and truth-seeking motivations:
PR/political
you want people to believe a certain thing (even if it’s something you yourself sincerely believe), in this case, that EA is not racist
it’s about managing impressions and reputations (e.g. EA’s reputation as not racist)
Your initial comment (and also the Bostrom email statement) both struck me as “performative” in how they demonstrated really harsh and absolute condemnation (“absolutely horrifying”, “[no] place in this community”, “recklessly flawed and reprehensible” – granted that you said “if true”, but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking as the first thing you say on a topic[1] feels pretty out of place to me within EA, and certainly isn’t what I want in the EA I would design.
Extreme condemnation pattern-matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen across a lot of the Internet, and it feels pretty toxic. It feels like it’s coming from a place of needing to demonstrate “I/we are not the bad thing”.
So even if your motivation was “do your bit to make it clear that EA isn’t racist”, that does strike me as still political/PR (even if you sincerely believe it).
(And I don’t mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to, and harm to your own reputation through association. Upsetness is real and caring about reputation can come from a really good place.)
I could write more on my feelings about PR/political stuff, because my view is not that it’s outright “bad/evil” or anything, more that caution is required.
Truth-seeking / info-propagation
Such comments focus more on sharing the author’s beliefs (not performing them)[2] and explaining how they reached them, e.g. “this is what I think happened, and this is why I think that”, the inferences they’re making, and what makes sense to them. They tally their uncertainty, and they leave room for the chance that they’re mistaken.
To me, the ideal spirit is “let me add my cognition to the collective so we all arrive at true beliefs” rather than “let me tug the collective beliefs in the direction I believe is correct” or “I need to ensure people believe the correct thing” (and especially not “I need people to believe the correct thing about me”).
My ideal CEA comms strategy would conceive of itself as having the goal of causing people to have accurate beliefs foremost, even when that makes EA look bad. That is the job – not to ensure EA looks good, but to ensure EA is perceived accurately, warts and all.
(And I’m interested in attracting to EA people who can appreciate that large movements have warts, who can tolerate weirdness in beliefs, and who get that movement leaders make mistakes. I want the people who see past that to the ideas and principles that make sense, and to the many people (including you, I’d wager) who are working very hard to make the world better.)
Encouragement
I don’t want to respond to a step in the right direction (a good apology) with something that feels negative, but it feels important to me that this distinction is deeply understood by CEA and EA in general, hence me writing it up for good measure. I hope this is helpful.
ETA: Happy to clarify more here or chat sometime.
[1] I think that after things have been clarified and the picture is looking pretty clear, then indeed, such condemnation might be appropriate.
[2] The LessWrong frontpage commenting guidelines are “aim to explain, not persuade”.
I like this a lot.
I’ll add that you can just say out loud “I wish other people believed X” or “I think the correct collective belief here would be X”, in addition to saying your personal belief Y.
(An example of a case where this might make sense: You think another person or group believes Z, and you think they rationally should believe X instead, given the evidence available to them. You yourself believe a more-extreme proposition Y, but you don’t think others have enough evidence to believe Y yet—e.g., your belief may be based on technical expertise or hard-won life-experience that the other parties don’t have.)
It’s possible to care about the group’s beliefs, and try to intervene on them, in a way that’s honest and clear about what you’re doing.
Speaking locally to this point: I don’t think I agree! My first-pass take is that if something’s horrible, reprehensible, flawed, etc., then I think EAs should just say so. That strikes me as the default truth-seeking approach.[1]
There might be second-order reasons to be more cautious about when and how you report extreme negative evaluations (e.g., to keep forum discussions from degenerating as people emotionally trigger each other), but I would want to explicitly flag that this is us locally departing from the naive truth-seeking approach (“just say what seems true to you”) in the hope that the end result will be more truth-seeky via people having an easier time keeping a cool head.
(Note that I’m explicitly responding to the ‘extreme language’ side of this, not the ‘was this to some extent performative or strategic?’ side of things.)
[1] With the caveat that maybe evaluative judgments in general get in the way of truth-seeking, unless they’re “owned” NVC-style, because of common confusions like “thinking my own evaluations are mind-independent properties of the world”. But if we’re allowing mild evaluative judgments like “OK” or “fine”, then I think there’s less philosophical basis for banning more extreme judgments like “awesome” or “terrible”.
I think I agree with your clarification; I was in fact conflating the mere act of speaking with strong emotion and speaking in a way that felt more like a display. Yeah, I do think it’s a departure from naive truth-seeking.
In practice, I think this is hard, largely for the second-order reasons you give and others. Perhaps an ideal is that people share strong emotion when they feel it, but in some kind of format/container/manner that doesn’t shut down discussion or get things heated. “NVC” style, perhaps, as you suggest.
Fwiw, I do think “has no place in the community” without being owned as “no place in my community” or “shouldn’t have a place in the community” is probably too high a simulacrum level by default (though this isn’t necessarily a criticism of Shakeel; I don’t remember exactly what his original comment said).
Cool. :) I think we broadly agree, and I don’t feel confident about what the ideal way to do this is, though I’d be pretty sad and weirded out by a complete ban on expressing strong feelings in any form.
Really appreciated a bunch of things about this comment. I think it’s that it:
flags where it comes from clearly, both emotionally and cognitively
expresses a pragmatism around PR and appreciation for where it comes from that to my mind has been underplayed
does a lot of “my ideal EA”, “I” language in a way that seems good for conversation
adds good thoughts to the “what is politics” discussion
IMO, I think this is an area EA needs to be way better in. For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn’t matter that much for social reality, and EA should ideally be persuasive and control social reality.
I think the extent to which nuanced truth does not matter to “most of the world” is overstated.
I additionally think that EA should not be optimizing for deceiving people who belong to the class “most of the world”.
Both because it wouldn’t be useful even if it worked (realistically, most of the world has very little to offer) and because it wouldn’t work.
I additionally think that trying to play nitwit political games at or around each hecking other would kill EA as a community and a movement dead, dead, dead.
Thanks for this, Shakeel. This seems like a particularly rough time to be running comms for CEA. I’m grateful that, in addition to having that on your plate, in your personal capacity you’re helping to make the community feel more supportive for non-white EAs feeling the alienation you point to. Also for doing that despite the emotional labour involved, which typically makes me shy away from internet discussions.
Responding swiftly to things seems helpful in service of that support. One of the risks of that is that you can end up taking a particular stance immediately and then finding it hard to back down from it. But in fact you were able to respond swiftly, and then also quickly update and clearly apologise. Really appreciate your hard work!
(Flag that Shakeel and I both work for EV, though for different orgs under that umbrella)
I liked this apology.
Hey Shakeel, thanks for your apology and update (and I hope you’ve apologized to FLI). Even though call-out culture may be popular or expected in other contexts, it is not professional or appropriate for the Comms Head of CEA to initiate an interaction with an EA org by publicly putting them on blast and seemingly seconding what could be very damaging accusations (as well as inventing others by speculating about financial misconduct). Did you try to contact FLI before publicly commenting to get an idea of what happened (perhaps before they could prepare their statement)?
I appreciate that you apologized for this incident but I don’t think you understand how deep of a problem this behavior is. Get an anonymous account if you want to shoot from the hip. When you do it while your bio says “Head of Communications at CEA” it comes with a certain weight. Multiplying unfounded accusations, toward another EA org no less, is frankly acting in bad faith in a communications role.
For what it’s worth, this seems like the wrong way around to me. I don’t know exactly what the role and responsibilities of the “Head of Comms” are, but in general I would like people in EA to be more comfortable criticizing each other, and to feel less constrained to first air all criticism privately and resolve things behind closed doors.
I think the key thing that went wrong here was the absence of a concrete logical argument or probabilities about why the thing that was happening was actually quite bad, and also the time pressure, which made the context of the conversation much worse. Another big thing was also jumping to conclusions about FLI’s character in a way that felt like it was trying to apply direct political pressure instead of focusing on propagating accurate information.
Maybe there are special rules that EA comms people (or the CEA comms person in particular) should follow; I possibly shouldn’t weigh in on that, since I’m another EA comms person (working at MIRI) and might be biased.
My initial thought, however, is that it’s good for full-time EAs on the current margin to speak more from their personal views, and to do less “speaking for the organizations”. E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen.
My criticism of Shakeel’s post is very different from yours, and is about how truth-seeking the contents are and how well they incentivize truth-seeking from others, not about whether it’s inherently unprofessional for particular EAs to strongly criticize other EAs.
This seems ~strictly worse to me than making a “Shakeel-Personal” account separate from “Shakeel-CEA”. It might be useful to have personal takes indexed separately (though I’d guess this is just not necessary, and would add friction and discourage people from sharing their real takes, which I want them to do more). But regardless, I don’t think it’s better to add even more of a fog of anonymity to EA Forum discussions, if someone’s willing to just say their stuff under their own name.
I’m glad anonymity is an option, but the number of anons in these discussions already makes it hard to know how much I might be double-counting views, makes it hard to contextualize comments by knowing what world-view or expertise or experience they reflect, makes it hard to have sustained multi-month discussions with a specific person where we gradually converge on things, etc.
Idk I think it might be pretty hard to have a role like Head of Communications at CEA and then separately communicate your personal views about the same topics. Your position is rather unique for allowing that. I don’t see CEA becoming like MIRI in this respect. It comes across as though he’s saying this in his professional capacity when you hover over his account name and it says “Head of Communications at CEA”.
But the thing I think is most important about Shakeel’s job is that it means he should know better than to throw around and amplify allegations. A marked personal account would satisfy me, but I would still hold it to a higher standard re: gossip, since he’s supposed to know what’s appropriate. And I expect him to want EA orgs to succeed! I don’t think premature callouts for racism and demands to have already apologized are good-faith criticism to strengthen the community.
I mean, I want employees at EA orgs to try to make EA orgs succeed insofar as that does the most good, and try to make EA orgs fail insofar as that does the most good instead. Likewise, I want them to try to strengthen the EA community if their model says this is good, and to try to weaken it (or just ignore it) otherwise.
(Obviously, in each case I’d want them to be open and honest about what they’re trying to do; you can oppose an org you think is bad without doing anything unethical or deceptive.)
I’m not sure what I think CEA’s role should be in EA. I do feel more optimistic about EA succeeding if major EA orgs in general focus more on developing a model of the world and trying to do the most good under their idiosyncratic world-view, rather than trying to represent or reflect EA-at-large; and I feel more optimistic about EA if sending our best and brightest to work at EA orgs doesn’t mean that they have to do massively more self-censoring now.
Maybe CEA or CEA-comms is an exception, but I’m not sold yet. I do think it’s good to have high epistemic standards, but I see that as compatible with expressing personal feelings, criticizing other orgs, wanting specific EA orgs to fail, etc.
For what it’s worth, speaking as a non-comms person, I’m a big fan of Rob Bensinger style comms people. I like seeing him get into random twitter scraps with e/acc weirdos, or turning obnoxious memes into FAQs, or doing informal abstract-level research on the state of bioethics writing. I may be biased specifically because I like Rob’s contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI’s preferred public image was poured, but, look, I also just find that type of job description obviously offputting. In general I liked getting to know the EAs I’ve gotten to know, and I don’t know Shakeel that well, but I would like to get to know him better. I certainly am averse to the idea of wrist slapping him back into this empty vessel to the extent that we are blaming him for carelessness even when he specifies very clearly that he isn’t speaking for his organization. I do think that his statement was hasty, but I also think we need to be forgiving of EAs whose emotions are running a bit hot right now, especially when they circle back to self-correct afterwards.
I think this would also just be logically inconsistent; MIRI’s preferred public image is that we not be the sort of org that turns people into vessels of perfect public emptiness into which the disembodied spirit of our preferred public image is poured.
I don’t agree with MIRI on everything, but yes, this is one of the things I like most about it.
“My initial thought, however, is that it’s good for full-time EAs on the current margin to speak more from their personal views, and to do less “speaking for the organizations”. E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen.”
This seems a little naive. “We were all getting millions of dollars from this guy with billions to come, he’s personal friends with all the movement leaders, but if we had had more open discussions we would not have taken the millions...really??”
also if you’re in line to get millions of $$$ from someone of course you are never going to share your candid thoughts about them publicly under your real name!
I didn’t make a specific prediction about what would have happened differently if EAs had discussed their misgivings about SBF more openly. What I’d say is that if you took a hundred SBF-like cases with lots of the variables randomized, outcomes would be a lot better if people discussed early serious warning signs and serious misgivings in public.
That will sometimes look like “turning down money”, sometimes like “more people poke around to learn more”, sometimes like “this person is less able to win others’ trust via their EA associations”, sometimes like “fewer EAs go work for this guy”.
Sometimes it won’t do anything at all, or will be actively counterproductive, because the world is complicated and messy. But I think talking about this stuff and voicing criticisms is the best general policy, if we’re picking a policy to apply across many different cases and not just using hindsight to ask what an omniscient person would do differently in the specific case of FTX.
I mean, Open Philanthropy is MIRI’s largest financial supporter, and
Makes sense to me! I appreciate knowing your perspective better, Shakeel. :)
On reflection, I think the thing I care about in situations like this is much more “mutual understanding of where people were coming from and where they’re at now”, whether or not anyone technically “apologizes”.
Apologizing is one way of communicating information about that (because it suggests we’re on the same page that there was a nontrivial foreseeable-in-advance fuck-up), but IMO a comment along those lines could be awesome without ever saying the words “I’m sorry”.
One of my concerns about “I’m sorry” is that I think some people think you can only owe apologies to Good Guys, not to Bad Guys. So if there’s a disagreement about who the Good Guys are, communities can get stuck arguing about whether X should apologize for Y, when it would be more productive to discuss upstream disagreements about facts and values.
I think some people are still uncertain about exactly how OK or bad FLI’s actions here were, but whether or not FLI fucked up badly here and whether or not FLI is bad as an org, I think the EA Forum’s response was bad given the evidence we had at the time. I want our culture to be such that it’s maximally easy for us to acknowledge that sort of thing and course-correct so we do better next time. And my intuition is that a sufficiently honest explanation of where you were coming from, that’s sufficiently curious about and open to understanding others’ perspectives, and sufficiently lacking in soldier-mindset-style defensiveness, can do even more than an apology to contribute to a healthy culture.
(In this case the apology is to FLI/Max, not to me, so it’s mostly none of my business. 😛 But since I called for “apologies” earlier, I wanted to consider the general question of whether that’s the thing that matters most.)
I find myself disliking this comment, and I think it’s mostly because it sounds like you 1) agree with many of the blunders Rob points out, yet 2) don’t seem to have learned anything from your mistake here? I don’t think many do or should blame you, and I’m personally concerned about repeated similar blunders on your part costing EA a great deal of outside reputation and internal trust.
Like, do you think that the issue was that you were responding in heat, and if so, will you make a future policy of not responding in heat in future similar situations?
I feel like there are deeper problems here that won’t be corrected by such a policy, and your lack of concreteness is an impediment to communicating such concerns about your approach to CEA comms (and is itself a repeated issue that won’t be corrected by such a policy).
FWIW, I don’t really want Shakeel to rush into making public promises about his future behavior right now, or big public statements about long-term changes to his policies and heuristics, unless he finds that useful for some reason. I appreciated hearing his thoughts, and would rather leave him space to chew on things and figure out what makes sense for himself. If he or CEA make the wrong updates by my lights, then I expect that to be visible in future CEA/Shakeel actions, and I can just wait and criticize those when they happen.