“it is not professional or appropriate for the Comms Head of CEA to initiate an interaction with an EA org by publicly putting them on blast and seemingly seconding what could be very damaging accusations”
Maybe there are special rules that EA comms people (or the CEA comms person in particular) should follow; I possibly shouldn’t weigh in on that, since I’m another EA comms person (working at MIRI) and might be biased.
My initial thought, however, is that it’s good for full-time EAs on the current margin to speak more from their personal views, and to do less “speaking for the organizations”. E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen.
My criticism of Shakeel’s post is very different from yours, and is about how truth-seeking the contents are and how well they incentivize truth-seeking from others, not about whether it’s inherently unprofessional for particular EAs to strongly criticize other EAs.
“Get an anonymous account if you want to shoot from the hip.”
This seems ~strictly worse to me than making a “Shakeel-Personal” account separate from “Shakeel-CEA”. It might be useful to have personal takes indexed separately (though I’d guess this is just not necessary, and would add friction and discourage people from sharing their real takes, which I want them to do more). But regardless, I don’t think it’s better to add even more of a fog of anonymity to EA Forum discussions, if someone’s willing to just say their stuff under their own name.
I’m glad anonymity is an option, but the number of anons in these discussions already makes it hard to know how much I might be double-counting views, makes it hard to contextualize comments by knowing what world-view, expertise, or experience they reflect, makes it hard to have sustained multi-month discussions with a specific person where we gradually converge on things, etc.
Idk, I think it might be pretty hard to have a role like Head of Communications at CEA and then separately communicate your personal views about the same topics. Your position is rather unique in allowing that; I don’t see CEA becoming like MIRI in this respect. When you hover over his account name and it says “Head of Communications at CEA”, it comes across as though he’s speaking in his professional capacity.
But the thing I think is most important about Shakeel’s job is that it means he should know better than to throw around and amplify allegations. A marked personal account would satisfy me, but I would still hold it to a higher standard re: gossip, since he’s supposed to know what’s appropriate. And I expect him to want EA orgs to succeed! I don’t think premature callouts for racism, and demands to have already apologized, are good-faith criticism intended to strengthen the community.
I mean, I want employees at EA orgs to try to make EA orgs succeed insofar as that does the most good, and try to make EA orgs fail insofar as that does the most good instead. Likewise, I want them to try to strengthen the EA community if their model says this is good, and to try to weaken it (or just ignore it) otherwise.
(Obviously, in each case I’d want them to be open and honest about what they’re trying to do; you can oppose an org you think is bad without doing anything unethical or deceptive.)
I’m not sure what I think CEA’s role should be in EA. I do feel more optimistic about EA succeeding if major EA orgs in general focus more on developing a model of the world and trying to do the most good under their idiosyncratic world-view, rather than trying to represent or reflect EA-at-large; and I feel more optimistic about EA if sending our best and brightest to work at EA orgs doesn’t mean that they have to do massively more self-censoring now.
Maybe CEA or CEA-comms is an exception, but I’m not sold yet. I do think it’s good to have high epistemic standards, but I see that as compatible with expressing personal feelings, criticizing other orgs, wanting specific EA orgs to fail, etc.
For what it’s worth, speaking as a non-comms person, I’m a big fan of Rob Bensinger style comms people. I like seeing him get into random Twitter scraps with e/acc weirdos, or turning obnoxious memes into FAQs, or doing informal abstract-level research on the state of bioethics writing. I may be biased specifically because I like Rob’s contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI’s preferred public image was poured, but, look, I also just find that type of job description obviously off-putting.

In general I have liked getting to know the EAs I’ve gotten to know, and I don’t know Shakeel that well, but I would like to get to know him better. I’m certainly averse to the idea of wrist-slapping him back into this empty vessel to the extent that we are blaming him for carelessness even when he specifies very clearly that he isn’t speaking for his organization. I do think that his statement was hasty, but I also think we need to be forgiving of EAs whose emotions are running a bit hot right now, especially when they circle back to self-correct afterwards.
I like Rob’s contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI’s preferred public image was poured
I think this would also just be logically inconsistent; MIRI’s preferred public image is that we not be the sort of org that turns people into vessels of perfect public emptiness into which the disembodied spirit of our preferred public image is poured.
“My initial thought, however, is that it’s good for full-time EAs on the current margin to speak more from their personal views, and to do less “speaking for the organizations”. E.g., in the case of FTX, I think it would have been healthy for EAs working at full-time orgs to express their candid thoughts about SBF, both negative and positive; and for other professional EAs to give their real counter-arguments, and for a real discussion to thereby happen.”
This seems a little naive. “We were all getting millions of dollars from this guy with billions to come, he’s personal friends with all the movement leaders, but if we had had more open discussions we would not have taken the millions...really??”
Also, if you’re in line to get millions of $$$ from someone, of course you are never going to share your candid thoughts about them publicly under your real name!
This seems a little naive. “We were all getting millions of dollars from this guy with billions to come, he’s personal friends with all the movement leaders, but if we had had more open discussions we would not have taken the millions...really??”
I didn’t make a specific prediction about what would have happened differently if EAs had discussed their misgivings about SBF more openly. What I’d say is that if you took a hundred SBF-like cases with lots of the variables randomized, outcomes would be a lot better if people discussed early serious warning signs and serious misgivings in public.
That will sometimes look like “turning down money”, sometimes like “more people poke around to learn more”, sometimes like “this person is less able to win others’ trust via their EA associations”, sometimes like “fewer EAs go work for this guy”.
Sometimes it won’t do anything at all, or will be actively counterproductive, because the world is complicated and messy. But I think talking about this stuff and voicing criticisms is the best general policy, if we’re picking a policy to apply across many different cases and not just using hindsight to ask what an omniscient person would do differently in the specific case of FTX.
Also, if you’re in line to get millions of $$$ from someone, of course you are never going to share your candid thoughts about them publicly under your real name!
I mean, Open Philanthropy is MIRI’s largest financial supporter, and
I think this would also just be logically inconsistent; MIRI’s preferred public image is that we not be the sort of org that turns people into vessels of perfect public emptiness into which the disembodied spirit of our preferred public image is poured.
I don’t agree with MIRI on everything, but yes, this is one of the things I like most about it.