I wanted to say a bit about the “vibe” / thrust of this comment when it comes to community discourse norms...
(This is somewhat informed by your comments on twitter / facebook, which themselves are phrased more strongly than this and are less specific in scope.)
I suspect you and I agree that we should generally encourage posters to be charitable in their takes and reasonable in their requests—and it would be bad overall for discussions in general were this not the case. Being angry on the internet is often not at all constructive!
However, I think that being angry or upset where it seems like an organisation has done something egregious is very often an appropriate emotional response to feel. I think that the ideal amount of expressing that anger / upset that community norms endorse is non-zero! And yes when people are hurt they may go somewhat too far in what they request / suggest / speculate. But again the optimal amount of “too strong requests” is non-zero.
I think that expressing those feelings of hurt / anger / upset explicitly (or implicitly expressing them through the kinds of requests one is making) has many uses, and there are costs to restricting it too much.
Some uses of expressing it:
Conveying the sheer seriousness or importance of the question to the poster. That can be useful information for the organisation under scrutiny about whether / how much people think they messed up (which itself is information about whether / how much they actually messed up). It will lead to better outcomes if organisations in fact get the information that some people are deeply hurt by their actions. If the people who are deeply hurt cannot / do not express this, the organisation will not know.
Individuals within a community expressing values they hold dear (and which of those are strong enough to provoke the strongest emotional reaction) is part of how a community develops and maintains norms about behaviour that is / isn’t acceptable.
Some costs to restricting it:
People who have stronger emotional reactions are often closer to the issue. It is very hard when you feel really hurt by something to have to reformulate that in terms acceptable to people who are not at all affected by the thing.
If people who are really hurt by something get the impression from community norms that expressing their hurt is not welcome they may well not feel welcome in the community at all. This seems extra bad if you care about diversity in the community and certain issues affect certain groups more. (E.g. antisemitism, racism, sexism etc.)
If people who are really hurt by something do not post, the discourse will be selected towards people who aren’t hurt / don’t care as strongly. That will systematically skew the discussion towards a specific set of reactions and lead you further away from understanding what people across the community actually think about something.
I think that approaching online discussions on difficult topics is really really hard! I do not think I know what the ideal balance is. I have almost never before participated in such discussions and I’m personally finding my feet here. I am not arguing in favour of carte blanche for people making unreasonable angry demands.
But I want to push back pretty strongly against the idea that people should never be able to post hurt / upset comments or that the comments above seem very badly wrong. (Or that they warrant the things you said on facebook / twitter about EA discourse norms)
P.S. I’m wondering whether you would agree with me on all the above if the organisational behaviour was egregious enough by your / anyone’s lights? [Insert thought experiment here about shockingly beyond-the-pale behaviour by an organisation that people on the forum express angry comments about]. If yes, then we just disagree on where / how to draw the line, not on whether there is a line at all. If not, then I think we have a more fundamental disagreement about how humans can be expected to communicate online.
I see “clearly expressing anger” and “posting when angry” as quite different things.
I endorse the former, but I rarely endorse the latter, especially in contexts like the EA Forum.
Let’s distinguish different stages of anger:
The “hot” kind—when one is not really thinking straight, prone to exaggeration and uncharitable interpretations, etc.
The “cool” kind—where one can think roughly as clearly about the topic as any other.
We could think of “hot” and “cool” anger as a spectrum.
Most people experience hot anger from time to time. But I think EA figures—especially senior figures—should model a norm of only posting on the EA Forum when fairly cool.
My impression is that, during the Bostrom and FLI incidents, several people posted with considerably more hot anger than I would endorse. In these cases, I think the mistake has been quite harmful, and may warrant public and private apologies.
As a positive example: Peter Hurford’s blog post, which he described as “angry”, showed a level of reasonableness and clarity that made it, in my mind, “above the bar” to publish. The text suggests a relatively cool anger. I disagree with some parts of the post, but I am glad he published it. At the meta-level, my impression is that Peter was well within the range of “appropriate states of mind” for a leadership figure to publish a message like that in public.
I’m not sure how I feel about this proposed norm. I probably think that senior EA figures should at least sometimes post when they’re feeling some version of “hot anger”, as opposed to literally never doing this.
The way you defined “cool vs. hot” here is that it’s about thinking straight vs. not thinking straight. Under that framing, I agree that you shouldn’t post comments when you have reason to suspect you might temporarily not be thinking straight. (Or you should find a way to flag this concern in the comment itself, e.g., with an epistemic status disclaimer or NVC-style language.)
But you also call these “different stages of anger”, which suggests a temporal interpretation: hot anger comes first, followed by cool. And the use of the words “hot” and “cool”, to my ear, also suggests something about the character of the feeling itself.
I feel comfortable suggesting that EAs self-censor under the “thinking straight?” interpretation. But if you’re feeling really intense emotion and it’s very close in time to the triggering event, but you think you’re nonetheless thinking straight — or you think you can add appropriate caveats and context so people can correct for the ways in which you’re not thinking straight — then I’m a lot more wary about adding a strong “don’t say what’s on your mind” norm here.
I suspect you and I agree that we should generally encourage posters to be charitable in their takes and reasonable in their requests
I think “charity” isn’t quite the right framing here, but I think we should encourage posters to really try to understand each other; to ask themselves “what does this other person think the physical world is like, and what evidence do I have that it’s not like that?”; to not exaggerate how negative their takes are; and to be mindful of biases and social dynamics that often cause people to have unrealistically negative beliefs about The Other Side.
However, I think that being angry or upset where it seems like an organisation has done something egregious is very often an appropriate emotional response to feel. I think that the ideal amount of expressing that anger / upset that community norms endorse is non-zero!
I 100% agree! I happened to write something similar here just before reading your comment. :)
From my perspective, the goal is more “have accurate models” and “be honest about what your models are”. In interpersonal contexts, the gold standard is often that you’re able to pass someone else’s ideological Turing Test.
Sometimes, your model really is that something is terrible! In cases like that, I think we should be pretty cautious about discouraging people from sharing what they really think about the terrible thing. (Like, I think “be civil all the time”, “don’t rock the boat”, “be very cautious about criticizing other EAs” is one of the main processes that got in the way of people like me hearing earlier about SBF’s bad track record — I think EAs in the know kept waaay too quiet about this information.)
It’s true that there are real costs to encouraging EAs to routinely speak up about their criticisms — it can make the space feel more negative and aversive to a lot of people, which I’d expect to contribute to burnout and to some people feeling less comfortable honestly expressing their thoughts and feelings.
I don’t know what the best solution is (though I think that tech like NVC can help a whole lot), but I’d be very surprised if the best solution involved EAs never expressing actually intense feelings in any format, no matter how much the context cries for it.
Sometimes shit’s actually just fucked up, and I’d rather a community where people can say as much (even if not everyone agrees) than one where we’re all performatively friendly and smiley all the time.
If people who are really hurt by something do not post, the discourse will be selected towards people who aren’t hurt / don’t care as strongly. That will systematically skew the discussion towards a specific set of reactions and lead you further away from understanding what people across the community actually think about something.
Seems right. Digging a bit deeper, I suspect we’d disagree about what the right tradeoff to make is in some cases, based on different background beliefs about the world and about how to do the most good.
Like, we can hopefully agree that it’s sometimes OK to pick the “talk in a way that hurts some people and thereby makes those people less likely to engage with EA” side of the tradeoff. An example of this is that some people find discussion of food or veg*nism triggering (e.g., because they have an ED).
We could choose to hide discussion of animal products from the EA Forum in order to be more inclusive to those people; but given the importance of this topic to a lot of what EA does today, it seems more reasonable to just accept that we’re going to exclude a few people (at least from spaces like the EA Forum and EA Global, where all the different cause areas are rubbing elbows and it’s important to keep the friction on starting animal-related topics very low).
If we agree that it’s ever OK to pick the “talk in way X even though it hurts some people” side of the tradeoff, then I think we have enough common ground that the remaining disagreements can be resolved (given enough time) by going back and forth about what sort of EA community we think has the best chance of helping the world (and about how questions of interpersonal ethics, integrity, etc. bear on what we should do in practice).
(Or that they warrant the things you said on facebook / twitter about EA discourse norms)
Oh, did I say something wrong? I was imagining that all the stuff I said above is compatible with what I’ve said on social media. I’d be curious which things you disagree with that I said elsewhere, since that might point at other background disagreements I’m not tracking.
Just a quick note to say thanks for such a thoughtful response! <3
I think you’re doing a great job here modelling discourse norms and I appreciate the substance of your points!
Ngl I was kinda trepidatious opening the forum… but the reasonableness of your reply and the warmth of your tone are legit making me smile! (It probably doesn’t hurt that happily we agree more than I realised. :P )
I may well write a little more substantial response at some point but will likely take a weekend break :)
P.S. Real quick re social media… The things I was thinking about were phrases from fb like “EAs f’d up” and the “fairly shameful initial response”, which I wondered were stronger than what you were expressing here, but were probably just you saying the same thing. And in this twitter thread you talk about the “cancel mob”—but I think you’re talking there about a general case. You don’t have to justify yourself on those; I’m happy to read it all via the lens of the comments you’ve written on this post.
Aw, that makes me really happy to hear. I’m surprised that it made such a positive difference, and I update that I should do it more!
(The warmth part, not the agreement part. I can’t really control the agreement part, if we disagree then we’re just fucked. 🙃😛)
Re the social media things: yeah, I stand by that stuff, though I basically always expect reasonable people to disagree a lot about exactly how big a fuck-up is, since natural language is so imprecise and there are so many background variables we could disagree on.
I feel a bit weird about the fact that I use such a different tone in different venues, but I think I like this practice for how my brain works, and plan to keep doing it. I definitely talk differently with different friends, and in private vs. public, so I like the idea of making this fact about me relatively obvious in public too.
I don’t want to have such a perfect and consistent public mask/persona that people think my public self exactly matches my private self, since then they might come away deceived about how much to trust (for example) that my tone in a tweet exactly matches the emotions I was feeling when I wrote it.
I want to be honest in my private and public communications, but (even more than that) I want to be meta-honest, in the sense of trying to make it easy for people to model what kind of person I am and what kinds of things I tend to be more candid about, what it might mean if I steer clear of a topic, etc.
Trying too hard to look like I’m an open book who always says what’s on his mind, never self-censors in order to look more polite on the EA Forum, etc. would systematically cause people to have falser beliefs about the delta between “what Rob B said” and “what Rob B is really thinking and feeling right now”. And while I don’t think I owe everyone a full print-out of my stream of consciousness, I do sorta feel like I owe it to people to not deliberately make it sound like I’m more transparent than I am.
This is maybe more of a problem for me than for other people: I’m constantly going on about what a big fan of candor and blurting I am, so I think there’s more risk of people thinking I’m a 100% open book, compared to the risk a typical EA faces.
So, to be clear: I don’t advocate that EAs be 100% open books. And separately, I don’t perfectly live up to my own stated ideals.
Like, I think an early comment like this would have been awesome (with apologies to Shakeel for using his comments as an example, and keeping in mind that this is me cobbling something together rather than something Shakeel endorses):
Note: The following is me expressing my own feelings and beliefs. Other people at CEA may feel differently or have different models, and I don’t mean to speak for them.
If this is true then I feel absolutely horrified. Supporting neo-Nazi groups is despicable, and I don’t think people who would do something like that ought to have any place in this community. [mention my priors about how reliable this sort of journalism tends to be] [mention my priors about FLI’s moral character, epistemics, and/or political views, or mention that I don’t know much about FLI and haven’t thought about them before] Given that, [rough description of how confident I feel that FLI would financially support a group that they knew had views like Holocaust-denialism].
But it’s hard to be confident about what happened based on a single news article, in advance of hearing FLI’s side of things; and there are many good reasons it can take time to craft a complete and accurate public statement that expresses the proper amount of empathy, properly weighs the PR and optics concerns, etc. So I commit to upvoting FLI’s official response when it releases one (even if I don’t like the response), to make it likelier that people see the follow-up and not just the initial claims.
I also want to encourage others to speak up if they disagree on any of this, including chiming in with views contrary to mine (which I’ll try to upvote at least enough to make it obviously socially accepted to express uncertainty or disagreement on this topic, while the facts are still coming in). But for myself, my immediate response to this is that I feel extremely upset.
For context: Coming on the heels of the Bostrom situation, I feel seriously concerned that some people in the EA community think of non-white people as inherently low-status, and I feel surprised and deeply hurt at the lack of empathy to non-white people many EAs have shown in their public comments. I feel profoundly disgusted at the thought of racist ideas and attitudes finding acceptance within EA, and though I’ll need to hear more about the case of FLI before I reach any confident conclusions about this case, my emotional reaction is one of anger at the possibility that FLI knowingly funded neo-Nazis, and a strong desire to tell EAs and non-EAs alike that this is not who we are.
The above hypothetical, not-Shakeel-authored comment meets a higher bar than what I think was required in this context — I think it’s fine for EAs to be a bit sloppier than that, even if they work at CEA — but hopefully it directionally points at what I mean when I say that there are epistemically good ways to express strong feelings. (Though I don’t think it’s easy, and I think there are hard tradeoffs here: demanding more rigor will always cause some number of comments to just not get written at all, which will cause some good ideas and perspectives to never be considered. In this case, I think a fair bit more rigor is worth the cost.)
Haha this is a great hypothetical comment!
The concreteness is helpful because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time-consuming!) - especially so if you’ve got skin in the game, and across your life you often come up against things like this to respond to, and you keep having the pressure to force your feelings into a more acceptable format.
I reckon that crafting a message like that if I were upset about something could well take half a work day. And I’d have in my head, the whole time, all the being upset / being angry / being scared that people on the forum would find me unreasonable / resentful that people might find me unreasonable / doubting myself. (Though I know that plausibly I’m in part just describing the human condition there. Trying to do things is hard...!)
Overall, I think I’m just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we’re willing to make.
(It makes me think it’d be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention, etc., and then see where we’d draw the line of acceptable / not!)
There’s an angry top-level post about evaporative cooling of group beliefs in EA that I haven’t written yet, and won’t until it would no longer be an angry one. That might mean that the best moment has passed, which will make me sad for not being strong enough to have competently written it earlier. You could describe this as my having been chilled out of the discourse, but I would instead describe it as my politely waiting until I am able and ready to explain my concerns in a collected and rational manner.
I am doing this because I care about carefully articulating what I’m worried about, because I think it’s important that I communicate it clearly. I don’t want to cause people to feel ambushed and embattled; I don’t want to draw battle lines between me and the people who agree with me on 99% of everything. I don’t want to engender offense that could fester into real and lasting animosity, in the very same people who if approached collaboratively would pull with me to solve our mutual problem out of mutual respect and love for the people who do good.
I don’t want to contribute to the internal divisions growing in EA. To the extent that it is happening, we should all prefer to nip the involution in the bud—if one has ever been on team Everyone Who Logically Tries To Do The Most Good, there’s nowhere to go but down.
I think that if I wrote an angry top-level post, it would deserve to be downvoted into oblivion, though I’m not sure it would be.
I think on the margin I’m fine with posts that will start fights being chilled. Angry infighting and polarization are poisonous to what we’re trying to do.
I think you are upset because FLI or Tegmark was wronged. Would you consider hearing another perspective about this?
I barely give a gosh-guldarn about FLI or Tegmark outside of their (now reduced) capacity to reduce existential risk.
Obviously I’d rather bad things not happen to people and not happen to good people in particular, but I don’t specifically know anyone from FLI and they are a feather on the scales next to the full set of strangers who I care about.
If Tegmark or FLI was wronged in the way your comments and others imply, you are correct and justified in your beliefs. But if the apology or the current facts do not make that status clear, there’s an object level problem, and it’s bad to be angry that they were wronged, or to build further arguments on that belief.
I think it’s pretty obvious at this point that Tegmark and FLI were seriously wronged, but I barely care about any wrong done to them and am largely uninterested in the question of whether it was wildly disproportionate or merely sickeningly disproportionate.
I care about the consequences of what we’ve done to them.
I care about how, in order to protect themselves from this community, the FLI is
working hard to continue improving the structure and process of our grantmaking processes, including more internal and (in appropriate cases) external review. For starters, for organizations not already well-known to FLI or clearly unexceptionable (e.g. major universities), we will request and evaluate more information about the organization, its personnel, and its history before moving on to additional stages.
I care about how everyone who watched this happen will also realize the need to protect themselves from us by shuffling along and taking their own pulses. I care about the new but promising EAs who no one will take a chance on, the moonshots that won’t be funded even though they’d save lives in expectation, the good ideas with “bad optics” that won’t be acted on because of fear of backdraft on this forum. I care about the lives we can save if we don’t rush to conclusions, rush to anger, if we can give each other the benefit of the doubt for five freaking minutes and consider whether it’d make any sense whatsoever for the accusation du jour to be what it looks like.
Getting to one object level issue:
If what happened was that Max Tegmark or FLI gets many dubious grant applications, and this particular application made it a few steps through FLI’s processes before it was caught, expo.se’s story and the negative response you object to on the EA forum would be bad, destructive and false. If this was what happened, it would absolutely deserve your disapproval and alarm.
I don’t think this is true. What we know is:
An established (though hostile) newspaper gave an account with actual quotes from Tegmark that contradict his apparent actions
The bespoke funding letter, signed by Tegmark, explicitly promising funding (“approved a grant”) conditional on registration of the charity
The hiring of the lawyer by Tegmark
When Tegmark edited his comment with more content, which simply disavowed funding extremist groups, I was surprised by how positive a reception the edit got.
I’m further surprised by the reaction and changing sentiment on the forum in response to this post, which simply presents an exonerating story. That story is directly contradicted by the signed statement in the letter itself.
Contrary to the top level post, it is false that it is standard practice to hand out signed declarations of financial support, with wording like “approved a grant” if substantial vetting remains. Also, it’s extremely unusual for any non-profit to hire a lawyer to explain that a prospective grantee failed vetting in the application process. We also haven’t seen any evidence that FLI actually communicated a rejection. Expo.se seems to have a positive record—even accepting the aesthetic here that newspapers or journalists are untrustworthy, it’s costly for an outlet to outright lie or misrepresent facts.
There are other issues with Tegmark’s / FLI’s statements (e.g. deflections about the lack of direct financial benefit to his brother, not addressing the material support the letter provided for registration / the reasonable suspicion this was a ploy to produce the letter).
There’s much more that’s problematic underpinning this. If I had more time, I would start a long thread explaining how funding and family relationships could interact really badly in EA/longtermism for several reasons, and another about Tegmark’s insertions into geopolitical issues, which are clumsy at best.
Another comment said the EA forum reaction contributed to actual harm to Tegmark/FLI by amplifying the false narrative. A look at Twitter, or at how the story has continued and been picked up by Vice, suggests to me that this isn’t true. Unfortunately, I think the opposite is true.
The concreteness is helpful because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time-consuming!) - especially so if you’ve got skin in the game, and across your life you often come up against things like this to respond to, and you keep having the pressure to force your feelings into a more acceptable format.
Yep, I think it absolutely is.
It’s also not an accident that my version of the comment is a lot longer and covers more topics (and therefore would presumably have taken way longer for someone to write and edit in a way they personally endorsed).
I don’t think the minimally acceptable comment needed to be quite that long or cover quite that much ground (though I think it would be praiseworthy to do so), but directionally I’m indeed asking people to do a significantly harder thing. And I expect this to be especially hard in exactly the situations where it matters most.
I reckon that crafting a message like that if I were upset about something could well take half a work day. And I’d have in my head, the whole time, all the being upset / being angry / being scared that people on the forum would find me unreasonable / resentful that people might find me unreasonable / doubting myself. (Though I know that plausibly I’m in part just describing the human condition there. Trying to do things is hard...!)
❤
Yeah, that sounds all too realistic!
I’m also imagining that while the author is trying to put together their comment, they might be tracking the fact that others have already rushed out their own replies (many of which probably suck from your perspective), and discussion is continuing, and the clock is ticking before the EA Forum buries this discussion entirely.
(I wonder if there’s a way to tweak how the EA Forum works so that there’s less incentive to go super fast?)
One reason I think it’s worth trying to put in this extra effort is that it produces a virtuous cycle. If I take a bit longer to draft a comment I can more fully stand by, then other people will feel less pressure to rush out their own thoughts prematurely. Slowing down the discussion a little, and adding a bit more light relative to heat, can have a positive effect on all the other discussion that happens.
I’ve mentioned NVC a few times, but I do think NVC is a good example of a thing that can help a lot at relatively little time+effort cost. Quick easy hacks are very good here, exactly because this can otherwise be such a time suck.
A related hack is to put your immediate emotional reaction inside a ‘this is my immediate emotional reaction’ frame, and then say a few words outside that frame. Like:
“Here’s my immediate emotional reaction to the OP:
[indented italicized text]
And here are my first-pass thoughts about physical reality, which are more neutral but might also need to be revised after I learn more or have more time to chew on things:
[indented italicized text]”
This is kinda similar to some stuff I put in my imaginary Shakeel comment above, but being heavy-handed about it might be a lot easier and faster than trying to make it feel like an organic whole.
And I think it has very similar effects to the stuff I was going for, where you get to express the feeling at all, but it’s in a container that makes it (a) a bit less likely that you’ll trigger others and thereby get into a heated Internet fight, and (b) a bit less likely that your initial emotional reaction will get mistaken (by you or others) for an endorsed carefully-wordsmithed description of your factual beliefs.
Overall, I think I’m just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we’re willing to make.
Yeah, this very much sounds to me like a topic where reasonable people can disagree a lot!
(It makes me think it’d be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention, etc., and then see where we’d draw the line of acceptable / not!)
Ooooo, this sounds very fun. :) Especially if we can tangent off into science and philosophy debates when it turns out that there’s a specific underlying disagreement that explains why we feel differently about a particular case. 😛
To be clear, my criticism of the EA Forum’s initial response to the Expo article was never “it’s wrong to feel strong emotions in a context like this, and EAs should never publicly express strong emotions”, and it also wasn’t “it should have been obvious in advance to all EAs that this wasn’t a huge deal”.
If you thought I was saying either of those things, then I probably fucked up in how I expressed myself; sorry about that!
My criticism of the EA Forum’s response was:
I think that EAs made factual claims about the world that weren’t warranted by the evidence at the time. (Including claims about what FLI and Tegmark did, claims about their motives, and claims about how likely it is that there are good reasons for an org to want more than a few hours or days to draft a proper public response to an incident like this.) We were overconfident and following poor epistemic practices (and I’d claim this was noticeable at the time, as someone who downvoted lots of those comments).
Part of this is, I suspect, just some level of naiveté about the press, about the base rate of good orgs bungling something or other, etc. Hopefully this example will help people calibrate their priors slightly better.
I think that at least some EAs deliberately leaned into bad epistemic practices here, out of a sense that prematurely and overconfidently condemning FLI would help protect EA’s reputation.
The EA Forum sort of “trapped” FLI, by simultaneously demanding that FLI respond extremely quickly, but also demanding that the response be pretty exhaustive (“a full explanation of what exactly happened here”, in Shakeel’s words) and across-the-board excellent (zero factual errors, excellent empathizing and excellent displays of empathy, good PR both for reaching EAs and for satisfying the larger non-EA public, etc.). This sort of trap is not a good way to treat anyone, including non-EAs.
I think that many EAs’ words and upvote patterns at the time created a social space in which expressing uncertainty, moderation, or counter-narrative beliefs and evidence was strongly discouraged. Basically, we did the classic cancel-culture echo chamber thing, where groups update more and more extremely toward a negative view of X because they keep egging each other on with new negative opinions and data points, while the people with alternative views stay quiet for fear of the social repercussions.
The more general version of this phenomenon is discussed in the Death Spirals sequence, and in videos like ContraPoints’ Canceling: there’s a general tendency for many different kinds of social network to push themselves toward more and more negative (or more and more positive) views of a thing, when groups don’t exert lots of deliberate and unusual effort to encourage dissent, voice moderation, explicitly acknowledge alternative perspectives or counter-narrative points, etc.
I think this is a special risk for EA discussions of heavily politicized topics, so if we want to reliably navigate to true beliefs on such topics — many of which will be a lot messier than the Tegmark case — we’ll need to try to be unusually allowing of dissent, disagreement, “but what if X?”, etc. on topics that are more emotionally charged. (Hard as that sounds!)