Hey Shakeel,
Thank you for making the apology; you have my approval for that! I also like your apology on the other thread – your words are hopeful for CEA going in a good direction.
Some feedback/reaction from me that I hope is helpful. In describing your motivation for the FLI comment, you say that it was not to throw FLI under the bus, but because of your fear that some people would think EA is racist, and you wanted to correct that. To me, that is a political motivation, not much different from a PR motivation.
To gesture at the difference (in my ontology) between PR/political motivations and truth-seeking motivations:
PR/political
you want people to believe a certain thing (even if it’s something you yourself sincerely believe), in this case, that EA is not racist
it’s about managing impressions and reputations (e.g. EA’s reputation as not racist)
Your initial comment (and also the Bostrom email statement) both struck me as “performative” in how they demonstrated really harsh and absolute condemnation (“absolutely horrifying”, “[no] place in this community”, “recklessly flawed and reprehensible” – granted that you said “if true”, but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking as the first thing you say on a topic[1] feels pretty out of place to me within EA, and certainly isn’t what I want in the EA I would design.
Extreme condemnation pattern matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen across a lot of the Internet, and it feels pretty toxic. It feels like it’s coming from a place of needing to demonstrate “I/we are not the bad thing”.
So even if your motivation was “do your bit to make it clear that EA isn’t racist”, that does strike me as still political/PR (even if you sincerely believe it).
(And I don’t mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to, and harm to your own reputation through association. Upsetness is real and caring about reputation can come from a really good place.)
I could write more on my feelings about PR/political stuff, because my view is not that it’s outright “bad/evil” or anything, more that caution is required.
Truth-seeking / info-propagation
Such comments focus more on sharing the author’s beliefs (not performing them)[2] and explaining how they reached them, e.g. “this is what I think happened, and this is why I think that”, the inferences they’re making, and what makes sense. They tally uncertainty, and they leave open room for the chance they’re mistaken.
To me, the ideal spirit is “let me add my cognition to the collective so we all arrive at true beliefs” rather than “let me tug the collective beliefs in the direction I believe is correct” or “I need to ensure people believe the correct thing” (and especially not “I need people to believe the correct thing about me”).
My ideal CEA comms strategy would conceive of itself as having the goal of causing people to have accurate beliefs foremost, even when that makes EA look bad. That is the job – not to ensure EA looks good, but to ensure EA is perceived accurately, warts and all.
(And I’m interested in attracting to EA people who can appreciate that large movements have warts, who can tolerate weirdness in beliefs, and who get that movement leaders make mistakes. I want the people who see past that to the ideas and principles that make sense, and to the many people (including you, I’d wager) who are working very hard to make the world better.)
Encouragement
I don’t want to respond to a step in the right direction (a good apology) with something that feels negative, but it feels important to me that this distinction is deeply understood by CEA and EA in general, hence me writing it up for good measure. I hope this is helpful. Happy to clarify more here or chat sometime.
To me, the ideal spirit is “let me add my cognition to the collective so we all arrive at true beliefs” rather than “let me tug the collective beliefs in the direction I believe is correct” or “I need to ensure people believe the correct thing.”
I like this a lot.
I’ll add that you can just say out loud “I wish other people believed X” or “I think the correct collective belief here would be X”, in addition to saying your personal belief Y.
(An example of a case where this might make sense: You think another person or group believes Z, and you think they rationally should believe X instead, given the evidence available to them. You yourself believe a more-extreme proposition Y, but you don’t think others have enough evidence to believe Y yet—e.g., your belief may be based on technical expertise or hard-won life-experience that the other parties don’t have.)
It’s possible to care about the group’s beliefs, and try to intervene on them, in a way that’s honest and clear about what you’re doing.
“absolutely horrifying”, “[no] place in this community”, “recklessly flawed and reprehensible”
[...]
That tone and manner of speaking as the first thing you say on a topic[fn] feels pretty out of place to me within EA, and certainly isn’t what I want in the EA I would design.
Speaking locally to this point: I don’t think I agree! My first-pass take is that if something’s horrible, reprehensible, flawed, etc., then I think EAs should just say so. That strikes me as the default truth-seeking approach.[1]
There might be second-order reasons to be more cautious about when and how you report extreme negative evaluations (e.g., to keep forum discussions from degenerating as people emotionally trigger each other), but I would want to explicitly flag that this is us locally departing from the naive truth-seeking approach (“just say what seems true to you”) in the hope that the end result will be more truth-seeky via people having an easier time keeping a cool head.
(Note that I’m explicitly responding to the ‘extreme language’ side of this, not the ‘was this to some extent performative or strategic?’ side of things.)
With the caveat that maybe evaluative judgments in general get in the way of truth-seeking, unless they’re “owned” NVC-style, because of common confusions like “thinking my own evaluations are mind-independent properties of the world”. But if we’re allowing mild evaluative judgments like “OK” or “fine”, then I think there’s less philosophical basis for banning more extreme judgments like “awesome” or “terrible”.
I think I agree with your clarification and was in fact conflating the mere act of speaking with strong emotion with speaking in a way that felt more like a display. Yeah, I do think it’s a departure from naive truth-seeking.
In practice, I think it is hard, largely for the second-order reasons you give, among others. Perhaps an ideal is that people share strong emotion when they feel it, but in some kind of format/container/manner that doesn’t shut down discussion or get things heated. “NVC” style, perhaps, as you suggest.
Fwiw, I do think “has no place in the community”, without being owned as “no place in my community” or “shouldn’t have a place in the community”, is probably too high a simulacrum level by default (though this isn’t necessarily a criticism of Shakeel; I don’t remember exactly what his original comment said).
Cool. :) I think we broadly agree, and I don’t feel confident about what the ideal way to do this is, though I’d be pretty sad and weirded out by a complete ban on expressing strong feelings in any form.
you want people to believe a certain thing (even if it’s something you yourself sincerely believe), in this case that EA is not racist
it’s about managing impressions and reputations (e.g. EA’s reputation as not racist)
Your initial comment (and also the Bostrom email statement) both struck me as “performative” in how they demonstrated really harsh and absolute condemnation (“absolutely horrifying”, “[no] place in this community”, “recklessly flawed and reprehensible” – granted that you said “if true”, but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking as the first thing you say on a topic[1] feels pretty out of place to me within EA, and certainly isn’t what I want in the EA I would design.
Extreme condemnation pattern matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen across a lot of the Internet, and it feels pretty toxic. It feels like it’s coming from a place of needing to demonstrate “I/we are not the bad thing”.
So even if your motivation was “do your bit to make it clear that EA isn’t racist”, that does strike me as still political/PR (even if you sincerely believe it)
(And I don’t mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to. Upsetness is real and caring about reputation can come from a really good place.)
I could write more on my feelings about PR/political stuff, because my view is not that it’s outright “bad/evil” or anything, more that caution is required.
IMO, I think this is an area EA needs to be way better in. For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn’t matter that much for social reality, and EA should ideally be persuasive and control social reality.
For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn’t matter that much for social reality, and EA should ideally be persuasive and control social reality.
I think the extent to which nuanced truth does not matter to “most of the world” is overstated.
I additionally think that EA should not be optimizing for deceiving people who belong to the class “most of the world”.
Both because it wouldn’t be useful if it worked (realistically most of the world has very little they are offering) and because it wouldn’t work.
I additionally think that trying to play nitwit political games at or around each hecking other would kill EA as a community and a movement dead, dead, dead.
[1] I think that after things have been clarified and the picture is looking pretty clear, then indeed, such condemnation might be appropriate.
[2] The LessWrong frontpage commenting guidelines are “aim to explain, not persuade”.
Really appreciated a bunch of things about this comment. I think it’s that it:
flags where it comes from clearly, both emotionally and cognitively
expresses a pragmatism around PR, and an appreciation for where it comes from, that to my mind has been underplayed
does a lot of “my ideal EA” and “I” language in a way that seems good for conversation
adds good thoughts to the “what is politics” discussion