I’d like to move towards an inclusive community that doesn’t damage the valuable aspects of EA. I think this post mostly did a good job of suggesting things in that vein (I was heartened to see “don’t stop being weird” as an item), but I’d like to push on the point a bit more.
For example, I’m hugely in favour of collaborative discussions over combative discussions, but I find it very helpful to have discussions that stylistically appear combative while actually being collaborative. For example: frequent, direct criticism of ideas put forward by other people is a hallmark of combative discussion, but can be fine so long as everyone is on an even footing and “you are not your ideas” is common knowledge. If we ban this, then we make some parts of our discourse worse. Overly zealous pursuit of formalized markers can destroy a lot of value.
Of course, the solution is “don’t do that”, but the most obvious approach to “have more X” is “pick some formal markers of X and select for them”. Doing better is harder, perhaps something like “have workshops/talks on good disagreement”, “praise people who’re known for being excellent at this” etc.
I agree with others that there are too many suggestions in this post. They’re also a bit of a grab bag. I can see a few categories:
Miscellaneous criticisms, many of which seem plausible, but aren’t obviously any more important for diversity than for their other benefits (collaborative discussions, humility, less hero-worship, better interpersonal interactions etc.).
Larger-scale shifts of uncertain effect (head vs heart, jargon, caution over “free speech”, etc.). A lot of these are unclear to me, and I think we’d want to take a clear-headed look at the costs and benefits.
More specific diversity-boosting measures (female speakers, try to counteract bias, mentor people etc.). These seem clearest to me, and hopefully we can look and see what’s worked well in other places vs the costs.
I think the miscellaneous improvements could (and should!) stand on their own; the larger-scale shifts are perhaps best discussed individually; and what I think a diversity criticism is uniquely placed to bring is more of the third kind of thing.
Regarding discussion style: I think several EAs are great at discussions where they’re fully critical of each other but aren’t combative (e.g. they don’t raise their voices, go ad hominem, tear apart one aspect of an argument to dismiss the rest, or downvote comments that signal an identity that theirs is constructed in opposition to). I think it’s possible to get all the benefit of criticism and disagreement without negative emotions clouding our judgement.
I think the key may be to work against the impulse to be right, or the impulse that someone who disagrees with you is your enemy. I’m much better than I used to be at seeing disagreement as the route to everyone in the discussion getting closer to the truth, though unfortunately that takes a constant drive to improve. (It does help a lot to just remind myself that the person I’m disagreeing with, in most cases at least, is on my team in the bigger picture.) Doing more to penalize combative behavior and reward constructive behavior—like how downvotes and upvotes are supposed to be used in this forum—seems like a feasible solution.
Regarding the grab-bag: That was my intention, to get the ball rolling. I hope for others to bring in their own thinking on prioritization and implementation.
As I said, I’m totally in favour of collaborative discussions, i.e. this stuff:
they don’t raise their voices, go ad hominem, tear apart one aspect of an argument to dismiss the rest, or downvote comments that signal an identity that theirs is constructed in opposition to
(except possibly raised voices), but I wanted to argue that sometimes things that look like combative discussion aren’t. Imagine:
A: [states an idea]
B: I think that’s a pretty bad argument because [objection]. [Alternative idea] seems much better.
A: No, you didn’t understand what I’m saying, I said [restates the idea].
This could be a snippet of a tense combative argument, or just a vigorous collaborative brainstorming session. A might feel unfairly dismissed by B, or might not even notice it. If we were trying to combat combativeness by calling out people abruptly shooting down other people’s ideas, then we might prevent people from doing this particular style of rapid brainstorming.
(Sorry, this stuff is hard to talk about because it’s very contextual. I should probably have picked a better example :))
What I’m trying to say is that we just need to be a little bit careful how we shoot for our goals.
I see, we’re just thinking of “combative” differently.
For example, I’m hugely in favour of collaborative discussions over combative discussions, but I find it very helpful to have discussions that stylistically appear combative while actually being collaborative. For example: frequent, direct criticism of ideas put forward by other people is a hallmark of combative discussion, but can be fine so long as everyone is on an even footing and “you are not your ideas” is common knowledge.
Yeah, we have already gone too far with condemning combaticism on the EA forum in my opinion. Demanding that everyone stop and rephrase their language in careful flowery terms is pretty alienating and marginalizing to people who aren’t accustomed to that kind of communication, so you’re not going to be able to please everyone.
I do think that there should be higher bars for overtly signalling collaborativeness online, because so many other cues are missing.
I’m confused, you mean people should be expected to explicitly signal that they are being collaborative?
In my view the basic structure of a “combative” debate need not entail any negative connotation of hostility or interpersonal trouble. Point/counterpoint is just a standard, default, acceptable mode of discussion. So ideally, when you see people talking like that, as long as things are reasonably civil then you don’t feel a need to worry about it. It’s a problem that some people don’t see “combative” discussions in this way, but I don’t think there is any better solution in the long run. If you try to evolve norms to avoid the uncertainty and negative perceptions then you run along a treadmill—like the story with politically correct terminology. It’s okay to have a combative structure as long as you stick within the mainstream window of professional and academic discourse, and I think EA is mostly fine at that.
Whether a discussion proceeds as collaborative or combative depends on how the participants interpret what the other parties say. This is all heavily contextual, but as with many things involving conversational implicature, you can often spend some effort to clarify your implicature.
The internet is notoriously bad for conveying the unconscious signals that we usually use to pick up on implicature, and I think this is one of the reasons that internet discussions often turn hostile and combative.
So it’s worth putting in more signals of your intent into the text itself, since that’s all you have.
The right approach is to only look at actual points being made, and not try to infer implications in the first place.
When someone reacts to an implication, the appropriate response is to say “but I/they didn’t say anything about that,” ignore their complaints and move on.
You only have control over your own actions: you can’t control whether your interlocutor over-interprets you or not.
Your “right approach”, which is about how to behave as a listener, is compatible with Michael_PJ’s, which is about how to behave as a speaker: I don’t see why we can’t do both.
But I can control whether I am priming people to get accustomed to over-interpreting.
That sounds potentially important. Could you give an example of a failure mode?
Because my approach is not merely about how to behave as a listener. It’s about speaking without throwing in unnecessary disclaimers.
Consider how my question “Could you give an example...?” would read if I hadn’t preceded it with the following signal of collaborativeness: “That sounds potentially important.” At least to me (YMMV), I would be something like 15% less likely to feel defensive in the case where I precede it with such a signal, instead of leaping straight into the question—which I would be likely (on a System 1-y day) to read as “Oh yeah? Give me ONE example.” The same applies to the phrase “At least to me (YMMV)”: I’m chucking in a signal that I’m willing to listen to your point of view.
Those are examples of disclaimers. I argue these kinds of signals are helpful for promoting a productive atmosphere; do they fall into the category you’re calling “unnecessary disclaimers”? Or is it only something more overt that you’d find counterproductive?
I take the point that different people have different needs with regards to this concern. I hope we can both steer clear of typical-minding everyone else. I think I might be particularly oversensitive to anything resembling conflict, and you are over on the other side of the bell curve in that respect.
That sounds potentially important. Could you give an example of a failure mode?
The failure mode where people over-interpret things that other people say, and then come up with wrong interpretations.
I argue these kinds of signals are helpful for promoting a productive atmosphere; do they fall into the category you’re calling “unnecessary disclaimers”?
Well you should probably signal however friendly you are actually feeling, but I’m not really talking about showing how friendly you are, I’m talking about going out of your way to say “of course I don’t mean X” and so on.
I’m not really talking about showing how friendly you are
https://www.overcomingbias.com/2018/05/skip-value-signals.html
It looks like we were talking at cross purposes. I was picking up on the admittedly months-old conversation about “signalling collaborativeness” and [anti-]”combaticism”, which is a separate conversation to the one on value signals. (Value signals are probably a means of signalling collaborativeness though.)
you should probably signal however friendly you are actually feeling
I think politeness serves a useful function (within moderation, of course). ‘Forcing’ people to behave more friendly than they feel saves time and energy.
I think EA has a problem with undervaluing social skills such as basic friendliness. If a community such as EA wants to keep people coming back and contributing their insights, the personal benefits of taking part need to outweigh the personal costs.
I think EA has a problem with undervaluing social skills such as basic friendliness. If a community such as EA wants to keep people coming back and contributing their insights, the personal benefits of taking part need to outweigh the personal costs.
Not if people aren’t attracted to such friendliness. Lots of successful social movements and communities are less friendly than EA.
Can you say what you mean by ‘formal markers’? I’ve never heard this term before.
Sorry, that was me being unclear! The situation I’m envisaging is:
We want more X.
We can’t detect X directly, so we’ll pick some marker for X that looks like X (that’s what I was going at with “formal”, “relating to the form of”), and then aim for that.
Oops our markers don’t capture X, or even exclude some important bits of X.
I like Michael’s distinction between the style and core of an argument. I’m editing this paragraph to clarify the way in which I’m using a few words. When I talk about whether an argument is actually combative or collaborative, I mean to indicate whether it is more effective at goal-oriented problem-solving or at achieving political ends. By politics, I mean something like “social maneuvers taken to redistribute credit, affirmation, etc. in a way that is expected to yield selfish benefit”. For example, questioning the validity of sources would be combative if the basic points of an argument held regardless of the validity of those sources.
Claims like “EA would attract many additional high quality people if women were respected” or “social justice policing would discourage many good people from joining EA” are, while true, basically all combative, and the framing of effectiveness is just helping people self-deceive into thinking they’re motivated by impact or truth. They’re using a collaborative style (the style of caring about impact/truth) to do a combative thing (politics, in the wide definition of that word).
Ultimately, I can spin the observation that these things are combative into a stylistically collaborative but actually combative argument for my own agenda, so everything I’m saying is suspect. To illustrate: the EA phrase “politics is the mindkiller” is typically combatively used in this way, and I have the ability to do something similar here. “Politics is the mindkiller” is the mindkiller, but recognizing this won’t solve the problem, in the same way recognizing politics is the “mindkiller” doesn’t.
People can smell this, and they’d be right to distrust your movement’s ability to think clearly about impact, if you’re using claims of impact and clearer thinking to justify your own politics. People who are bright enough to figure this out are typically the ones I’d want to be working with.
Yeah, you all have a problem with how you treat women and other minority groups. Kelly did a lot of work in order to point out a real phenomenon, and I don’t see anyone taking her very seriously. You let people who want to disparage women get away with doing so by using a collaborative “impact and truth” discussion style to achieve combative, political aims. That’s just the way the social balance of power lies in EA. People would use “impact and inclusivity” as a collaborative style to achieve political aims if the balance of power were flipped. Plausibly there’s an intermediate sweet spot where this happens less overall, though shifting the balance of power to such spots is never a complete solution. I suspect a better approach would be to get rid of the politics first; this will make it easier to make progress on inclusivity.
The norm of letting people stylize politics with talk of impact and truth is deeply ingrained in EA. It’s best to work outside the social edifice of EA, if you want to think clearly about impact and truth. Which feels like a shame, but isn’t too bad if you take the view that good people will eventually be drawn to you if you’re doing excellent work. That was GiveWell’s strategy, and it worked.
“Kelly did a lot of work in order to point out a real phenomenon, and I don’t see anyone taking her very seriously.”—Kelly put in a lot of work, but there were a lot of issues with the original post. I think this was inevitable to an extent: unless you’re already a policy expert, producing high quality work really needs a group of people or multiple feedback cycles. I think that it is especially important to maintain high standards of evidence with regard to this issue, because increasing political polarisation means that both sides of the spectrum are dropping their own standards.
I don’t see many people who want to figure out how much of a problem there is, and then apply e.g. utilitarianism to decide what to do about that. That would count as acting seriously.