You’d expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn’t lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section worry that some people might feel alienated and unwelcome due to Robin’s presence as a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.
But isn’t it basically impossible to build an intellectually diverse community out of people who are unwilling to be associated with people they find offensive or substantially disagree with? It seems really clear that if Speakers B and C avoid talking to you only because you associated with Speaker A, then they are following a strategy of generally not engaging with parties that espouse ideas they find offensive, which makes it really hard to create any high level of diversity out of people who follow that strategy (since they will either conform or splinter).
That is why it’s so important not to give in to those people’s demands: building a space where lots of interesting ideas are considered is incompatible with having lots of people who stop engaging with you whenever you believe something they don’t like. I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.
Obviously this is oversimplified, but I think the general gist of the argument carries a lot of weight.
> I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.
I’d be curious how many people you think are not willing to “tolerate real intellectual diversity”. I’m not sure if you are saying
“Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it’s worth the cost,” or
“Anyone who is upset by intellectual diversity isn’t someone we want to attract anyway, so losing them isn’t a real cost.”
(Presumably you are saying something between these two points, but I’m not sure where.)
No, what I am saying is that unless you want to also enforce conformity, you cannot have a large community of people with different viewpoints who also all believe that you shouldn’t associate with people they think are wrong. So the real choice is not between “having all the people who think you shouldn’t associate with people who think they are wrong” and “having all the weird intellectually independent people”; it is instead between “having an intellectually uniform and conformist slice of the people who don’t want to be associated with others they disagree with” and “having a quite intellectually diverse crowd of people who tolerate dissenting opinions”, with the second possibly being substantially larger, though generally I don’t think size is the relevant constraint to look at here.
I think you’re unintentionally dodging both Aaron’s and Ben’s points here, by focusing on the generic idea of intellectual diversity and ignoring the specifics of this case. It simply isn’t the case that disagreeing about *anything* can get you no-platformed/cancelled/whatever. Nobody seeks 100% agreement with every speaker at an event; for one thing that sounds like a very dull event to attend! But there are specific areas people are particularly sensitive to, this is one of them, and Aaron gave a stylised example of the kind of person we can lose here immediately after the section you quoted. It really doesn’t sound like what you’re talking about.
> A German survivor of sexual abuse is interested in EA Munich’s events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on “gentle silent rape” and find it viscerally unpleasant. They’ve seen other discussion spaces where ideas like Hanson’s were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.
Like Ben, I understand you as either saying that this person is sufficiently uncommon that their loss is worth the more-valuable conversation, or that we don’t care about someone who would distance themselves from EA for this reason anyway (it’s not an actual ‘loss’). And I’m not sure which it is or (if the first) what percentages you would give.
The thing that I am saying is that in order to make space for someone who tries to enforce such norms, we would have to kick many other people out of the community, and stop many others from joining. It is totally fine for people to not attend events that happen to hit on a topic they are sensitive to, but for someone to completely disengage from a community, and avoid talking to anyone in it, because a speaker at some event held opinions they were sensitive to (opinions that weren’t even the topic of the announced talk), is obviously going to exert substantial pressure on what kind of discourse is possible with them.
This doesn’t seem to fit nicely into the dichotomy you and Ben are proposing here, which just has two options:
1. They are uncommon
2. They are not valuable
I am proposing a third option which is:
3. They are common and potentially valuable on their own, but they also impose costs on others that outweigh the benefits of their participation, and that make it hard to build an intellectually diverse community out of people like that. And it’s really hard to integrate them into a discourse that might come to unintuitive conclusions if they systematically avoid engaging with any individual who has, at some point in their public history, expressed ideas they are particularly sensitive to.
It seems to me that the right strategy, if you are triggered by specific topics, is simply to avoid engaging with those topics (if you really have no way of overcoming that reaction, or if doing so is expensive). But it seems very rarely the right choice to avoid anyone who has ever said anything public about the topic that triggers you! It seems obvious how that makes it hard for you to be part of an intellectually diverse community.
[EDIT: As Oli’s next response notes, I’m misinterpreting him here. His claim is that the movement would be overall larger in a world where we lose this group but correspondingly pick up others (like Robin, I assume), or at least that the direction of the effect on movement size is not obvious.]
***
Thanks for the response. Contrary to your claim that you are proposing a third option, I think your (3) cleanly falls under Ben’s and my first option, since it’s just a non-numeric write-up of what Ben said:
> Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it’s worth the cost
I assume you would give different percentages, like 30% and 2x or something, but the structure of your (3) appears identical.
***
At that point my disagreement with you on this specific case becomes pretty factual; the number of sexual abuse survivors is large, my expected percentage of them that don’t want to engage with Robin Hanson is high, the number of people in the community with on-the-record statements or behaviour that are comparably or more unpleasant to those people is small, and so I’m generally willing to distance from the latter in order to be open to the former. That’s from a purely cold-blooded ‘maximise community output’ perspective, never mind the human element.
Other than that, I have a number of disagreements with things you wrote, and for brevity I’m not going to go through them all; you may assume by default that everything you think is obvious I do not think is obvious. But I think the crux of the disagreement is here:
> it seems very rarely the right choice to avoid anyone who ever has said anything public about the topic that is triggering you
I disagree with the non-hyperbolic version of this, and think it significantly underestimates the extent to which someone repeatedly saying or doing public things that you find odious is a predictor of them saying or doing unpleasant things to you in person, in a fairly straightforward ‘leopards don’t change their spots’ way.
I can’t speak to the sexual abuse case directly, but if someone has a long history of making overtly racist statements I’m not likely to attend a small-group event that I know they will attend, because I put high probability that they will act in an overtly racist way towards me and I really can’t be bothered to deal with that. I’m definitely not bringing my children to that event. It’s not a matter of being ‘triggered’ per se, I just have better things to do with my evening than cutting some obnoxious racist down to size. But even then, I’m very privileged in a number of ways and so very comfortable defending my corner and arguing back if attacked; not everybody has (or should have) the ability and/or patience to do that.
There’s also a large second-order effect that communities which tolerate such behaviour are much more likely to contain other individuals who hold those views and merely haven’t put them in writing on the internet, which increases the probability of such an experience considerably. Avoidance of such places is the right default policy here, at an individual level at least.
No. How does my (3) match up to that option? The thing I am saying is not that we will lose 95% of the people; the thing I am saying is that we are going to lose a large fraction of people either way, and the world where you have tons of people who follow the strategy of distancing themselves from anyone who says things they don’t like is a world where you both won’t have a lot of people and will have tons of polarization and internal conflict.
How is your summary at all compatible with what I said, given that I explicitly said:
> with the second (the one where we select on tolerance) possibly actually being substantially larger
That by necessity means that I expect the strategy you are proposing to not result in a larger community, at least in the long run. We can have a separate conversation about the exact balance of tradeoffs here, but please recognize that I am not saying the thing you are summarizing me as saying.
I am specifically challenging the assumption that this is a tradeoff of movement size, using some really straightforward logic: “if you have lots of people who have a propensity to distance themselves from others, they will distance themselves and things will splinter apart”. You might doubt that such a general tendency exists, or you might doubt that the inference here is valid and that there are ways to keep such a community of people together either way, but in either case, please don’t claim that I am saying something I am pretty clearly not saying.
Thank you for explicitly saying that you think your proposed approach would lead to a larger movement size in the long run, I had missed that. Your actual self-quote is an extremely weak version of this, since ‘this might possibly actually happen’ is not the same as explicitly saying ‘I think this will happen’. The latter certainly does not follow from the former ‘by necessity’.
Still, I could have reasonably inferred that you think the latter based on the rest of your commentary, and should at least have asked if that is in fact what you think, so I apologise for that and will edit my previous post to reflect the same.
That all said, I believe my previous post remains an adequate summary of why I disagree with you on the object level question.
> Your actual self-quote is an extremely weak version of this, since ‘this might possibly actually happen’ is not the same as explicitly saying ‘I think this will happen’. The latter certainly does not follow from the former ‘by necessity’.
Yeah, sorry, I do think the “by necessity” was too strong.