Responses to Manifest are not about Manifest
This is obvious in one way, but I think forgotten in a lot of the details about these arguments: People do not actually care very much about whether Manifest invited Hanania, they care about the broader trend.
And what I mean by that is specifically that the group that argues that people like Hanania should not be invited to events like Manifest are scared of things like:
They care about whether minorities are being excluded and made unwelcome in EA spaces.
They care about an identity they view as very important being connected to racists.
More broadly, they are ultimately scared of the world returning to the sort of racism that led to the Holocaust and to segregation, and they are scared that if they do not act now to stop this, they will be part of maintaining the current system of discrimination and racial injustice.
They feel like they don’t belong in a place where people like Hanania are accepted.
I apologize if I did not characterize the fears correctly; I am part of the other group, and my model of what motivates the people I disagree with is almost always going to be worse than my model of what motivates me. I am scared of things like:
Making a policy that people like Hanania should never be invited to speak is pushing society in a direction that leads to things like Maoist struggle sessions, McCarthyism (I think we are currently at the level of badness that McCarthyism represented), and at an actual extreme, the thought police from 1984.
The norms cancel culture embraces functionally involve powerful groups being allowed to silence those they dislike. This is still the case no matter what the details of the arguments for the positions are.
Assuming a priori that we know that a certain person’s policy arguments or causal model is false leads us to have stupider opinions on average.
I don’t belong in a place where adults are not allowed to read whichever arguments they are interested in about controversial topics, and then form their own opinions, even if those opinions disagree with social orthodoxy.
The biggest point I want to make is that none of these things are arguments against each other.
Cancel culture norms might be creating a tool for power and, at the same time, making minorities more welcome.
This might push society to be more like a McCarthyist or Maoist place where people are punished for thinking about the wrong questions and having the wrong friends, and at the same time it might prevent backsliding on racial justice, and lead to improvements in equality between racial groups.
Perhaps McCarthy actually made the US meaningfully safer from communist takeover. Most of the arguments I recall from university that McCarthy was terrible seemed to just take as a given that there was no real risk of a communist takeover, but even if the odds of that were low, making those odds even lower may have been worth doing things that had costs elsewhere (unless, of course, you think that a communist revolution would have been a good thing).
If we are facing a situation where the policy favored by side A leads to costs that side B is very conscious of, and vice versa, then it is likely that if, instead of arguing with each other, we attempted to build ideas that addressed each other’s core concerns, we might come up with ideas that let each side get more of what they want at a smaller cost to what the other side wants.
The second point I’d like to make is that arguing passionately, with better and better thought experiments that try to trigger the intuitions underlying your position, while completely ignoring the things that actually led the people you are arguing with to the positions they hold, is unlikely to be productive.
Engage with their actual fears if you want to convince, even though it is very hard to think yourself into a mindset that takes [ridiculous thing your conversational opponent is worried about] seriously.
I think you didn’t. My fear isn’t, first and foremost, about some theoretical future backsliding, creating safe spaces, or protecting reputations (although given the TESCREAL discourse, I think these are issues). My fear is:
Multiple people at Manifest witnessed and/or had racist encounters.
Racism has been, and continues to be, very insidious and very harmful.
EA is meant to be a force for good in the world; even more than that, EA aims to benefit others as much as possible.
So the bar for EA needs to be a lot higher than “only some of our ‘special guests’ say racist stuff on a regular basis” and “not everyone experienced racism at our event.”
I am bolstered by the fact that Manifest is not Rationalism and Rationalism is not EA. But I am frustrated that articulating the above position is seen as even remotely in the realm of “pushing society in a direction that leads to things like… the thought police from 1984.” This strikes me as uncharitable pearl-clutching, given that organizers have an easy, non-speech-infringing way of reducing the likelihood that their events elicit and incite racism: not listing Hanania, who wasn’t even a speaker, as a special guest on their website, while still allowing him to attend if he so chooses.
I think if you had invited a person who is known to get drunk at events and go up to people and comment negatively on their least flattering physical feature (e.g. “your pimples are gross”), it would not be a worry if that person was not invited. This is not about politics but about inappropriate behaviour.
Yeah, to be clear, I think inappropriate interpersonal behavior can absolutely warrant banning people from attending events, and this whole situation has given me more respect for how CEA strikes this balance with respect to EAGs.
I was mainly responding to the point that “we might come up with ideas that let each side get more of what they want at a smaller cost to what the other side wants,” by suggesting that, at a minimum, the organizers could’ve done things that would’ve involved ~no costs.
I believe this is not a valid analogy. If you uninvite someone from events for making rude comments about other attendees’ appearances, that only applies to that one rude person, or to people who behave rudely. If you disinvite someone for holding political views you’re uncomfortable with, that has a chilling effect on all uncommon political views, and is harmful to everyone’s epistemics.
The inappropriate behavior here is being a person who holds particular political beliefs about the world and expresses them.
It is definitely also about politics.
Fair point. Where to draw the line between what is and isn’t politics isn’t clear cut, or as Thomas Mann put it: “Everything is politics.” Perhaps pimples are less political than comments that relate to e.g. religion or something else “structural”. I guess where I feel like there is something in my comment is if one then concludes something like “it is ok to offend someone as long as the offence ties into power structures”. That would theoretically mean it is ok for someone to comment on someone with a lower income about e.g. their cheap clothing (or pick your physical proxy for class). That does not seem right, so I still think that people acting offensively regarding race should be encouraged to change their behavior to be less offensive. And if there is a need to discuss something offensive (e.g. in nuclear weapons discussions, the horror that followed the bombing of Hiroshima), maybe make this clear to participants in advance so they can avoid the event, or the relevant part of the event, if that is a challenging topic for them.
So I think it is totally fine for a group to ban particular controversial topics during meetings. What I think causes the problems I am worried about is banning people who have known controversial opinions that are expressed elsewhere.
If a specific person is unwilling to refrain from talking about their favorite subject at the meeting, I am then fine with banning them for that specific behavior (so long as it is done with a reasonable process, involving warnings, and requiring people expressing the opposite point of view to also not start the arguments).
I am a bit more unsure about this, but I also think it cuts the other way: if someone at an event loudly went around advocating for forcefully taking rich people’s money (e.g. by nationalising their wealth in an unprecedented and somewhat aggressive way) to fund egalitarian project X, I think one could also argue that such people make others uncomfortable enough that their attendance is undesirable.
Eh, and I just think that should straightforwardly be allowed as on topic.
I mean part of me thinks we should do that, at least with the tax revenues already being collected from rich people, like normal Americans.
If it’s a terrible idea, it would be better within my model for the conversation to happen.
I’m probably pretty close to you but my fears are different:
We decide that anyone who the median EA thinks might be racist cannot be celebrated in any way by a related conference.
Topics of race and genetics are not merely rarely discussed and heavily taxed (as currently) but banned.
Like many other discussions, these very hard-to-discuss topics never get easier to discuss or come to consensus on. Other such discussions include Owen Cotton-Barratt, Bostrom, whether sexism in EA is abnormally bad, whether competence and representation trade off, Leverage and CEA drama, icky statusy things within EA, whether Will and Nick made grave errors, why there hasn’t been an FTX investigation, and whether global health is actually a waste of money compared to other options.
Somewhere in this ball of topics is something that would be really important to discuss well. Or somehow learning to discuss it would give us needed competence around coordination.
We miss out on better decisionmaking because of this.
Like I’d be okay if we picked 3 things you couldn’t discuss. But then when we added a new one, we took one away. It’s the feeling that the list is growing that I dislike.
(Merely because I can name a big cluster of costly topics doesn’t mean that I could cheaply write an article, or that the forum would discuss them well if it came up.)
A more general formulation of this is the idea of ‘1-way vs 2-way doors’ sometimes referenced by Jeff Bezos. Here from some article (not by Bezos):
To me a big part of the disagreement is which parts of this are 1-way or 2-way decisions.
I guess that to some, having racist speakers anywhere near EA feels like a 1-way decision. Once it starts, it won’t unhappen and then there will just be racists around sometimes.
To me, the repeated action of banning topics, often without full agreements on the facts involved (we don’t even agree on Hanania, let alone other speakers and attendees) is a 1-way decision. Once we start doing it, it becomes a tool that we can use that is hard to un-use. I want to be more cautious before we start being like “this person is unacceptable on grounds we don’t all agree on”.
Here then the question is how parties can both have these be 2-way doors. Perhaps we could agree to revisit the issue of racism in EA in a year and see where we think we are, with the power to renegotiate. If I were confident of that, I would be less worried about some more pushy moves now.
I strongly support local bans on particular topics, so long as they are done in a way that doesn’t involve endorsing one side and then refusing to let people who disagree talk.
I am at least pretty relaxed about such bans as long as they are bounded. CEA can ban what they like from EAGs, and I feel less bad about that than if EAs are attempting to ban things generally.
And I would be more relaxed on those things being banned if it were credible that they would be unbanned or that the amount of banned topics overall would be static.
This feels somewhat uncharitable.
Huh—this both feels like something I’m sympathetic to worrying about and matches what I’ve seen people say about similar issues around the internet. Why does it seem uncharitable to you?
I think given that his own example uses McCarthyism, while it might be incorrect, he seems at least not to be attempting hyperbole; both examples end up in outcomes many people consider at least disastrous.
People can and should read whoever and whatever they want! But who a conference chooses to platform/invite reflects on the values of the conference organizers, and any funders and communities adjacent to that conference.
Ultimately, I think that almost all of us would agree that it would be bad for a group we’re associated with to platform/invite open Nazis. I.e. almost no one is an absolutist on this issue. If you agree, then you’re not in principle opposed to excluding people based on the content of their beliefs, so the question just becomes: where do you draw the line? (This is not a claim that anyone at Manifest actually qualifies as an open Nazi, more just a reductio to illustrate the point.)
Answering this question requires looking at the actual specifics: what views do people hold? Were those views legible to the event organizers? I fear that a lot of the discourse is getting bogged down in euphemism, abstraction, and appeals to “truth-seeking,” when the debate is actually: what kind of people and worldviews do we give status to, and what effects does that have on related communities?
If you think that EA adjacent orgs/venues should platform open Nazis, as long as they use similar jargon, then I simply disagree with you, but at least you’re being consistent.
Part 1
“I fear that a lot of the discourse is getting bogged down in euphemism, abstraction, and appeals to “truth-seeking,” when the debate is actually: what kind of people and worldviews do we give status to and what effects does that have on related communities.”
This is precisely the sort of attitude which I see as fundamentally opposed to my own view that truth-seeking actually happens, and that we should be awarding status to people and worldviews that are better at getting us closer to the truth, according to our best judgement.
It is also, I think, a very clear example of what I was talking about in my original post, where someone arguing for one side ignores the fears and actual arguments of the other side when expressing their position. You put ‘truth-seeking’ in quotation marks because it has nothing to do with what you are claiming yourself to care about. You care about status shifts amongst communities, and then you try to say I don’t actually care about truth-seeking: not by arguing that I don’t, because that would be obviously ridiculous, but by insinuating, through the way you wrote this sentence, that I actually want to make racists higher status and more acceptable.
Obviously this does nothing to convince me, whatever impact it may have on the general audience. Which, based on the four agree votes and three disagree votes that I see right now, seems to be getting people to think what they already thought about the issue.
Part 2
I suppose through trying to think through how I’d reply to your underlying fear, I found that I am not actually really sure what the bad thing that you think will happen if an open Nazi is platformed by an EA adjacent organization/venue is.
To give context to my confusion, I was imagining a thought experiment where the main platform for sharing information about AI safety topics at a professional level is supported by an AI org. Further, in this thought experiment there is a brilliant AI safety researcher who happens to also be openly a Nazi; in fact he went into alignment research because he thought that untrammelled AI capabilities research was being driven by Jewish scientists, and he wanted to stop them from killing everyone. If this man comes up with an important alignment advance, one that will actually reduce the odds of human extinction meaningfully, it seems to me transparently obvious that his alignment research should be platformed by EA-adjacent organizations.
I’m confident that you will have something to say about why this is a bad thought experiment that you disagree with, but I’m not quite sure what you would say while also taking the idea seriously.
The idea that important researchers who actually make useful advances in one area might also believe stupid and terrible things in other fields is something that has happened far too often for you to say that the possibility should be ignored.
Perhaps the policy I’m advocating, of simply looking at the value of the paper in its field and ignoring everything else, would impose costs (from outside observers attacking the organization doing this) that are too high to justify publishing the man with horrible beliefs, since we can’t be certain ahead of time that his advance actually is important.
But I’d say in this case the outside observers are acting to damage the future of mankind, and should be viewed as enemies, not as reasonable people.
Of course their own policy probably also makes sense in act utilitarian terms.
So maybe you just are saying that a blanket policy of this sort, without ever looking at the specifics of the case, is the best act utilitarian policy, and should not be understood as saying there are not cases where your heuristic fails catastrophically.
But I feel as though the discussion I just engaged in is far too bloodless to capture what you actually think is bad about publishing a scientist who made an advance that will make the world better if it is published, and who is also an open Nazi.
Anyways the general possibility that open Nazis might be right about something very important that is relevant to us is sufficient to explain why I would not endorse a blanket ban of the sort you are describing.
(On the dog walk, I realized what I’d forgotten: the obvious answer is that doing this would raise the status of Nazis, which would actually be bad.)
Really? I imagine the first negative consequence is that many Jews and other minorities would withdraw from the event, because a “safe space” for non-judgementally listening to Nazis is a very dangerous space for Jewish people. I also imagine a number of people who are not Jews or directly threatened by the Nazis would also withdraw, particularly if they were doing somewhat adjacent work. If I was a researcher into, say transhumanism, trying to argue that my controversial innovations weren’t about dangerous experimentation or engineering certain races out of existence, the last person I’d want to see on the speaker list next to me would be a disciple of Josef Mengele. Or I might just not like hanging out with neo-Nazis socially.
Of course, if you see the participation of Jews and people that find Nazis repugnant to be of very low value compared with the participation of people who are Nazis or enthusiastic about hearing from them, this might still not be a net bad, but I strongly suspect that it isn’t the case. When the world wanted to make use of significant scientific advances developed by people who had actually been members of the Nazi party, they generally did it by actually using the research, not by naming them as “special guests” at events to attract likeminded people.
In this case, we’re talking about the selection of a conference lineup where organisers promoted the status of an angry culture warrior notable mainly for expressing his enthusiasm for suppressing minorities in an intemperate manner, and apparently didn’t select anyone particularly associated with opposing those politics, never mind listen to their previously-expressed fears and arguments against granting Hanania special status. I think that conference organizers are well within their rights to promote people whose views they like and not platform people whose views they don’t, but they deserve to be judged by the types of echo chambers they create. On the other hand, it’s unclear why you would associate “truth seeking” with platforming Hanania and fringe geneticists while apparently not extending the same platform to even fairly uncontroversial mainstream representatives of the political left or academic genetics.
I mean, I am pretty sure you don’t have a terribly clear idea of what Hanania actually talks about.
So I am in fact someone who actually reads Hanania regularly, and I’ve been paying attention to the posts I read from him while this conversation was going on to see if what he was saying in it actually matches the way he is described as being in the anti platforming Hanania posts here.
And it simply does not. He is not talking about minorities at all most of the time. And when he does, he is usually actually talking about the way the politics of the far right groups he dislikes think about them, and not about the minorities themselves.
I strongly suspect that an underappreciated difference between the organizers and their critics is that the organizers who invited him actually read Hanania, and are thus judging him on their experience of his work, ie on 99% of what he writes. Everyone else who does not read him is judging him on either things he has disavowed from when he was in his early twenties, or on the worst things he’s said lately, usually a bit divorced from their actual context.
“Of course, if you see the participation of Jews and people that find Nazis repugnant to be of very low value compared with the participation of people who are Nazis or enthusiastic about hearing from them, this might still not be a net bad, but I strongly suspect that it isn’t the case.”
Anyways [insert strong insult here questioning your moral character]. My wife is Jewish. My daughter is Jewish. My daughter’s great grandparents had siblings who died in the Holocaust. [insert strong insult questioning your moral character here].
I evidently don’t read Hanania as regularly as you do. On the other hand, it hasn’t escaped my notice that the first of his two books, called “The Origins of Woke”, is an extended argument in favour of the abolition of civil rights laws, citing “wokeness” as the problem they caused that must be eliminated. Or that even many people who enjoy his long-form reads agree that his Tweets—how he promotes himself to a wider audience—are frequently obnoxious and culture-war-y.
As for his Substack, that’s been widely discussed elsewhere, and when the defence of an article entitled “Why Do I Hate Pronouns More Than Genocide” containing lines like “I’ve hated wokeness so much, and so consistently over such a long period of my life, that I’ve devoted a large amount of time and energy to reading up on its history and legal underpinnings and thinking about how to destroy it” is that he acknowledges this preoccupation might not be entirely rational and genocide might actually be worse, I think we can safely say he belongs in the culture warrior category.[1]
So whilst I agree that not everything Hanania has ever written is concerned with culture wars, I don’t think it’s at all accurate to suggest that 99% of what Hanania writes is unconnected with culture wars or to imply he’s actually some truth-seeking intellectual who’s said a few things that are taken out of context. On the contrary, “hating wokeness”—to use his own terms—seems to be central to his public persona, and certainly central to why his name on the poster makes some people who would actually enjoy an event about prediction markets less likely to attend.
Of course, he also writes relatively nonpartisan stuff about prediction markets which might be interesting to the organizers but so do lots of people who don’t blog or tweet about their hatred of gender expression or the alleged innate intellectual inferiority of black people. So I’m not sure there’s any essential truth being lost by not putting Hanania’s name on the poster, particularly as there were numerous other relatively or entirely uncontroversial figures giving actual talks on prediction markets from a pro-market, right-leaning perspective there already.
If you’re really worried that people might not discover certain truths or be deterred from speaking them by the Manifest lineup selection criteria, it’s really not Hanania’s quadrant of the political spectrum that’s lacking representation. I don’t actually think people’s willingness to seek truth is governed by their chances of headlining Manifest or that the organizers have any obligation to provide a platform to anyone if they don’t want to, but the flip side of that is in a world with free speech and free association, the quality of the lineup and compatibility of it with a movement that seeks to do the most good is open to debate too. “Too boring” and “not particularly positive about prediction markets” seem like perfectly good reasons not to promote people on their poster, but so does “extremely offensive towards numerous people who might otherwise enjoy our event” .
In that case, I find it all the more extraordinary that you wrote the sentence “I am not actually really sure what the bad thing that you think will happen if an open Nazi is platformed by an EA adjacent organization/venue is”.
If you have trouble believing that any harm could come from promoting an actual open Nazi at a conference coming from me, perhaps you will find some of your family members more convincing.[2] Even if you have strong safeguards in place to stop the open Nazi or the people they attract talking about specifically Nazi stuff and don’t care at all about the external reputation of the organization or wider movement, it seems almost certain to deter a lot of other people from participating, which strikes me as a very bad thing except in the unlikely event that none of their contributions are as valuable.
[I’m not really sure why you would want to insert a strong insult questioning my moral character. For the avoidance of doubt I’m not questioning the moral character of your posts earlier in this subthread, I’m questioning the judgement][3]
Similarly, his “Stop Talking About Race and IQ” article, sometimes cited as an indication that his white supremacist days are long behind him, starts off not by questioning whether the theory that black people are innately intellectually inferior might not be settled science, but by expressing concern that if its proponents succeed in converting leftists to the importance of IQ gaps, those leftists might actually take action to try to close them!
Your family probably has as wide a range of views on politics as anyone else’s family, but I’d imagine at least some members don’t struggle to see any downsides to putting Nazis on pedestals...
and FWIW I’m not among the people who downvoted your post either
I appreciate the tone and my read of the aim of this—feels like it’s trying to clarify what’s at stake and cause more understanding and better discussion.
Could you help us understand some of your fears better? Although various positions have been expressed, the common core seems to be ~ we object to extending special-guest and/or speaker status to certain individuals at an EA-adjacent conference. I’m struggling to understand how that assertion strongly implies stuff like “pushing society in a direction that leads to” McCarthyism, embracing “cancel culture” norms more generally, or not “allow[ing adults] to read whichever arguments they are interested in about controversial topics . . . .” For example, I don’t recall seeing anyone here say that Hanania et al. should get canceled by whoever is hosting their websites, that they should lose their jobs, etc. (although I don’t recall every single comment).
To me, cancel culture is more “I find this offensive, and I desire and aim to shut down the person’s ability to communicate that offensive message,” while the response here has been more “I find this offensive, and do not want it associated with me or my community. While this may have the incidental effect of making it somewhat harder for the speaker to convey his message or would-be listeners to hear it, that is not the aim of the objection.” I could see a few possible cruxes here: one could think there is no practical difference between these positions, or one could think that the objectors are actually in the first camp. Do either of these potential cruxes ring true to you?
So I certainly pattern-match the things being said in this discussion to the things said by people who want Substack to remove Hanania, who want people with his opinions who have a normal employer to lose their jobs, and who, after those people have lost their jobs, want the financial system to refuse to process payments to them from anyone who wants to help them survive, since after all it is important to stop people from funneling money to Nazis.
I can’t speak for everyone, but I think the crux is that I tend to think the objectors are actually in the first camp, and that they need to be fought on that basis. And so moving forward towards agreement would require creating trust that the objectors actually aren’t.
But I think there is also an important difference on the question of what it means to invite someone as a speaker: does it mean that you are endorsing in some sense what they say, or are you just saying that they are someone enough attendees will find interesting to make it worth giving them a speaking slot?
A culture in which we try to stop people from getting a chance to listen to people who they find interesting, because we dislike things they believe, seems to me to be the essence of the thing I think is bad. Giving someone an opportunity to speak is not endorsement in my head, and it is a very bad norm to treat it like it is.
This also, incidentally, is where the people running Manifest were coming from: They fundamentally don’t see inviting Hanania as endorsing his most controversial views, and they certainly don’t see it as endorsing the views he held in his twenties that he now loudly claims to reject.
The deplatforming side, meanwhile, seems to think that a culture where people who believe bad things are given platforms to speak, just because the people deciding who will speak think they are interesting, is terrible, because it implicitly endorses the bad things those people believe.
To give a different example, if I was running a major EA event, and I could get Emile Torres to speak at it, I definitely would, even though I think he is often arguing in bad faith, and even though I vehemently disagree with both much of his model of the world and the values he seems to espouse. I think enough people would find him interesting enough to be worth listening to, so it makes sense to ‘extend him special-guest/or speaker status’.
I would be sad to see Emile Torres offered a speaking slot at an EA conference as this would reward bad faith criticism. I wouldn’t join a social pressure campaign to cancel him—sometimes people will make decisions I consider unwise and I’ll make decisions that they consider unwise—but I would caution someone considering doing this that they were making an unwise decision by inviting someone who often acts in bad faith, and I would strongly recommend that they consider alternate names before resorting to Emile (I don’t think it would be hard to find equally interesting critics without the bad faith; his name just immediately springs to mind due to availability bias).
I suppose I don’t see listening to him as a reward to him, but as something I do or don’t do because it is good for me. The relevance of him saying things in bad faith is that it means you have to be more careful about trusting anything he says, and thus listening to him is unusually likely to leave you with more inaccurate beliefs than you started with.
I suppose, to explore the difference further, do you think it would be a bad idea to read something he wrote, or to subscribe to his Twitter (which I do)? Or is it specifically that you don’t want to invite him to talk?
And in the case of invitation, is it because you are worried that people will get bad beliefs from listening to him, or primarily because you dislike that it would seem like a positive thing for him?
I think we should use talk invitations to nudge people towards acting in good faith.
I think that’s going to make it significantly harder to make progress here. An assertion that people who have asserted X, and denied Y, actually believe Y implies that those people are misrepresenting their position for tactical advantage. Few people who actually believe X and not Y are going to be receptive to an expectation that they move toward W to prove they don’t believe Y. I get that there are in fact individuals in society on these issues who actually cloak their belief in Y by asserting only X, but it doesn’t follow that the particular X-believers in front of you are doing that.
This goes for both sides, by the way—I believe there are individuals in society who defend platforming speech I find to be vile because they like the contents of the speech rather than out of free-speech principles as they claim. There may (or may not) be such individuals on the Forum. But: it wouldn’t be fair or helpful for me to assume that any particular person was attempting to deceive me about the reasons for their support of Manifund’s decision.
Moreover, the proffered reasons for the various positions either have merit, or they lack merit, irrespective of the subjective motivations of the people offering those positions.
Yeah, that’s a critical crux here. I think there are at least a couple of axes going on here:
I think one difference is the extent to which we think of norms as something we rationally work out within this community, and the extent to which we think they exist apart from this community. It might well be true that waving a magic wand to get everyone to follow a no-inference norm would be ideal if we had that power. But it might also be true that we actually have ~zero influence on meanings that people outside the relevant communities ascribe to certain actions, and lack the ability to rewire deeply-engrained reactions many people have from being born and raised in broader society.
For instance, some (probably most!) people are going to feel unwelcomed at a conference at which people are presenting about how their ethnic group is less intelligent than others, and some will feel unwelcomed in the broader community and associated communities. Saying that people “shouldn’t” feel that way doesn’t address those harms.
There exists a wide range of potential meanings that can be ascribed to action in relation to a speaker. While “no meaning whatsoever” and “whole-hearted endorsement of all of the speaker’s most controversial ideas” are the poles, there is a continuum of potential meanings. For instance: “this person has serious ideas worth listening to.” I think both sides need to be careful to specify with more precision what meaning(s) they think are reasonable/unreasonable.
For instance, someone suggesting “endorsement” by a conference organizer should be clear on what exactly they think is being endorsed. My guess is that many bald references to endorsement actually refer to endorsement that the person has a serious idea worth listening to, rather than that the person is correct.
Likewise, my guess is that at least some people stating “no endorsement” positions may actually accept a very limited view of endorsement. For example, they might view it as worthy of criticism to platform (e.g.) someone who favored the death penalty for sex outside of heterosexual marriage, blasphemy, and other things the speaker considered immoral. (Sadly, these people actually exist.) In other words, we should not merely assume no-endorsement advocates are biting the bullet all the way unless they specifically endorse that position.
For some of us, the appropriate meaning to ascribe depends on context. For instance, many people would assign a much lower level of meaning (if any) to a payment processor than to [edit: Manifest, not Manifold], and a lower level to an entity like Manifest than to an issue-advocacy group or a political party. The broader context of the entity’s platforming decisions may also play a role—e.g., if an entity platforms a range of speakers on X issue, then certain meanings become logically incoherent or at least much less plausible.
The previous two points suggest to me that this is at least a little bit about Manifest. “What is the purpose of Manifest?” seems somewhat relevant to this discussion. If the purpose of Manifest is to bring speakers “that enough attendees will find interesting to make it worth giving them a speaking slot,” then that’s one thing. If it’s to have important conversations worth having, then that does imply something more about speaker selection to me. In a sense, this is judging platforming decisions by the standard the platform has set for itself.
I don’t assume that (e.g.) executives at most US TV networks endorse anything about the speakers they platform beyond being not-abhorrent and being profitable. On the other hand, there’s a flipside to that. I expect them to own up to their predominantly profit-seeking (as opposed to truth-seeking or socially valuable) mission, and not claim to be more than the bread-and-circuses delivery services they primarily are.
Finally, for most viewpoints, some degree of object-level assessment of the speech is going to be necessary. For a silly example, platforming speakers who believe the world is flat and carried around on a turtle isn’t consistent with a platform’s professed goal of hosting important conversations that matter. Neither is platforming speakers who make speeches about how terrible an entity the New England Patriots are (no matter how true this is!).
FYI: Emile Torres is using they/them pronouns. I think you should edit your comment to use their preferred pronouns.
For what it’s worth, I was one of the most anti-Hanania/Manifest people in the original big thread, and I don’t think I’m all that “cancel-y” overall. I’m opposed to people being fired from universities for edgy right-wing opinions on empirical matters, and I’m definitely opposed to them being cut off from all jobs. I do think people should not hire open neo-Nazis (or for that matter left-wingers who believe in genuinely deranged antisemitic conspiracy theories) for normal jobs, but I don’t think any of the Manifest speakers fell in that category. But I see a difference between the role of universities (finding out the truth no matter what by permitting very broad debate) and the role of a group like EA, which has a particular viewpoint and no obligation to invite in people who disagree with it.
For what it’s worth, calling for deplatforming people because you think they’re racist, on the basis of what they wrote when they were younger and uncharitable interpretations of a couple of tweets, feels pretty “cancel-y” to me.
I feel sad that you’re getting downvoted. Whether or not your position is correct (I personally disagree more than I agree), it seems to me that this is content which will be helpful for moving people towards mutual understanding of where the disagreements are.
I think it’s fine to have a narrower idea of what views should be platformed within your own community than in society at large. Different communities have different purposes, and different purposes will point to different sets of ideas and speakers to be platformed. That is as it should be. I want to suggest two other possible cruxes which do seem somewhat cruxy to me:
1. Deplatforming these particular speakers is not what the ideals of EA dictate. The core thing that EA is about is creating an intellectually open space to explore strange fringe ideas about how to make the world better, and these speakers fit that purpose.
2. Manifest is not an EA event. That is part of what attracted me to it. It belongs to the forecasting community, which is a distinct thing, even if the membership is overlapping. So when EAs try to deplatform speakers at Manifest, they are reaching out beyond their own community and trying to dictate what can be said in someone else’s community, which sure makes it look a lot more like your idea of cancel culture.
I would not agree with that. I view the core idea as actually making the world better (i.e., as conducting altruism effectively), and exploring ideas as an instrumental goal toward that end. I do not think focus on an idea that in my view has—at best—a very tenuous link to any plausible theory of doing good in the world is actually instrumental toward the core idea. Too much emphasis on free-expression ideology risks making freedom of expression an end in itself, similar to how scratching the ideological itches of traditional charity donors and executives became an end in itself. And while I think free expression is intrinsically valuable to human beings, I do not think it intrinsically valuable to EA in the same way.
The organizers advertised here; I think that makes it our business to criticize what they advertised where warranted.
I find the criticism “this wasn’t in your community” and the criticism “you’re trying to dictate to another community” to be somewhat at odds here. I, like most commenters here, have zero power in the forecasting community. Trying to “dictate” what people do in a community over which I have zero power sounds like a colossal waste of time. My lack of power also implies that my criticism would not cause any concrete injury to the forecasting community. To the extent that individual commenters do have some power or influence in the forecasting community, that’s a hint that they are in fact associated with that community to some extent.
I also don’t agree more generally that criticizing actions of another community is “dictating” anything to them. Under a broad definition where expressing disapproval of decisions relating to speech constitutes dictating, I think there are a number of communities to which the vast majority of EAs would like to “dictate” things!
To your first point, fair. I think the crux is just very object level assessments of the individual speakers and the ideas they hold, and I don’t want to go down that road here.
To your second point, your argument seems to imply that it is ok to exercise influence by calling people “racist” anywhere you can. That seems to imply that literally nothing would be “cancel culture” to you, which is not where you started a couple of comments ago.
Can I add one more fear: mischaracterising the scientific credibility of scientific racism / HBD?
Having these voices like Hanania & Razib Khan at Manifest (with no counterbalance) is going to make people think that there is more scientific support for the “race realist/ HBD” position than there actually is.
In actual fact, the opposite is true.
A response by evolutionary biologists to Nicholas Wade’s book: https://www.nytimes.com/2014/08/10/books/review/letters-a-troublesome-inheritance.html
The American Society of Human Genetics statement: https://www.cell.com/ajhg/pdf/S0002-9297(18)30363-X.pdf and the American Association of Biological Anthropologists statement: https://bioanth.org/about/position-statements/aapa-statement-race-and-racism-2019/
Further papers on race and IQ: https://r.jordan.im/download/racism/bird2021.pdf and https://onlinelibrary.wiley.com/doi/abs/10.1002/bies.202100204?
I appreciate the papers link, but the existence of discussions like this is why statements by official bodies concerned about reputation cannot be taken as strong evidence.
Or, more strongly: if the evidence cited for the socially required position in the official statement is fairly weak and hedged, the statement becomes actually weak counter-evidence (I’m not saying that’s the case; I haven’t yet read the statements).
Basically, threatening scholars with deplatforming for expressing the wrong beliefs damages the link between what scientific groups say and what the best processes for evaluating the evidence would tell us. This is an example of how speech control makes us collectively stupider.
Note that this is not an infinitely strong effect: if it were really clear from the evidence that HBD was true, I would not expect these statements, but I would expect them anywhere in the range from “HBD is definitely false” to “some form of HBD is the most likely explanation, but with strong counterarguments that can’t be dismissed easily.”
Executive summary: Debates about inviting controversial speakers to events reflect broader concerns about societal trends, and productive dialogue requires engaging with the underlying fears on both sides rather than arguing past each other.
Key points:
Those opposed to inviting controversial speakers fear exclusion of minorities, association with racism, and backsliding on racial justice.
Those in favor fear censorship, McCarthyism, and loss of intellectual freedom.
These concerns are not mutually exclusive—policies could have both positive and negative effects.
Productive dialogue requires addressing core concerns on both sides rather than arguing past each other.
Focusing solely on triggering intuitions supporting one’s position is unlikely to convince opponents.
A more effective approach is to engage seriously with opponents’ fears, even if they seem ridiculous.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.