Part 1
“I fear that a lot of the discourse is getting bogged down in euphemism, abstraction, and appeals to ‘truth-seeking,’ when the debate is actually: what kind of people and worldviews do we give status to and what effects does that have on related communities.”
This is precisely the sort of attitude I see as fundamentally opposed to my own view: that truth-seeking actually happens, and that we should give status to people and worldviews that are better at getting us closer to the truth, according to our best judgement.
I also think it is a very clear example of what I was talking about in my original post, where someone arguing for one side ignores the fears and actual arguments of the other side when expressing their position. You put ‘truth-seeking’ in quotation marks because it has nothing to do with what you yourself claim to care about. What you care about is status shifts among communities, and then you try to say that I don’t actually care about truth-seeking: not by arguing that I don’t, since that would be obviously ridiculous, but by insinuating, through the way you wrote that sentence, that I actually want to make racists higher status and more acceptable.
Obviously this does nothing to convince me, whatever impact it may have on the general audience. Judging by the four agree votes and three disagree votes I see right now, that impact is mostly to get people to keep thinking whatever they already thought about the issue.
Part 2
I suppose that, in trying to think through how I’d reply to your underlying fear, I found I am not actually sure what bad thing you think will happen if an open Nazi is platformed by an EA-adjacent organization or venue.
To give context to my confusion, I imagined a thought experiment in which the main platforms for sharing information about AI safety topics at a professional level were supported by an AI org. Further, in this thought experiment there is a brilliant AI safety researcher who also happens to be openly a Nazi; in fact, he went into alignment research because he thought that untrammelled AI capabilities development was being driven by Jewish scientists, and he wanted to stop them from killing everyone. If this man comes up with an important alignment advance that will meaningfully reduce the odds of human extinction, it seems to me transparently obvious that his alignment research should be platformed by EA-adjacent organizations.
I’m confident you will have something to say about why this is a bad thought experiment and why you disagree with it, but I’m not quite sure what you would say that also takes the idea seriously.
Important researchers who make useful advances in one area also believing stupid and terrible things in other fields is something that has happened far too often for you to say that the possibility should be ignored.
Perhaps the policy I’m advocating, of simply judging the paper by its value in its field and ignoring everything else, would impose costs that are too high to justify publishing the man with horrible beliefs: outside observers would attack the organization doing this, and we can’t be certain ahead of time that his advance actually is important.
But I’d say in this case the outside observers are acting to damage the future of mankind, and should be viewed as enemies, not as reasonable people.
Of course, their own policy probably also makes sense in act-utilitarian terms.
So maybe you are just saying that a blanket policy of this sort, applied without ever looking at the specifics of the case, is the best act-utilitarian policy, and that this should not be understood as a claim that there are no cases where your heuristic fails catastrophically.
But I feel as though the discussion I just engaged in is far too bloodless to capture what you actually think is bad about publishing a scientist who made an advance that will make the world better if it is published, and who is also an open Nazi.
Anyway, the general possibility that open Nazis might be right about something very important and relevant to us is sufficient to explain why I would not endorse a blanket ban of the sort you are describing.
(On the dog walk I realized what I’d forgotten: the obvious answer is that doing this would raise the status of Nazis, which would actually be bad.)
I mean, I am pretty sure you don’t have a terribly clear idea of what Hanania actually talks about.
So I am in fact someone who actually reads Hanania regularly, and while this conversation was going on I have been paying attention to his posts to see whether what he says in them actually matches the way he is described in the anti-platforming-Hanania posts here.
And it simply does not. Most of the time he is not talking about minorities at all, and when he does, he is usually talking about how the far-right groups he dislikes think about them, not about the minorities themselves.
I strongly suspect that an underappreciated difference between the organizers and their critics is that the organizers who invited him actually read Hanania, and are thus judging him on their experience of his work, i.e. on 99% of what he writes. Everyone else, who does not read him, is judging him either on things he has disavowed from when he was in his early twenties, or on the worst things he has said lately, usually taken somewhat out of their actual context.
“Of course, if you see the participation of Jews and people that find Nazis repugnant to be of very low value compared with the participation of people who are Nazis or enthusiastic about hearing from them, this might still not be a net bad, but I strongly suspect that it isn’t the case.”
Anyway, [insert strong insult here questioning your moral character]. My wife is Jewish. My daughter is Jewish. My daughter’s great-grandparents had siblings who died in the Holocaust. [insert strong insult questioning your moral character here].