I want to remind people that there are severe downsides to having race and eugenics discussions like the ones linked on the EA Forum.
1. It makes the place uncomfortable for minorities and people concerned about racism, which could someday trigger a death spiral where non-racists leave, making the place more racist on average, causing more non-racists to leave, etc.
2. It creates an acrimonious atmosphere in general, by starting heated discussions about deeply personal topics.
3. It spreads ideas that could potentially cause harm, and can lead uninformed people down racist rabbit holes by linking to biased racist sources.
4. It creates bad PR for EA in general, and provides easy ammunition for people who want to attack EA.
5. In my opinion, the evidence and arguments are generally bad and rely on flawed and often racist sources.
6. In my opinion, most forms of eugenics (and especially anything involving race) are extremely unlikely to be an actually effective cause area in the near future, given the backlash, unclear benefit, potential to create mass strife and inequality, etc.
Now, this has to be balanced against a desire to entertain unusual ideas and to protect freedom of speech. But these views can still be discussed, debated, and refuted elsewhere. It seems like a clearly foolish move to host them on this forum. If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake.
I agree in terms of random discussions of race, but this one was related to a theory of impact, so it does seem relevant for this forum.
I don’t think we need to fear this discussion; the arguments can be judged on their own merits. If they are wrong, we will find them to be wrong.
If anything, I think on difficult topics those of us with the energy should take time to argue carefully so that those who find the topic more difficult don’t have to.
But I’m not in favour of banning discussion of theories of impact, however we look upon them.
But you can couch almost anything in terms of a theory of impact, at least tenuously, including stuff a lot worse than this. The standard can’t be “anything goes, as long as the author makes some attempt to tie it to some theory of impact.”
No online discussion space can be all things to all people (cf. titotal’s first and second points).
Sure, and I think that we should discuss anything with such a theory of impact. Or scan it and downvote it.
Here the system worked as it should, I think.
Among other things, I don’t think that solution scales well.
As the voting history for this post shows, people with these kinds of views may have some voting power at their disposal (whether from allies or brigaders). So we’d need a significant amount of voting power to quickly downvote this kind of content out of sight. As someone with a powerful strong downvote, I try to keep the standards for deploying it pretty high. To use a legal metaphor, I tend to give a poster a lot of “due process” before strong downvoting, because a −9 can contribute to squelching someone’s voice.
If we rely on voters to downvote content like this, we are either asking them to devote their time to carefully reading distasteful material they have no interest in, or asking them to reflexively downvote anything that looks off-base on a quick scan. As to the first, few if any of us get paid for this. I think the latter is actually worse than an appropriate content ban: it risks burying content that should have been allowed to stay on the front page for a while.
If we don’t deploy strong downvotes on fairly short notice, the content is going to be on the front page for a while, and the problems that @titotal brought up will strongly apply.
Finally, I am very skeptical that there would be any plausibly cost-effective actions for EAs to take even if we accepted much of the argument here (or on other eugenics and race topics). That further reassures me that there is no great loss in expecting those who wish to have these discussions to do so in their own space. The Forum software is open source; they can run their own server.
Seems like that solution has worked well for years. Why is it not scaling now? It’s not like the forum is loads bigger than a year ago.
I expect an increase in malicious actors as AI develops, both because of greater acute conflict with people who have a vested interest in weakening EA, and because AI assistance will lower the barrier to producing plausible malicious content. I think it would take time and effort to develop consensus on community rules related to this kind of content, so I would rather not wait until the problem is acutely upon us.
This person is creating a discussion of race and eugenics and trying to make me look very bad by highlighting extremely offensive but unrelated content. Quotations from cited authors, or from people who run a journal, are quite irrelevant to my argument, which is aligned with EA values. These sorts of attacks distort your intuitions and make you feel moral disgust, but they are largely irrelevant to my core argument. The author took a quote from an argument where I was trying to emphasize how much of a rights violation restrictions on immigration are, and presented it in a misleading way; see Nathan Young’s comment. Right after that quote, I reveal that I am against closed borders and birth restrictions (with the extreme exception of something like brother-sister marriage).
It seems the efforts to throw mud at me are what is actually inflammatory. The original post is not inflammatory in tone, nor does it dive into race. It is the attackers of the post who are bringing up the upsetting content to tarnish my reputation. There is a similar attack pattern against EA, which aims to associate it with crypto fraud. Many people in EA recognize these attacks as unfair, because the core mission of EA is virtuous. If you are actually worried about optics, then aggressively broadcasting to everyone that EA is hosting “white supremacists” and posting offensive (and unrelated) quotes does not seem to be helping.
I feel this is a wildly unfair attack. And it seems like people don’t want me to defend myself, my reputation, or my article. They just want me to go away for optics reasons, but that lets censors win and incentivizes this sort of behavior of digging up quotes and smearing people.
In my opinion, the evidence and arguments are generally bad and rely on flawed and often racist sources.
The arguments are generally good. What can I do to defend against mere assertion but ask that people read the article and think for themselves?
If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake.
I am not misinformed. I worked hard on my article. Many people are not even reading what was written or engaging seriously with it, except to claim that its citations are racist.
It is sad to see EAs advocate for censorship.
I think any discussion of race that doesn’t take the equality of races as a given will be considered inflammatory. And regardless of the merits of the arguments, such discussions can make people uncomfortable and lead them to choose not to associate with EA.
If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake.
Disagree because it is at −36.
Happy to consider your points on the merits if you have an example of an objectionable post with positive upvotes.
That said: part of me feels that Effective Altruism shouldn’t be afraid of controversial discussion, whilst another part of me wants to shift it to Less Wrong. I suppose I’d have to have a concrete example in front of me to figure out how to balance these views.
Which (if any) of titotal’s six numbered points only apply and/or have force if the post’s net karma is positive, as Mr. Parr’s has been at certain points in time?