I agree in terms of random discussions of race, but this one was related to a theory of impact, so it does seem relevant for this forum.
I don’t think we need to fear this discussion; the arguments can be judged on their own merits. If they are wrong, we will find them to be wrong.
If anything, I think on difficult topics those of us with the energy should take time to argue carefully so that those who find the topic more difficult don’t have to.
But I’m not in favour of banning discussion of theories of impact, however we look upon them.
But you can couch almost anything in terms of a theory of impact, at least tenuously, including stuff a lot worse than this. The standard can’t be “anything goes, as long as the author makes some attempt to tie it to some theory of impact.”
No online discussion space can be all things to all people (cf. titotal’s first and second points).
Sure, and I think that we should discuss anything with such a theory of impact. Or scan it and downvote it.
Here the system worked as it should, I think.
Among other things, I don’t think that solution scales well.
As the voting history for this post shows, people with these kinds of views may have some voting power at their disposal (whether from allies or brigaders). So we’d need a significant amount of voting power to quickly downvote this kind of content out of sight. As someone whose strong downvote carries a lot of weight, I try to keep the standards for deploying it pretty high; to use a legal metaphor, I tend to give a poster a lot of “due process” before strong downvoting, because a −9 can contribute to squelching someone’s voice.
If we rely on voters to downvote content like this, we are either asking them to devote their time to carefully reading distasteful stuff they have no interest in, or asking them to reflexively downvote stuff that looks off-base on a quick scan. As to the first, few, if any, of us get paid for this. The second is, I think, actually worse than an appropriate content ban: it risks burying content that should have been allowed to stay on the frontpage for a while.
If we don’t deploy strongvotes on fairly short notice, the content will be on the front page for a while, and the problems that @titotal brought up strongly apply.
Finally, I am very skeptical that there would be any actionable, plausibly cost-effective steps for EAs to take even if we accepted much of the argument here (or on other eugenics and race topics). That further reassures me that there is no great loss in expecting those who wish to have these discussions to do so in their own space. The Forum software is open-source; they can run their own server.
Seems like that solution has worked well for years. Why is it not scaling now? It’s not like the forum is loads bigger than a year ago.
I expect an increase in malicious actors as AI develops, both because of greater acute conflict with people who have a vested interest in weakening EA, and because AI assistance will lower the barrier to producing plausible malicious content. I think it would take time and effort to develop consensus on community rules for this kind of content, so I would rather not wait until the problem is acutely upon us.