Among other things, I don’t think that solution scales well.
As the voting history for this post shows, people with these kinds of views may have some voting power at their disposal (whether from allies or from brigaders). So we’d need a significant amount of voting power to downvote this kind of content out of sight quickly. As someone with a powerful strong downvote, I try to keep the standards for deploying it pretty high: to use a legal metaphor, I tend to give a poster a lot of “due process” before strong downvoting, because a −9 can go a long way toward squelching someone’s voice.
If we rely on voters to downvote content like this, we are either asking them to devote their time to carefully reading distasteful material they have no interest in, or asking them to reflexively downvote anything that looks off-base on a quick scan. As to the first, few if any of us get paid for this. I think the latter is actually worse than an appropriate content ban: it risks burying content that should have been allowed to stay on the front page for a while.
If we don’t deploy strong votes on fairly short notice, the content is going to sit on the front page for a while, and the problems that @titotal brought up will strongly apply.
Finally, I am very skeptical that there would be any plausibly cost-effective actions for EAs to take even if we accepted much of the argument here (or on other eugenics and race topics). That further reassures me that there is no great loss in expecting those who wish to have those discussions to do so in their own space: the Forum software is open-source, so they can run their own server.
Seems like that solution has worked well for years. Why is it not scaling now? It’s not like the forum is loads bigger than a year ago.
I expect an increase in malicious actors as AI develops, both because of sharper conflict with people who have a vested interest in weakening EA, and because AI assistance will lower the barrier to producing plausible malicious content. I think it would take time and effort to develop consensus on community rules for this kind of content, so I would rather not wait until the problem is acutely upon us.