I expect an increase in malicious actors as AI develops, both because of greater acute conflict with people who have a vested interest in weakening EA, and because AI assistance will lower the barrier to producing plausible malicious content. I think it would take time and effort to develop consensus on community rules related to this kind of content, so I would rather not wait until the problem is acutely upon us.
Seems like that solution has worked well for years. Why is it not scaling now? It’s not like the forum is loads bigger than a year ago.