@Toby Tremlett🔹 This content seems to be AI-written, and also (relatedly?) to be misunderstanding the post. Are there any plans to implement a policy on LLMs, like the one on LessWrong?
(Edit: referring to Eli Nathan’s comment)
Thanks May. Regarding this specific comment: I think it adds value, and mod action isn't needed. But the mods have been discussing an LLM policy, and this is a valuable bump! We've been getting more substantially AI-written comments recently, and we've rate-limited and spoken with the individuals involved. But in my view, we definitely need a more scalable solution and clearer norms on this. Stay tuned.
For what it's worth, I don't think it's AI-written. But even if it is, it's fine with me. It makes information-dense points that one might agree or disagree with.
I agree it seems to be misunderstanding the post.