What are the norms on the EA Forum about ChatGPT-generated content?
If I see a forum post that looks like it was generated by an LLM, is it rude to write a comment asking "Was this post written by generative AI?" I'm not sure what the community's expectations are, and I want to be cognizant of not assuming my own norms/preferences are the appropriate ones.
It seems to me that the proof is in the pudding. The content can be evaluated on what it brings to the discourse, and the tools used in producing it are only relevant insofar as they result in undesirable content. Rather than questioning whether the post was written by generative AI, I would give feedback on the specific aspects of the content you are criticizing.
While I am not aware of any norms or consensus, I would be okay with that. My own view is that use of generative AI should be proactively disclosed where the AI could fairly be considered the primary author of the post/comment. I am unsure how much support this view has, though.
IMO, if the content is good, we shouldn't bring it up. If an author is producing bad content more than once a month and it seems LLM-generated, they should be warned, then banned if it continues.
I suspect any comment threads about whether content is LLM-generated aren't worth reading, and thus aren't worth writing.