New EA Forum LLM-use policy
This policy does not apply to anything posted before this post’s time of publication.
New policy:
You are welcome to use AI to help you write posts, but we ask that you disclose it when you do. Not disclosing that your post is AI-assisted could mean a rate-limit or a ban.[1] We won’t enforce this policy for comments and quick takes, though we’d appreciate a norm of disclosure there as well.
We are (and have been) moving more low-quality or off-topic writing off of the Frontpage. Fully AI-written text is (at the time of writing) overwhelmingly likely to fall into one of those categories.
More detail:
Disclosure: If your post likely contains substantial portions of AI-generated text[2] at the time of publishing, you must note this at the top of your post. You are not required to disclose that you used AI for research or ideation. If you read a post which seems AI-generated and you don't see a disclosure, please report it.
To help get you started, we added a button to our post editor that provides some example disclosure statements. Like so:
If you’re unsure if your case applies, feel free to ask the Forum team before publishing.
Removing content from the Frontpage:
When a new user writes their first comment or post, it is reviewed by the mods. At this point we decide whether to allow the post on the Frontpage (the majority of non-spam posts go here), to put it on personal blog (a minority go here), or to ban the user (this is mostly used for clear spam).
Over the past year, we’ve also been moving low-quality and off-topic posts to personal blog more often, so that Forum users don’t have to spend time reading them. LLM-generated posts are more likely than human-written ones to be moved to personal blog, due to their lower quality.
Reasoning:
This policy represents a decision not to take either an authoritarian or a laissez-faire approach to LLM-generated text.
The authoritarian option would be to ban LLM-generated text: find a good text detector and remove from the Frontpage all posts which contain generated text.
A laissez-faire option would be to allow LLM-generated text and hope that the forces of karma, upvotes, and downvotes would take care of the low-quality content that would likely result.
Both options are flawed. Taking the authoritarian route and banning all LLM text is flawed because some writers find LLMs very helpful for getting their ideas across, and some readers don't mind reading LLM-generated text[3].
The laissez-faire option is flawed because LLM-generated writing is increasingly difficult to detect. There are posts (I've seen a lot of these) which have the form of a good-quality post worth reading, but on closer analysis turn out not to contain any ideas, or to contain only a couple of bullet points' worth of ideas surrounded by a lot of fluff and repetition. This wastes a great deal of the reader's time.
We're opting for what I'd call the 'liberal' option. We'll discourage LLM-generated content by lowering the visibility of low-quality posts and enforcing disclosure of LLM use, but we'll ultimately leave the decision of whether to read LLM-generated content to the Forum audience[4].
LessWrong is doing something reasonably similar, but with some caveats. Under their new policy, AI-written text must be published within labelled sections in a post. Our policy is therefore a chunk less onerous. I see our policies a little like this[5]:
We're trying to have the best of both worlds, and I hope that we can. However, if it turns out that increasing amounts of content on the Forum are low-effort AI slop, or if valued authors find the Forum increasingly less valuable because of AI-generated content, we are prepared to change our policy.
Good and bad uses of LLMs
Note that this section goes slightly beyond our policy, into what we'd like to promote or discourage. Treat these as strong recommendations rather than laws.
Examples of recommended use of LLMs
A user uses an LLM to track down statistics on laws about gestation crates by country. They check the sources provided by the LLM, conclude that the statistics are accurate, and reference them in their post.
A user sends a draft of a forum post to an LLM asking it for feedback. They make edits to their post in response to its feedback.
A user who speaks English non-natively sends a draft of a forum post to an LLM asking it to correct any grammatical issues, and corrects the grammatical issues it raised.
[requires disclosure] Alternatively, they allow the LLM to redraft their post, and include a note at the top of their post explaining that they have done so.
A user creates a post discussing evaluation awareness in LLMs, in which they include several quotes from LLMs that appear to indicate evaluation awareness.
[requires disclosure] A user has an idea for a forum post, then co-writes it with an LLM, turning a verbal mind-dump into bullet points, then into an essay, then into bullets again, and so on. It turns out well.
Examples of discouraged use of LLMs
A user wants to grow their reputation on the forum, so they feed popular forum posts into an LLM and ask it to write a thoughtful reply. This would lead to a ban under our existing rules against spam.
[requires disclosure] A user sends a list of bullet points to an LLM, asks the LLM to write a post based on their outline, and posts the content to the EA forum without making any edits. The post is bad. This post would be moved to personal blog.
A user uses an LLM to track down statistics on laws about gestation crates by country, but makes no effort to verify whether the information provided by the LLM is accurate.
As always, if you have any questions about our forum norms, feel free to contact the moderation team. Also, you are very welcome to share feedback on this policy below. I’m open to changing the policy if you change my mind.
PS—Thanks to the entire moderation and facilitation team for multiple rounds of feedback and discussion about this policy, and especially to @Francis for writing the first draft.
If we suspect that your post is AI-assisted and you did not include a disclosure at the top, we may hide it from readers, for example by moving it back to your drafts. You’re welcome to re-publish if you add a disclosure, or contact us if you think we made a mistake.
Specifically, if more than 10% of your post is the output of a chatbot.
I was personally surprised by how many people had this view on Nick Laing’s poll. Though a fellow mod points out that the poll was interpreted by some as a hypothetical ‘if AI could write as well as humans…’ and others were thinking of current models.
We are considering adding an option so that users can filter out AI-assisted content if they prefer. Currently we are testing pangram for accuracy.
NB: I edited this section after a message from Habryka, who pointed out that our policies were a little closer than I'd thought.