I’m drafting new rules and norms around AI usage at the moment. It’s especially difficult because of exactly this critique: AI can genuinely help people express ideas they otherwise wouldn’t have the time to write up.
However, clearly AI-generated text tends to make me (and other readers) stop reading, because the majority of AI-generated text on the internet is low quality: overlong, containing too few ideas, and so on.
You can get around this by removing clear signs of AI writing (for example, condense this page and put it in your system prompt), or by rewriting the AI’s output in your own words (when I write the EA Newsletter, I often write a bad draft, have AI rewrite it, and then rewrite it again myself, keeping the good elements from the AI version).
The bottom line for me is that if a post is of good quality and contains valuable ideas, it doesn’t matter who (or what) wrote it. But many AI-written posts (especially ones written without custom stylistic prompts) would currently be better off as a series of bullet points written by the prompter, not the AI.
I think the answer is not to create specific rules around AI use to accommodate the understandable prejudice that you and other readers hold toward such content, but rather to evaluate the content on its own merits. It is understandable for people to use proxies that have an imperfect causal relationship to quality when deciding where to spend their time, but codifying this prejudice seems quite pernicious.
I am curious whether your assessment (that I should have made this post a series of bullet points) would apply to my recent post or this quick take (this response to you is unaided by any AI). I don’t know that a custom prompt is needed, particularly when there is significant back-and-forth between the AI (or multiple LLMs, as I used in this post) and the human.
https://forum.effectivealtruism.org/posts/u9WzAcyZkBhgWAew5/your-sacrifice-portfolio-is-probably-terrible
I think this particular quick take would have benefitted from being shorter; for example, just the first two paragraphs get across your main point, plus maybe another sentence for the corollary point about chilling effects on other AI users. I don’t mean that all posts should be bullet points, just that I often see AI-written content that was clearly generated from a few bullet points’ worth of information and would have been better off remaining as such (I’m not sure your post was in that category; it was well received as it was).
I’d always recommend giving your AI a custom prompt: it does a lot to make the tone more sensible, and it works especially well for forcing concision. A rough sketch of what this can look like is below.
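For concreteness, here is a minimal sketch of a custom prompt applied via an API call, assuming the OpenAI Python client; the model name, prompt wording, and `draft` placeholder are illustrative rather than a recommendation, and a custom-instructions field in a chat interface achieves the same thing:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative style instructions aimed at concision and a sensible tone.
STYLE_PROMPT = (
    "Write plainly and concisely. Avoid filler openers, bullet-point padding, "
    "and summary sentences that restate the point. Cut anything that does not "
    "add an idea."
)

draft = "..."  # the human-written draft to be tightened

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": STYLE_PROMPT},
        {"role": "user", "content": f"Rewrite this draft:\n\n{draft}"},
    ],
)
print(response.choices[0].message.content)
```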
BTW, the current rough plan is not to ban AI content, and indeed to evaluate it on its merits. I’m mostly wondering what to do about the middle-ground content that is valuable but a bit too taxing for the reader purely because it is written by AI.
I appreciate that you’d make a different judgment call regarding conciseness. When I was reviewing it, I thought there were a number of distinct points that warranted discussion: the initial observation about celebrated comments criticizing AI; the process and the counterfactual; the isolated demand for rigor; the effect of criticism in chilling contributions; an illustration of that chilling; and the point that we should evaluate based on quality, not provenance or process.
I am glad that the plan is not to categorically ban AI content, but creating extra scrutiny as a matter of moderation policy (de jure, rather than merely de facto, disparate treatment) does not make much sense to me.
On second thought, AI significantly reduces costs for writers, and in the purely human context the writer’s costs are something of a safeguard against the overproduction of bad content (i.e., a writer who wastes the readers’ time is also wasting their own). I would still think a light touch prudent, given how effective AI can be at helping good ideas and insights proliferate.