Disclosure is a reasonable idea, but mandating it at the top is awful, because the first line of an essay generally should be a hook, or convey the most information about the essay (after the title, anyways; especially because EA Forum doesn’t have a subtitle the way eg Substack does).
I would recommend allowing the author to put the disclosure anywhere in their essay. After the intro section might be a more natural place, or at the bottom similar to acknowledgements.
I disagree—disclosure is for the benefit of the reader, not the author[1]. If the reader had to read half a post, or even an entire post, before they were told they were reading LLM-generated text, they might be wasting quite a lot of time and attention.
We’ll see how this shakes out in practice though. If it proves too costly for authors of good quality posts which are LLM-assisted, we can always reconsider.
[1] Though we don’t want disclosure to be too onerous, which is why it is currently just text rather than the callout boxes LessWrong is using.
I agree disclosure is for the benefit of the reader—I’m saying that, as a reader, I disprefer having to skip through a sentence at the top of many new posts disclosing that they used LLMs for copy editing and feedback.
I think the main thing I care about is “were large sections of this written directly by an LLM”, which I would prefer as the first sentence so I know when not to read (which is actually the policy as written here, though I only realized that as of writing this comment). But—it appears that the default warning box has started scaring people into disclosing all forms of LLM usage at the top of essays, which I argue is a bad norm.
I wonder if the disclosures could be non-text by default—e.g. colour-coded with an optional footnote for details.
The thing I’m not liking as a reader is having words to process on this stuff at the start (for me this isn’t just cases where people aren’t following the policy; I’ve felt it somewhat even in a case where the wording was one of the suggested wordings from the policy). Non-text ways to signal could potentially get the best of both worlds in terms of reader attention.
Hmm yes—would it also work if it was a coloured callout you could get used to and ignore? I explicitly want newer users to know what the disclosures mean—i.e. a colour code without any text would be too esoteric.
Yeah I think that would be an improvement over the current behaviour. I’d still probably prefer something very short (“LLM usage: zero/minimal/moderate/major”) which can be expanded if people want more texture.
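To make that concrete, here is a rough sketch of the kind of data a collapsed disclosure could carry; this is purely illustrative, and the type and function names below are made up rather than anything in the actual Forum codebase.

```typescript
// Illustrative sketch only: hypothetical names, not the Forum's real schema.
type LlmUsageLevel = "zero" | "minimal" | "moderate" | "major";

interface LlmDisclosure {
  level: LlmUsageLevel; // shown collapsed, e.g. "LLM usage: minimal"
  details?: string;     // optional free text, shown only when expanded
}

// Render the one-line collapsed label, or append the details when expanded.
function renderDisclosure(d: LlmDisclosure, expanded: boolean): string {
  const label = `LLM usage: ${d.level}`;
  return expanded && d.details ? `${label} (${d.details})` : label;
}

// Collapsed by default; readers who want more texture can expand it.
console.log(renderDisclosure({ level: "minimal", details: "copy editing only" }, false));
// -> "LLM usage: minimal"
```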
From what I can see, the main issue here is who writes the words, rather than how much LLMs are used in the process.
If most of the brainstorming, research and structuring was done by the LLM but you wrote the words yourself, from my perspective that wouldn’t require any caveat at all. But if LLMs wrote half of the words then I would definitely want to know at the top of the post (and personally I probably wouldn’t read it).
That’s why it’s so important that we get clear labelling. On this forum we should be able to choose whether or not to read something not written by a human. I would hope that only a minority of posts will have heavy LLM writing, so most posts won’t need any disclosure at all.
I completely agree with @Austin that people shouldn’t need to write anything if they only use LLMs for feedback and copy editing—like he said, they don’t have to under this policy. I have seen people disclosing that anyway, but hopefully it will settle down when they realise it isn’t necessary.
I don’t understand why you put such significance on the drafting of the material. Someone could make more problematic use of AI by simply deferring to erroneous AI research findings and writing the post in their own words. Someone could brainstorm with an AI, follow its erroneous reasoning, and still express it in human words. Conversely, an AI could draft the words while the research and reasoning are checked, and the wording is iterated many times between human and AI to arrive at a very strong and clear way of expressing the ideas.
Drawing the line at drafting both fails to capture many bad uses of AI and captures many good or even great uses of it, in my view.
“scaring people into disclosing all forms of LLM usage at the top of essays, which I argue is a bad norm”
Yep, that’s different. I’ve only seen one example of this so far, but if it continues it’s probably just a design issue we can tweak (i.e. maybe the copy isn’t clear enough on the post-page).