Hang on, the category/example you cite is listed in the ‘Recommended use of LLMs’ section. So, I’m not sure what you’re disagreeing with?
Indeed, almost half the post is about distinguishing good from bad uses of LLMs, thus I’m struggling to make sense of your last paragraph. Are you referring to discussion (which demonizes all AI use for writing) that has happened elsewhere?
Requiring disclosures to be at the top of the post (rather than e.g. allowing them to be at the bottom) does feel like it’s sending some implicit “this is kind of bad so people need to be warned about it” message, even if it’s in a “recommended uses” section.
Like I think people might reasonably worry about others pre-judging posts with this disclaimer, and hence (perhaps, sometimes) prefer workflows where they don’t need to include the disclaimer, even if this makes their posts worse.
I don’t think there’s an easy answer here—like, presumably the point of the policy is to allow this kind of pre-judging and let people make differently-informed choices about what they engage with. But I think the post kind of papers over this tension.
I can clarify that in writing this policy that was definitely part of my reasoning (i.e. to make it slightly costlier to use AI for final drafting).
I do think that “even if this makes their posts worse” is going to be fairly rare.
Though, as AI gets better at writing, we might all come to look on disclaimers differently. At some point readers may even prefer to know an AI has already checked over a post before they bother to read it.
I think there are very few justifications for consumers not knowing what they are buying. We should know as much as possible. When we eat food, all the ingredients should be there on the packet. If some people think AI-written posts are likely to be better, then people might even be more likely to read them? We should have the right to read or not read heavily AI-written posts.
Labelling from my perspective is not about it being “good” or “bad” per se, but helping people make informed decisions.
Ok so I can kind of tune into what you’re saying here, but I also feel kind of uneasy about it. I guess I’d be curious what you make of the following potential arguments:
Ingredients are important because we can’t directly discern what’s in food. But with writing we can see exactly what’s there and judge that directly without needing to judge the process. (This perspective would endorse reviews being posted warning people not to read low-quality stuff.)
Requiring disclosure is an inappropriate form of thought policing—people should have the right to use whatever cognitive processes and augmentation methods they like, and take responsibility for the words they then share. If this produces LLM garbage it’s not on them to label that up front, but this should have the natural consequence that people stop listening to them.
I’m not disagreeing with this post (or, in any event, not in the comment to which you replied). I am noting that most of the discussion I have seen has been pretty against AI-generated writing writ large, conflating the good uses with the bad. I am noting my opinion that there is a lot of value in this usage. When I say “most of the discussion”, I am not talking about this post specifically, but the broader discussion there has been about the use of AI to generate writing.