Requiring disclosures to be at the top of the post (rather than e.g. allowing them to be at the bottom) does feel like it’s sending some implicit “this is kind of bad so people need to be warned about it” message, even if it’s in a “recommended uses” section.
Like I think people might reasonably worry about others pre-judging posts with this disclaimer, and hence (perhaps, sometimes) prefer workflows where they don’t need to include the disclaimer, even if this makes their posts worse.
I don’t think there’s an easy answer here—like, presumably the point of the policy is to allow this kind of pre-judging and let people make differently-informed choices about what they engage with. But I think the post kind of papers over this tension.
I can clarify that, in writing this policy, this was definitely part of my reasoning (i.e. to make it slightly costlier to use AI for final drafting).
I do think that “even if this makes their posts worse” is going to be fairly rare.
Though, as AI gets better at writing, we might all come to look on disclaimers differently. At some point readers may even prefer to know an AI has already checked over a post before they bother to read it.
I think there are very few justifications for consumers not knowing what they are buying. We should know as much as possible. When we eat food, all the ingredients should be there on the packet. If some people think AI-written posts are likely to be better, then they might even be more likely to read them? We should have the right to read or not read heavily AI-written posts.
Labelling from my perspective is not about it being “good” or “bad” per se, but about helping people make informed decisions.
Ok so I can kind of tune into what you’re saying here, but I also feel kind of uneasy about it. I guess I’d be curious what you make of the following potential arguments:
1. Ingredients are important because we can’t directly discern what’s in food. But with writing we can see exactly what’s there and judge that directly, without needing to judge the process. (This perspective would endorse reviews being posted warning people not to read low-quality stuff.)
2. Requiring disclosure is an inappropriate form of thought policing—people should have the right to use whatever cognitive processes and augmentation methods they like, and take responsibility for the words they then share. If this produces LLM garbage it’s not on them to label that up front, but this should have the natural consequence that people stop listening to them.