I’d be much more interested in reading your prompts to ChatGPT than the output it produced. I suspect this would make it much easier for me (and others) to understand your position.
It changed generating a comment from something that would probably have taken 1.5 hours of work to something that took about 15 minutes and still said what I wanted to say.
Although I can’t directly compare the ChatGPT version to a hypothetical directly-written version of the comment, my hunch is that the former is about twice as long as the latter would have been. It’s pretty common for AI to need many more words than a reasonably skilled human author to express the same idea. So in a sense, I think generative AI use often shifts the time burden of the author-reader joint enterprise from the author to the readers. This may or may not be a good tradeoff on the whole, but it is worth considering both sides.
My general take is that content authored with that level of AI assistance should be flagged as such, so the reader can make their own decision about whether to engage with it.
I do think the comment would have been much better received if it had been more concise and simpler to read (regardless of how it was written); see The value of content density.
Overall, do you stand by your comment? If I wrote a point-by-point response, would some points get a “that’s just something the LLM put in because it seemed plausible and isn’t actually my view”?
I’m confused: this seems to me to be a restatement of your main point, not a response to my question.
I was specifically asking (and am still wondering) whether you stand by every individual point in your original post, such that it would be worth it for me to write a point-by-point response.
(Sometimes people give high-level instructions to an LLM and get output where they’re willing to stand by the general message, but some of the specific claims aren’t actually what they believe. The same thing can also happen when hiring people: if I were trying to deeply engage with a company on one of their policies, it wouldn’t be productive to write a point-by-point response to an answer I’d received from a first-line support representative.)