Seems like a good compromise. The examples at the end are also helpful.
About this, however:
> The laissez-faire option is flawed because LLM-generated writing is increasingly difficult to detect. There are posts (I’ve seen a lot of these) which have the form of a good quality post which is worth reading, but on closer analysis turn out not to contain any ideas, or just to contain a couple of bullet points’ worth of ideas, surrounded by a lot of fluff and repetition. This leads to quite a large waste of time for the reader.
While this is true, and indeed happens a lot everywhere nowadays, let’s not forget the possibility of actual malice: manipulation by posts that look good or convincing but are actually written to persuade you to serve someone’s interests. That can come from anyone, from individuals to companies to industry lobbies to state governments.
Allowing LLM-generated content not only leaves the door open to heaps of slop, but also enables all of this. So some sort of defence is definitely warranted.