I empathise but strongly disagree. AI has lowered the costs of making superficially plausible but bad content. The internet is full of things that are not worth reading and people need to prioritise.
Human-written text has various cues, which readers are practiced at spotting, that indicate bad writing, and these can often be detected quickly, e.g. local incoherence, bad spelling, poor flow. These are obviously not perfect heuristics, but they convey real signal. AI has made it much easier to avoid all these basic tells, without making it much easier to produce good content and ideas. Therefore, if AI wrote a text, it is more costly to identify poor quality than if a human wrote it: AI text often looks good at first glance but is BS when you look into it deeply.
People are rationally responding to the information environment they find themselves in. If the cheap tests are less effective conditional on the text being AI-written, then you should be more willing to judge it harshly, or ditch it entirely, once you conclude it was AI-written. Holding such text to a higher standard of rigour is just rational.
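To make the decision logic above concrete, here is a toy numerical sketch (all numbers and the function name are hypothetical, chosen only to illustrate the direction of the effect): if a cheap skim catches a smaller share of bad AI-written posts than of bad human-written posts, then a post that survives the skim is more likely to be bad when it is AI-written, so a full careful read has a higher expected cost.

```python
# Toy model of the "cheap test" argument above. All numbers are made up for
# illustration; the only point is the direction of the effect.

def p_bad_after_skim(p_bad: float, p_skim_catches_bad: float) -> float:
    """Probability a post is bad, given it survived a cheap skim.

    p_bad              -- prior probability the post is low quality
    p_skim_catches_bad -- chance the skim flags a bad post (good posts assumed to pass)
    """
    p_survives = p_bad * (1 - p_skim_catches_bad) + (1 - p_bad)
    return p_bad * (1 - p_skim_catches_bad) / p_survives

# Same prior quality, but the skim catches far fewer bad AI-written posts
# because the surface-level tells (spelling, flow, local coherence) are gone.
human = p_bad_after_skim(p_bad=0.5, p_skim_catches_bad=0.7)
ai = p_bad_after_skim(p_bad=0.5, p_skim_catches_bad=0.2)

print(f"P(bad | survived skim), human-written: {human:.2f}")  # ~0.23
print(f"P(bad | survived skim), AI-written:    {ai:.2f}")     # ~0.44
```

Under these assumed numbers, a reader who has concluded a surviving post is AI-written rationally demands more rigour from it before engaging further, which is the behaviour being defended here.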
Identifying signs of AI and using this as a reason not to spend further time assessing is rational for the reasons you and titotal state. But such identification should not affect one's evaluation of the content (allocating karma, upvoting, or, more extremely, taking moderation actions) except insofar as it otherwise actually lowers the quality of the content.
If AI as the source affects your evaluation process (in assessing the content itself, not in deciding whether to spend time on it), this is essentially pure prejudice. It's similar to the difference between cops incorporating crime statistics when choosing whether to investigate a young black male for homicide and a judge deciding to lower the standard of proof on that basis. Prejudice in the ultimate evaluation process is simply unjust and erodes the epistemic commons.
Innocent until proven guilty is a fine principle for the legal system, but I do not think it is obviously reasonable to apply it to evaluating content made by strangers on the internet. It is not robust to people quickly and cheaply generating new identities and new content of questionable truth. Further, the whole point of the principle is that it's really bad to unjustly convict people, along with other factors like wanting to be robust to governments persecuting civilians. Incorrectly dismissing a decent post is really not that bad.
Feel free to call discriminating against AI content prejudice if you want, but I think this is a rational and reasonable form of prejudice, and I disagree with the moral analogy you're trying to draw by using that word and example.
See my response to titotal.
I’ve made a pretty clear distinction here that you seem to be eliding:
1. Identifying AI content and deciding on that basis that it's not worth your time.
2. Identifying AI content and judging that content differently simply because it is AI-generated (where that judgment has consequences).

The first is a reasonable way to protect your time, based on a reliable proxy for quality. The second is unfair and poisons the epistemic commons.