Innocent until proven guilty is a fine principle for the legal system, but I do not think it is obviously reasonable to apply it to evaluating content made by strangers on the internet. It is not robust to people quickly and cheaply generating new identities and new content of questionable accuracy. Further, the whole point of the principle is that it's really bad to unjustly convict people, along with other factors like wanting to be robust to governments persecuting civilians. Incorrectly dismissing a decent post is really not that bad.
Feel free to call discriminating against AI content prejudice if you want, but I think this is a rational and reasonable form of prejudice, and I disagree with the moral analogy you're trying to draw by using that word and example.
I’ve made a pretty clear distinction here that you seem to be eliding:
1. Identifying AI content and deciding on that basis that it's not worth your time
2. Identifying AI content and judging that content differently simply because it is AI generated (where that judgment has consequences)
The first is a reasonable way to protect your time based on a reliable proxy for quality. The second is unfair and poisons the epistemic commons.