I have a lot of sympathy for frustration at knee-jerk bias against AI usage. I was recently banned from r/philosophy on first offense because I linked a post that contained an AI-generated image and a (clearly-labelled) AI summary of someone else’s argument[1]. (I saw that the subreddit had rules against AI usage, but I foolishly assumed they only applied to posts in the subreddit itself.) I think their choice to ban me was wrong, and it deprived them of valuable philosophical arguments that I was able to make[2] in other subreddits like r/PhilosophyOfScience. So I totally get where you’re coming from with the frustration.
And I agree that AI, like typewriters, computers, calculators, and other tools, can be epistemically beneficial by allowing people who otherwise wouldn’t have the time to develop arguments to do so.
Nonetheless I think you’re wrong in some important ways.
Firstly, I think you’re wrong to believe that perceived AI use ought only to make us skeptical about whether to engage with some writing, and that it is “pure prejudice” to apply a higher bar to the writing after reading it, conditional on whether it’s AI. This is an extremely non-obvious claim, and I currently think it’s mistaken.
To illustrate this point, consider two other reasons I might apply greater scrutiny to some content I see:
An entire essay is written in Comic Sans.
I learn that a paper was written by Francesca Gino.
If an essay is written in Comic Sans (a font often adopted by unserious people), we might initially suspect that the essay isn’t very serious, but after reading it, we should withdraw any adverse inferences we made about the essay simply due to its font. This is because we believe (by stipulation) that an essay’s font can tell us whether the essay is worth reading, but cannot provide additional information once we have read it. In Pearlian terms, reading the essay “screens off” any information we gain from its font.
I think this is not true for learning that a paper was written by Francesca Gino. Since Francesca Gino is a known data fraudster, even after reading a paper of hers carefully, or at least with the same level of care I usually apply to reading psychology papers, I should remain more skeptical of her findings than I would be after reading the same paper written by a different academic. I think this is purely rational, rather than an ad hominem argument, or “pure prejudice” as you so eloquently put it.
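To put the distinction in conditional-probability terms (a rough formalization of my own, where Q is the quality of the conclusions, C is the content as read, and S is the extra signal):

\[ P(Q \mid C,\, S = \text{Comic Sans}) = P(Q \mid C) \quad \text{(content screens off the font)} \]
\[ P(Q \mid C,\, S = \text{Gino authorship}) \neq P(Q \mid C) \quad \text{(authorship stays informative after reading)} \]

The second inequality holds because reading a paper doesn’t let you verify the underlying data, so authorship remains informative about whether the data are trustworthy even conditional on everything on the page.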
Now, is learning that an essay is written (or cowritten) by AI a signal more akin to learning that an essay is written in Comic Sans, or closer to learning that a paper is written by Francesca Gino? Reasonable people can disagree here, but at the very least the answer is extremely non-obvious, and you haven’t actually substantiated why you believe it’s the former, when there are indeed good reasons to believe it’s the latter.
In brief:
AI hallucination—while AIs may intentionally lie less often than Harvard business professors, they still hallucinate at a higher rate than I’m comfortable seeing on the EA Forum.
AI persuasiveness—for the same facts and level of evidence, AIs might be more persuasive than most human writers. To the extent that this additional persuasiveness is not correlated with truth, we should update downward accordingly upon seeing arguments presented by AIs.
Authority and cognition—if I see an intelligent and well-meaning person present an argument with some probably-fillable holes that they allude to but do not directly address in the writing, I might be inclined to give them the benefit of the doubt and assume they’ve considered the issue and decided it wasn’t worth going into in a short speech or essay. However, this inference is much more likely to go wrong if the essay was written with AI assistance. I alluded to this point in my comment on your other top-level post, but I’ll mention it again here.
I think it’s very plausible, for example, that if you took the time to write out/type out your comment here yourself, you’d have been able to recognize my critique for yourself, and it wouldn’t have been necessary for me to dive into it.
I still defend this practice. I think the alternative of summarizing other people’s arguments in your own words has various tradeoffs, but a big one is that you are injecting your own biases into the summary before you even start critiquing it.
Richard Chappell was also banned temporarily, and has a more eloquent defense. Unlike me, he’s an academic philosopher (TM).
In the case of the author with the history of fraud, you are applying prejudice, albeit perhaps appropriately so.
You raise stronger points than I’ve yet heard on this subject, though I still think that if you read a piece of content and find it compelling on its merits, there is a strong case for applying at least similar scrutiny regardless of whether there are signs of AI use. Although I still think there is too much knee-jerk sentiment on the matter, you’ve given me something to think about.