Reasoning transparency demands AI-use disclosure
Post drafted and edited by the author. Claude and Grammarly were used for a light copyedit.
As AI tools become increasingly useful for communicating research, opinions, or simply sharing ideas, it is becoming important to proactively disclose their use when we communicate with others. Being transparent demands AI-use disclosure. Organizations that communicate externally should embed strong AI-disclosure norms, and so should the forum. If you use AI tools for interpersonal communication, disclose their use in conversation.
The ability of LLMs to draft, re-draft, code, and analyse has been wonderful to see. As a researcher at AIM (views are my own), I am certainly using AI tools in my work and exploring the domains in which they may make me more (and less) effective.
I personally enjoy writing, so I doubt I will ever use LLMs for extensive drafting. However, producing “content” (read: anything from a tweet to a book) is becoming increasingly cheap. Lower barriers are leading to a steady increase in production. AI use speeds up research. Alarmingly, it also makes it easy to produce research that looks legit but is, to all intents and purposes, slop.
Interpersonal communications can also become stilted, creating an underlying unease that your conversations with humans are being intermediated by LLMs. Someone with a very fun writing style now writes like a Roomba; the cold emails you receive are long and well-written but rife with inaccuracies.
I am not arguing for a Luddite retrenchment. AI tools are certainly helpful, and we should keep exploring how they can help us become better and more transparent communicators.
Making it a norm among transparent communicators to disclose the extent and nature of AI use is a matter of both principle and consequence. From a principled perspective, I think we owe it to colleagues and strangers alike to tell them when we are speaking with our own voice and when we are not.
Beyond this, LLMs constantly make mistakes and hallucinate. Further, as highly complex prediction machines, they are really good at making something look legit when it isn’t.
Disclosing the extent of AI use in a research or communication output — upfront and prominently — and making it an expectation that others do the same can help readers calibrate their scepticism. In technical work, it would encourage careful reading of the details or support replication efforts.
Despite usually steering clear of the forum, I chose to write this piece because I think some of the transparency practices EAs have are great models for my work and represent a commendable characteristic of the community. In the same way that it has become the norm to disclose the time spent on research and the depth of research or thinking, we should integrate AI disclosures into our communications.
Further reading and guidance:
I would love to have a discreet, mandatory way to disclose the level of AI use on the forum. I am not sure how it could look in practice, but I am in favour of normalizing AI use in writing while at the same time being honest about how much AI went into the text.
Is there a reason why the first sentence of this post would not suffice (even if perhaps moved to the end of the document)?
I think the first sentence in your post is great.
But right now this depends on the individual will of the OP, and what I’m suggesting is something structural.
Imagine, for instance, that when you write something you can select the level of AI involvement from a dropdown, and it appears somewhere.
I agree with that; it could even be a built-in checkbox on posting?
I think I disagree. Admittedly, I tend to use AI more for talking through ideas and editing than for actual drafting, so maybe something different happens when AI is used for drafting, but to me this feels like a demand that writers tell you whether they used MS Word, Google Docs, OpenOffice, or a typewriter. Why? The use of a tool, whether an AI or a word processor, doesn’t make the work any less the work of the human author, nor does it diminish the trust we should place in the claims made. The human author is still responsible for the accuracy of the claims.
I think this is fair: it would be ridiculous to expect disclosure of the use of a dictionary. I’d be in favour of leaving this to social norms and personal behaviour, because I think it’s not the clearest thing in the world.
I still personally think there’s something qualitatively different. Imagine you have to go through 80 pages of calculations and the author tells you they used a calculator which routinely makes errors at random. In theory, you expect the author to stand behind their work and to have checked it; in practice, I worry lots of people don’t. As a consumer, I’d rather know what tool was used.
Another analogy would be how code and package use are standard disclosures in papers.