How should norms of academic writing and publishing be changed once AI systems become superhuman in more respects?


Increasingly capable AI systems create dramatic challenges for academic writing norms. How should we react? Jonathan Symons from Macquarie University and I, in a new paper in Digital Society (open access), argue that minor fixes to current norms will be insufficient: https://rdcu.be/dp9h8.

A very short summary:

We start by proposing four intuitively plausible desiderata for norms of academic writing:

– adequate attribution

– enable novelty/disincentivise redundancy

– prevent harm

– “good as norms”.

We consider candidate “quick fixes” to current norms in the age of advanced AI, such as banning the use of AI or extending the definition of plagiarism to cover AI output, and argue that each of them fails to satisfy the desiderata.

To indicate the scale of change needed, we tentatively sketch a more promising novel system of norms. Our central idea is that, at some point, advanced AI systems should “sign off” on statements outlining the human and AI contributions to a given piece of research. We also highlight the kind of technological and regulatory infrastructure that would be needed to enable a proposal along these lines.

The most serious worry about our proposal is that it incrementally shifts power from humans to AI systems and/or the corporations developing them. However, we argue that this risk stems from the underlying advance of AI capabilities themselves: it arises once AI systems are developed so far that research containing decisive AI contributions is often superior to research without them. The primary threat to humans’ role in academia thus comes from the development of AI with superhuman capabilities, not from norms requiring verified statements of AI contributions.
