I strongly endorse this and think that there are some common norms that stand in the way of actually-productive AI assistance.
- People don’t like AI writing aesthetically.
- AI reduces the signal value of text purportedly written by a human (i.e. because it might have been trivial to create and the “author” needn’t even endorse each claim in the writing).
Both of these are reasonable, but we could really use some sort of social technology for saying: “yes, this was AI-assisted, you can tell, I’m not trying to trick anyone, but I also stand by all the claims made in the text as though I had done the token generation myself.”