Why you shouldn’t cite, acknowledge, or credit an AI with authorship

Robots can’t write. This was the conclusion I reached at the end of last summer, after taking a close look at the emerging “large language model” that we now all know as “ChatGPT” but that I was playing around with as “GPT-3” back then. I still believe that while artificial intelligence can produce impressively articulate texts, it is not doing it by writing. In this post, I will not try to defend this idea; instead, I want to identify three important consequences of it, which I will take up in later posts one at a time. For now, I just need to get these three ideas off my chest.

First, language models can’t be authors. They have no authority to state facts or opinions; indeed, they know no facts and hold no opinions. No matter how much of a text you have let an AI generate for you, you cannot attribute authorship to the machine. It can’t take responsibility for your mistakes and, therefore, can’t take credit for your work. AIs make no decisions about what they are saying, nor do they have any sense of their rhetorical situation. They don’t imagine a reader or observe any ethical relationship to one. “Machine learning” notwithstanding, they don’t actually learn anything from the criticism their output receives; they just adjust their parameters. They can’t execute what Foucault called “the author function”.

Second, since they can’t be authors, language models can’t be cited either. There is, properly speaking, no text to cite; there is only a record (if you keep it) of your interaction with a machine. Recently, a university embarrassed itself by citing ChatGPT’s contribution to an email to its students as a “personal communication”. At the time, they probably just thought they were being transparent, but it is important to keep in mind that AIs aren’t persons and don’t communicate. Such a citation is nonsense. If you use the words that an AI suggests to you to describe, analyze, or summarize something, you must stand by those words as you would your own. Saying you got them from ChatGPT is like saying the idea came to you in the shower. Go ahead and tell your reader such things if you want. Just know that it’s not a citation.

Finally, since they can’t be authors, and therefore can’t be cited, language models can’t be acknowledged. In our acknowledgements, we mention people who have made meaningful contributions to our work for which they are not explicitly cited, but which did not rise to the level of making them co-authors of the text we have written. The point here is that we can only acknowledge contributions from entities that could, in principle, have been cited in or co-authored our work. Yes, you can acknowledge friends and family members who have no scholarly authority or expertise. But the truth is that they could write a book and would then be able to claim authorship. That’s just how it works. You can also acknowledge institutions (like your department or funding agency); institutions are named as the authors of documents all the time, too. In my view, acknowledging the contribution of artificial intelligence is therefore the first step down a road we don’t want to go down, toward “robot rights”. We would be committing a fateful category mistake.

I have heard people say that we should think of AI not in terms of plagiarism (I agree) but in terms of “co-creation”. (There’s already a Wikipedia article about “collaboration” between humans and AI.) I think this, too, misunderstands the contribution that language models make. We do not cite or otherwise acknowledge the contributions of Word or Google to our writing. We should treat ChatGPT the same way. It is simply a machine we use to make our writing better.
