Why I Refuse to Read the Paper David Gunkel “Co-Authored” with ChatGPT

The journal Human-Machine Communication recently published a paper by David Gunkel called “Prompted by me. Generated by ChatGPT,” which he promoted on X under the slogan “Death of the Author.” I had a look and quickly made up my mind not to read it. After a little more thought, I realized that my personal decision should also be a maxim for others. “I think it’s important for academics to refuse to read this kind of paper,” I said, reposting his paper. Over the past few days, David and I have been going back and forth about it, and he finally suggested that I should make an argument for my refusal. This post is a (too long) first draft of that argument.

TL;DR: David’s “process note” abrogates responsibility for the text by pre-emptively deferring an unspecified amount of its intention to a stochastic process. The note is not a declaration but a disclaimer. This renders the text uninterpretable.

Let me begin with what I hope is an elementary logical point that I was surprised to have to make repeatedly in the threads on X. If there are good reasons not to read a text, they are not to be found in that text itself. If they were, one would have to read the text in order to defend one's refusal to read it, a catch-22. Indeed, on X, David has implied that I am rejecting an argument that I am refusing to read, but this is not a fair criticism of my stance. I am refusing even to hear an argument on the basis of how it has been presented.

What, then, are my reasons and on what basis have I formed them? I did, of course, have to read some of the paper before refusing to go on. I read the title, the byline, the abstract, and the process note. I have also consulted the journal’s policies, especially those related to AI. And I have had a look at the editors and editorial board of the journal; David serves on the latter.

In the “Process Note on Composition and Attribution” — which, to be precise, has been placed in a box and is signed, enigmatically, “The Authors,” although the paper formally has only one author, namely, David J. Gunkel — the paper presents itself as a “human-machine collaboration.” Part of the note is procedural and part of it is philosophical. The procedure is stated in the middle of the note:

The text that is presented here emerged through an iterative process involving prompts, generated responses, revisions, and theoretical provocations that circulated between David J. Gunkel and ChatGPT 4o. At times, the human component of the team guided or corrected; at other times, the model generated formulations that challenged and even recontextualized the human’s own assumptions and direction.

I want to stress that I have no objections to this procedure. It is no doubt already a common way to compose text “in the age of AI,” and it is altogether likely that most scholars and students in the future will compose their texts in this way to a greater or lesser extent. To anticipate my philosophical objections (or my objections to the philosophical part of the note), I might take issue with the presentation of David as “the human component of the team,” which does indeed sound like it was generated by the machine, not the man. But the iterative process described here is a perfectly legitimate use of technology in the production of texts and, indeed, perhaps better, in the development of one’s ideas. Prompting a language model to probe the prose of the world strikes me as a reasonable thing to do in some cases, and even using it to “raise the temperature” of (i.e., introduce some randomness into) one’s thinking is legitimate if you like that sort of thing. In any case, it is not what David did with the machine, but the significance he accords it for the authorship of his text, that motivates my refusal to read it.

Before we get to that, however, we have to answer the question of why, if the process note can be attributed to “the Authors” (in the plural), the published paper has only one author, namely, like I say, David — indeed, David J. Gunkel, whom I suspect the David I know and love to goad on X might not fully identify with. Here the answer is simple and is to be found in the publication ethics statement of the journal, Human-Machine Communication:

No tool, program, or other forms of AI (such as a large language model, i.e., LLM) will receive a byline (recognition as an author) on an article published in Human-Machine Communication. Currently, machines do not have the ability to accept responsibility or discipline for work created and therefore are prohibited from authorship. Authors using any machine-generated content, similar to content created by an LLM, are required to document this in both the acknowledgments and methods section of a submission. Furthermore, HMC may place an LLM-badge on the article as an acknowledgment to readers about the practice.

Now, I do in fact have an issue (to be taken up in another post) with the requirement to “document” machine-generated content and the practice of labelling it with badges. (I note, however, that HMC has for some reason not placed an LLM-badge on David’s article.) But I completely agree that AIs should never be given a byline. In a moment, we’ll get to my core objection, which is that the process note explicitly recognizes ChatGPT as an author and therefore prima facie seems to violate HMC’s own policy. Before we get to that, however, I want to note that “HMC supports the ICMJE definitions of authorship,” which state that “Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author.” I will agree with anyone who suggests that David’s paper follows the letter of these rules, but I will insist that it violates their spirit. Indeed, the purpose of the paper, as stated in the abstract, is to “disrupt” those rules or, as David also sometimes likes to say, to “think otherwise” about them. That is what I refuse to do, and that is why I refuse to read the paper.

Taking the procedural part as read, then, the process note makes two assertions of a more philosophical nature that I reject. The first is a general reflection on collaborative writing:

As with any collaborative work, it can be difficult — if not impossible — to draw clear lines demarcating where the contribution of one partner ends and the other begins. This is especially the case when one of the partners is a large language model.

The first sentence here is unobjectionable. But is there any meaningful sense in which the contribution of a large language model is “especially” difficult to distinguish from that of a human co-author? On the face of it, since it is possible to document every step of the process, it would seem much easier to demarcate the contributions made by the language model. Just consider, by contrast, the ongoing, unrecorded conversation between two peers, over months and years, that results in a conventionally co-authored paper. But, more importantly, this inability to itemize the contributions of each author is precisely why co-authors are conventionally both held responsible for the entire paper their names are signed to. That is, the consequence of the lack of clear lines is simply a shared, un-demarcated responsibility for the text. Since the journal’s author guidelines implicitly rule out this taking of responsibility by ChatGPT, David’s attempt to “attribute” some of the content, content that is not demarcated from his own, is a de facto abrogation of responsibility for the meaning of an (unspecified) portion of the text.

After the procedural description that I already quoted, the note ends as follows:

In this case, the language model was not used as an instrument by a human agent but functioned like a co-author and interlocutor — producing text, proposing structures, and participating in the generation of new insights. The result of this effort is not the product of one or the other, but of this entangled (and difficult to disentangle) interchange.

I have emphasized “like a co-author” because I believe this is a violation of (the spirit of) the journal’s rules against giving language models “recognition as an author” (recall the ICMJE statement that we should “not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author”). HMC’s guidelines are clear: “Currently, machines do not have the ability to accept responsibility or discipline for work created and therefore are prohibited from authorship.” That is, David may be right that he cannot himself disentangle his own contribution from ChatGPT’s — though I would say that scholars should normally pride themselves on being able to disentangle their own contributions from their sources — but before you publish a work you must accept responsibility for it. My objection to David’s note is that he does not do this. He obscures the “authorship” of the whole paper behind a screen of stochastic processes that he claims he cannot “disentangle” from his own intentions.

Now, having not read it, the only access I have to the “intention” of “the Authors” is the abstract of the paper, which is the basis on which we often decide whether or not a paper is worth our time to read. Here it is in full:

This essay—which is not only about human-machine collaboration but is a performance in human-machine collaboration—interrogates the shifting terrain of authorship and creativity in the age of generative artificial intelligence (GAI). Challenging both the instrumentalist view of technology and the romantic myth of the singular genius, it argues for a reconceptualization of creative production as distributed, dialogical, and co-constituted. Drawing on both theoretical innovations in poststructuralism and the practices of pre- and post-modern content creators, the essay repositions the algorithm not as a mere tool but as an active participant in the generation of meaning. In doing so, it exposes and disrupts—in both content and form—the metaphysical assumptions that continue to underwrite our understanding of writing, agency, and communication.

The gesture at the “shifting terrain of authorship” and the declared aim of “exposing and disrupting … the metaphysical assumptions that continue to underwrite our understanding of writing …” resonate, as David has made clear on X, with thinkers like Barthes, Foucault, and Derrida, i.e., what he here calls “theoretical innovations in poststructuralism” and elsewhere simply “the death of the author.”

Now, I disagree with David about the consequences of poststructuralism for both authorship in general and the place of large language models in that terrain. Let me say that again: I disagree with David. What he has done with this paper is attempt to force me (and others like me) into accepting his conclusion simply by reading and interpreting his text. While even his journal won’t let him do it, he has asked his readers to recognize ChatGPT as his co-author. If his aim is to “disrupt” the metaphysics of authorship, he will have won his argument the moment I begin to read his text as though it might possibly mean something.

On these grounds, then, I refuse to read the paper. There is much more to be said. And I have not said even this much as efficiently as I could. The issue is serious and the consequences are worth thinking about. In future posts I will deal with the problem of AI authorship more generally and without direct reference to this paper. David asked me to make an argument. And I have at least outlined one here, stuffing it with more documentation than might be useful to the reader. At least we’re now on the same page, I hope.

I think this paper is a bad idea — indeed, an infamous device — and, as I said on X, it is unethical in at least one simple sense: it sets a bad example for others, especially students, to follow. In my opinion, even reading it is bad form. But talking about our reasons for not reading it is of course entirely in order. Finally, let me address something that troubles me in this whole exercise: do all these words not vindicate David’s little stunt? Is he not drawing attention to an issue and getting us talking about important things? Well, I would not condone the literal murder of a writer to draw attention to the literary “death of the author.” And I might even pen a lengthy denunciation of the act if someone I otherwise respected ever committed it. I would certainly refuse to read the manifesto he wrote in the blood of the victim!
