
The Human Scale

When I talk about academic writing with students at the Danish Design School, I begin by showing them a short clip of Thomas Leslie, Professor of Architecture at the University of Illinois, talking about Pier Luigi Nervi’s approach to designing structures. Before I unpack it, I encourage you to take five minutes to watch it yourself (from 27:25 to 32:12) and see if you can discern the lessons for writers that I extract from it.

First of all, it’s useful to think of your writing problem as in some way related or analogous to the problems of your academic discipline. (When I talk to students in innovation management, for example, I tell them that “Good writing is the creative destruction of bad ideas”; when I talk to project management students, I try to get them to see that a collaborative writing project is one of the most difficult projects they may ever manage.) Getting students to think of writing a paper as a “design problem” is especially apt, not least on Leslie’s definition of a “designer”: “someone who thinks things through.”

Another great point that he makes early on is that Nervi’s greatness lay in not “throwing up his arms” at “impossible constraints” but, rather, going ahead and working within them to “hack together” a solution under the material conditions he had been given. Indeed, for Nervi, the aim was to build something “out of almost nothing,” and this, of course, is the essential and difficult problem of the writer: to use the very limited materials provided by words to express what we know. Nervi, like any writer, also had to adapt his thinking to the time constraints he was given. He had a deadline and he made sure that his design could be realized within it.

But what I really love about Leslie’s presentation here is the way he relates the handmade components (the ferrocemento components of the roof) to the larger project. The “rigorous, algorithmic process” of producing and assembling the pieces ends up producing an impressive “architectural space”. The “structural form” is intimately related to “the pattern that comes from the human scale of the process”. Nervi’s design assumed that he would be “working with human beings” and this sensibility is then felt even in the enormous exhibition hall that results. Something important emerges from the fact that, as Leslie puts it, “people are actually making all of the things.”

This is something I latch onto, especially, as you can imagine, in this so-called “age of AI”. A larger text, like a research paper or thesis, will necessarily be a complex structure with many working parts. But it is important that the reader experience this overarching (!) “architectural form,” not as something that was “generated” by a monstrously intelligent machine on the basis of some “large” stochastic “model” but, rather, as something that was crafted by human hands, one paragraph at a time, one deliberate moment after another.

Why I Refuse to Read the Paper David Gunkel “Co-Authored” with ChatGPT

The journal Human-Machine Communication recently published a paper by David Gunkel called “Prompted by me. Generated by ChatGPT,” which he promoted on X under the slogan “Death of the Author.” I had a look and quickly made up my mind not to read it. After a little more thought, I realized that my personal decision should also be a maxim for others. “I think it’s important for academics to refuse to read this kind of paper,” I said, reposting his paper. Over the past few days, David and I have been going back and forth about it, and he finally suggested that I should make an argument for my refusal. This post is a (too long) first draft of that argument.

TL;DR: David’s “process note” abrogates responsibility for the text by pre-emptively deferring an unspecified amount of its intention to a stochastic process. The note is not a declaration but a disclaimer. This renders the text uninterpretable.

Let me begin with what I hope is an elementary logical point that I was surprised to have to make repeatedly in the threads on X. If there are good reasons not to read a text, they are not to be found in that text itself. If that were the case, one would have to read it in order to defend one’s refusal to read it, a catch-22. Indeed, on X, David has implied that I am rejecting an argument that I am refusing to read, but this isn’t a fair criticism of my stance. I am refusing even to hear an argument, on the basis of how it has been presented.

What, then, are my reasons and on what basis have I formed them? I did, of course, have to read some of the paper before refusing to go on. I read the title, the byline, the abstract, and the process note. I have also consulted the journal’s policies, especially those related to AI. And I have had a look at the editors and editorial board of the journal; David serves on the latter.

In the “Process Note on Composition and Attribution” — which, to be precise, has been placed in a box and is signed, enigmatically, “The Authors,” although the paper formally has only one author, namely, David J. Gunkel — the paper presents itself as a “human-machine collaboration”. Part of the note is procedural and part of it is philosophical. The procedure is stated in the middle of the note:

The text that is presented here emerged through an iterative process involving prompts, generated responses, revisions, and theoretical provocations that circulated between David J. Gunkel and ChatGPT 4o. At times, the human component of the team guided or corrected; at other times, the model generated formulations that challenged and even recontextualized the human’s own assumptions and direction.

I want to stress that I have no objections to this procedure. This is no doubt already a common way to compose text “in the age of AI” and it’s altogether likely that most scholars and students in the future will compose their texts in this way to a greater or lesser extent. To anticipate my philosophical objections (or my objections to the philosophical part of the note): I might take issue with the presentation of David as “the human component of the team,” which does indeed sound like it was generated by the machine, not the man. But the iterative process that is being described here is a perfectly legitimate use of technology in the production of texts and, indeed, or perhaps better, in the development of one’s ideas. Prompting a language model to probe the prose of the world strikes me as a reasonable thing to do in some cases, and even using it to “raise the temperature” of (i.e., introduce some randomness into) one’s thinking is legitimate if you like that sort of thing. In any case, it is not what David did with the machine, but the significance he accords it for the authorship of his text, that motivates my refusal to read it.

Before we get to that, however, we have to answer the question of why, if the process note can be attributed to “the Authors” (in the plural), the published paper has only one author, namely, like I say, David — indeed, David J. Gunkel, whom I suspect the David I know and love to goad on X might not fully identify with. Here the answer is simple and is to be found in the publication ethics statement of the journal, Human-Machine Communication:

No tool, program, or other forms of AI (such as a large language model, i.e., LLM) will receive a byline (recognition as an author) on an article published in Human-Machine Communication. Currently, machines do not have the ability to accept responsibility or discipline for work created and therefore are prohibited from authorship. Authors using any machine-generated content, similar to content created by an LLM, are required to document this in both the acknowledgments and methods section of a submission. Furthermore, HMC may place an LLM-badge on the article as an acknowledgment to readers about the practice.

Now, I do in fact have an issue (to be taken up in another post) with the requirement to “document” machine-generated content and the practice of labelling it with badges. (I note, however, that HMC has not, for some reason, placed an LLM-badge on David’s article.) But I completely agree that AIs should never be given a byline. In a moment, we’ll get to my core objection, which is that the process note explicitly recognizes ChatGPT as an author and therefore seems, prima facie, to violate HMC’s own policy here. Before we get to that, however, I want to note that “HMC supports the ICMJE definitions of authorship,” which state that “Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author.” I will agree with anyone who suggests that David’s paper follows the letter of these rules, but I will insist that it violates their spirit. Indeed, the purpose of the paper, as stated in the abstract, is to “disrupt” those rules or, as David also sometimes likes to say, to “think otherwise” about them. That is what I refuse to do, and that is why I refuse to read the paper.

Taking the procedural part as read, then, the process note makes two assertions of a more philosophical nature that I reject. The first is a general reflection on collaborative writing:

As with any collaborative work, it can be difficult — if not impossible — to draw clear lines demarcating where the contribution of one partner ends and the other begins. This is especially the case when one of the partners is a large language model.

The first sentence here is unobjectionable. But is there any meaningful sense in which the contribution of a large language model is “especially” difficult to distinguish from that of a human co-author? On the face of it, since it is possible to document every step of the process, it would seem much easier to demarcate the contributions made by the language model. Just compare this to the ongoing, unrecorded conversation between two peers, over months and years, that results in a conventionally co-authored paper. But, more importantly, this inability to itemize the contributions of each author is precisely why co-authors are conventionally both held responsible for the entire paper that their names are signed to. That is, the consequence of the lack of clear lines is simply a shared, un-demarcated responsibility for the text. Since the journal’s author guidelines implicitly rule out this taking of responsibility by ChatGPT, David’s attempt to “attribute” the contribution of some of the content, content that is not demarcated from his own, is a de facto abrogation of responsibility for the meaning of an (unspecified) portion of the text.
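To make the point concrete: nothing prevents such a session from being logged turn by turn. Here is a minimal sketch, assuming an OpenAI-style chat API; the model name, file name, and prompt are my own illustrative choices, not anything drawn from David’s paper.

```python
# A sketch (my illustration, not David's actual workflow) of how every
# machine contribution in an iterative writing session could be logged,
# so that the model's share of the text remains fully demarcated.
# Assumes the OpenAI Python SDK; model and file names are illustrative.
import json
from datetime import datetime, timezone

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
transcript = []    # one entry per turn: who wrote what, and when

def turn(prompt: str, model: str = "gpt-4o") -> str:
    """Send one prompt and record both sides of the exchange."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    stamp = datetime.now(timezone.utc).isoformat()
    transcript.append({"time": stamp, "author": "human", "text": prompt})
    transcript.append({"time": stamp, "author": model, "text": text})
    return text

draft = turn("Draft a paragraph on authorship and generative AI.")
with open("process_log.json", "w") as f:
    json.dump(transcript, f, indent=2)  # full provenance, on disk
```

With a log like this in hand, drawing “clear lines demarcating where the contribution of one partner ends and the other begins” is, at the level of the text at least, a matter of reading the file.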

After the procedural description that I already quoted, the note ends as follows:

In this case, the language model was not used as an instrument by a human agent but functioned like a co-author and interlocutor — producing text, proposing structures, and participating in the generation of new insights. The result of this effort is not the product of one or the other, but of this entangled (and difficult to disentangle) interchange.

I have emphasized “like a co-author” because I believe this is a violation of (the spirit of) the journal’s rules against giving language models “recognition as an author” (recall the ICMJE statement that we should “not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author”). HMC’s guidelines are clear: “Currently, machines do not have the ability to accept responsibility or discipline for work created and therefore are prohibited from authorship.” That is, David may be right that he can’t himself disentangle his own contribution from ChatGPT’s — though I would say that scholars should normally pride themselves on being able to disentangle their own contribution from their sources — but before you publish a work you must accept responsibility for it. My objection to David’s note is that he does not do this. He obscures the “authorship” of the whole paper behind a screen of stochastic processes that he claims he cannot “disentangle” from his own intentions.

Now, since I have not read the paper, the only access I have to the “intention” of “the Authors” is the abstract, which is the basis on which we often decide whether or not a paper is worth our time to read. Here it is in full:

This essay—which is not only about human-machine collaboration but is a performance in human-machine collaboration—interrogates the shifting terrain of authorship and creativity in the age of generative artificial intelligence (GAI). Challenging both the instrumentalist view of technology and the romantic myth of the singular genius, it argues for a reconceptualization of creative production as distributed, dialogical, and co-constituted. Drawing on both theoretical innovations in poststructuralism and the practices of pre- and post-modern content creators, the essay repositions the algorithm not as a mere tool but as an active participant in the generation of meaning. In doing so, it exposes and disrupts—in both content and form—the metaphysical assumptions that continue to underwrite our understanding of writing, agency, and communication.

The gesture at the “shifting terrain of authorship” and the declared aim of “exposing and disrupting … the metaphysical assumptions that continue to underwrite our understanding of writing…” resonate, as David has made clear on X, with thinkers like Barthes, Foucault, and Derrida, i.e., what he here calls “theoretical innovations in poststructuralism” and elsewhere simply “the death of the author”.

Now, I disagree with David about the consequences of poststructuralism for both authorship in general and the place of large language models in that terrain. Let me say that again: I disagree with David. What he has done with this paper is to attempt to force me (and others like me) into accepting his conclusion simply by reading and interpreting his text. While even his journal won’t let him do it, he has asked his reader to recognize ChatGPT as his co-author. If his conclusion is to “disrupt” the metaphysics of authorship, he will have won his argument the moment I begin to read his text as though it might possibly mean something.

On these grounds, then, I refuse to read the paper. There is much more to be said. And I have not said even this much as efficiently as I could have. The issue is serious and the consequences are worth thinking about. And in future posts I will deal with the problem of AI authorship more generally and without direct reference to this paper. David asked me to make an argument. And I have at least outlined one here, stuffing it with more documentation than might be useful to the reader. At least we’re now on the same page, I hope.

I think this paper is a bad idea — indeed, it is an infamous device — and, as I said on X, it is unethical in at least one simple sense: it sets a bad example for others, and especially students, to follow. In my opinion, even reading it is bad form. But talking about our reasons for not reading it is, of course, entirely in order. Finally, let me address something that of course troubles me in this whole exercise: Do all these words not vindicate David’s little stunt? Is he not drawing attention to an issue and getting us talking about important things? Well, I would not condone the literal murder of a writer to draw attention to the literary “death of the author”. And I might even pen a lengthy denunciation of the act if someone I otherwise respected ever committed it. I would certainly refuse to read the manifesto he wrote in the blood of the victim!

Private and Publish

With my blog more out of view than it has been in the past, I’m finding that I approach writing with less urgency. This is a good thing, I think, but I will need to establish an entirely “private” writing practice if I want to make anything of it.

That is, I need to start thinking about writing more “publishable” pieces. I have put those two words in scare quotes to signal the uneasy tension between them and their difference from the mood of blogging. When I am writing a blogpost, as I am doing now, I almost feel like I’m thinking out loud, in public. In fact, my catchy title (of which I am at this moment still quite proud, though I may feel differently by the end) came to me while writing the first sentence of this paragraph. Blogging isn’t something we do in private, and posting isn’t quite publishing. And writing essays and books has an altogether different feel.

It requires days and weeks of lonely work, unobserved by others, sometimes making progress, sometimes losing ground. None of the struggle is visible to the reader except as a trace in the final product, often seen mostly in its imperfections. (Hopefully, most of the struggling ends up being successful, and therefore invisible.) There is some legitimate concern these days about the private/public distinction, the erosion of our privacy by forces outside of us. But perhaps we also need to understand the practices that maintain our privacy from the inside and, perhaps ironically, this doesn’t mean withdrawing from the world, staying out of public view. Rather, it means carefully selecting, in private, what to share with others, through publication.

This idea that publishing a work properly, not merely posting it to the internet, constitutes an “interiority,” a privacy of the mind, is interesting to think about. It’s also an experience that I don’t give myself often enough. I like to think that I have a rich and interesting inner life, but, truth be told, I don’t work on it, I don’t elaborate it, nearly enough in prose. I need to set aside some hours in the morning to dwell upon my private thoughts with a public in mind. And the best way I know of to do this is to write down what I think for someone I respect. That is the basic scene of good prose writing.

I once somewhat bitterly suggested that, in this age of “surveillance capitalism” (and, indeed, “central intelligence”),

if you want your privacy, you have to keep it like a secret.

As invective against a certain kind of governance, I still think this is a nice way of putting it. But I need to go at this more constructively, with a healthier attitude. In fact, I need a more durable, long-term strategy. As an individual, then, maybe the best way to maintain your privacy is to open it

like a book.

A Simple Test

I just asked Microsoft’s Copilot to write me a 1000-word essay about the normative implications of Quine’s naturalized epistemology, giving it a prompt of less than 20 words. It immediately complied and, within a few seconds, generated a coherent essay that could easily earn a decent grade in an undergraduate philosophy course. What I mean is that if a student had written the same essay under closed-book conditions, given three hours at the end of a course, it would clearly have demonstrated a familiarity with the texts studied (Copilot was able to correctly cite the two key texts that I would have) and an understanding of the issues involved. The exact grade would of course depend on the level of the course and the standards of the teacher, but the student would certainly have had to attend the class and at least skim the readings to pull it off.
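(For anyone who wants to reproduce the test against a model they can reach programmatically, here is a minimal sketch, assuming the OpenAI Python SDK. I used Copilot’s free web interface, so the model name and the exact wording of the question are my own illustrative stand-ins.)

```python
# A minimal sketch of the "simple test": a sub-20-word prompt, no context.
# Assumes the OpenAI Python SDK; the model name is an illustrative
# stand-in (my own test used the free Copilot web interface).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "What are the normative implications of Quine's naturalized "
    "epistemology? Answer in a 1000-word essay."
)  # 14 words, well under the 20-word limit

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": PROMPT}],
)
essay = response.choices[0].message.content
print(len(essay.split()), "words")  # Copilot went over by about 100
print(essay)
```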

I mention this, not to counter those who still insist that AI is not capable of doing their assignments, but to answer those who would have us abandon as meaningless any assignment that an AI can easily do. Keep in mind that my little test used a very low-level model (Copilot is available for free to all staff and students at CBS) and my prompt consisted of a one-sentence question along with the instruction to generate a 1000-word essay (it went over by about 100 words). A sophisticated student, faced with a 5000-word term paper at the end of a course they had not followed very closely, would be able to provide a better model with the course syllabus, learning objectives, and even the actual readings. Given a few hours, and assuming an above-average intelligence, they could no doubt cobble something together that would be quite impressive by pre-2022 standards. This ability to fake a semester’s worth of learning over a weekend is the problem we have to face, I think.

In the future, I think universities will have to make students sit for written exams, on-site and off-line, more often. A degree that does not require at least half of a student’s total grades to come from such performances cannot be taken seriously. In fact, transcripts should make it very clear which grades were earned through homework (where AI support should be presumed) and which were earned through invigilated examination. That is, it should be clear whether the graduate of a given program is capable of writing coherently about their subject themselves. Their future employers can use that information as they please.

The simple test that I propose, then, is a 20-word question with no further context than the course that the student has taken. The student is given three hours and up to 1000 words to demonstrate what they have learned by answering the question to the best of their ability. Understanding the question (and its significance) is itself part of the competence being examined. Under these conditions, I am convinced that the instructor who designed and taught the course can easily determine whether the learning objectives have been met, just as a music teacher can evaluate a student’s ability simply by giving them some sheet music and an instrument to play it on, or a drafting teacher can evaluate drawing ability by giving a student a piece of paper and an object as a model. The fact that an AI can also do these things does not make it less impressive when a student can muster their flesh and bones, their brain and their heart, to do it. An education, after all, consists of disciplining the body so as to liberate the mind. It’s important that we show our students what they are capable of.

On Holding Beliefs

Quine suggested that we think of our knowledge as an ever-changing “web of belief”. These beliefs have ontological implications, which is to say that they commit us to the existence of “things” of various kinds, such as furniture, corporations, and even numbers. Some of our beliefs we hold very lightly, others more firmly, and we keep our commitments accordingly. We are not always very explicit about either our beliefs or our ontological commitments — indeed, we may sometimes be entirely unaware of them — but they can often be gleaned from our words and actions even by complete strangers. Granted, there will always be some “indeterminacy” about exactly what we believe and what we think reality consists of. But our peers, at least, usually have a pretty good sense of how we parse our experience into objects of belief. After all, they live in the same world that we do and, for the most part, parse it as we would.

An education is both a revision and a disciplining of our beliefs. We not only come to believe things we hadn’t before believed, and stop believing things we once thought were true; we also learn to hold our beliefs more, let’s say, intelligently. Educated people are, ideally, less afraid of being wrong about something they believe. They have experienced it often enough. Having come to believe something through deliberate effort, they know what sort of doubts may be raised. They sometimes face those doubts very explicitly through the criticism they receive from their peers. And they are not afraid of this criticism either. Just as they are familiar with the experience of being wrong themselves, they are familiar with the errors and misconceptions of others. They hold their beliefs in the face of doubt and criticism, at least until it becomes too much. Then they willingly discard the discredited notion.

I said that we may hold our beliefs firmly or lightly. But it is important to remember that while the strength of a belief may be continuous, its attitude is discrete. We believe something or not. We think that something is true or we do not. We may believe something only for a moment, and hold the belief so lightly that even the slightest doubt removes it altogether, but, while we believed, we believed that something was the case. A belief is a “propositional attitude”; it is a particular take on the reality in which we live.

One of the most important lessons of higher education consists in appreciating the contingency of our beliefs. We believe any one thing only because we believe many other things. And that means that revising one belief will often require us to revise others. Holding beliefs intelligently, then, means being careful about our revisions, always considering the implications. There is no shame in refusing to believe something that requires too radical a change in our existing web of belief, even if the evidence for the proposition is, even in our own eyes, rather strong. In fact, sometimes we are put in the uncomfortable position of thinking something is true that we can’t quite bring ourselves to believe. The problem lies elsewhere in the web, and it will take us a little while to make all the necessary adjustments, to make room for the new among the old. Give it time. And give your peers time to do likewise when it happens to them.
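Quine’s metaphor can be given a crude computational illustration, entirely my own and not any formalism he proposed: model the web as a graph in which each belief records the beliefs that rest on it, so that the cost of a revision is everything downstream that must be reviewed. Peripheral beliefs are cheap to give up; central ones are not.

```python
# A toy "web of belief" as a dependency graph: revising one belief
# flags every belief that rests on it for review. Entirely illustrative;
# Quine offered a metaphor, not a formalism.
from collections import defaultdict, deque

supports = defaultdict(set)  # belief -> beliefs that depend on it

def add_belief(belief: str, grounds: list[str]) -> None:
    """Register a belief together with the beliefs it rests on."""
    for g in grounds:
        supports[g].add(belief)

def revision_cost(revised: str) -> set[str]:
    """Every belief that must be reviewed if `revised` is given up."""
    to_review, queue = set(), deque([revised])
    while queue:
        for dependent in supports[queue.popleft()]:
            if dependent not in to_review:
                to_review.add(dependent)
                queue.append(dependent)
    return to_review

add_belief("the meeting is at noon", ["the calendar is accurate"])
add_belief("I should leave at 11:30", ["the meeting is at noon"])
add_belief("traffic is light before noon", [])

# A central belief touches more of the web than a peripheral one:
print(revision_cost("the calendar is accurate"))      # two beliefs
print(revision_cost("traffic is light before noon"))  # empty set
```

The toy makes vivid why we resist revising central beliefs: not out of stubbornness, but out of economy.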

No one has ever been right about everything. That much is probably obvious. What may be less obvious is that the main purpose of an education is not to make you right about as many things as possible. It is to teach you how to be wrong.