Monthly Archives: October 2023

Almost Writing

About six years ago, a question occurred to me. Is blogging even writing? To use a familiar distinction in its pre-deconstructed form, is blogging more like writing or speaking? After deconstruction, of course, this question doesn’t come up, but on most days I am utterly beholden to the metaphysics of presence and worried about writing, if you will, “proper” — or, as the English say, properly worried about writing. The future of writing is online. Is online writing even writing? I worry.

The issue turns on the famous displacements of the subjects of writing in time and place. The meaning of the text is produced by the difference and deferment of the words. Even the word “subject” is ambiguous here since it can refer to both the subject that is writing and the topic being written about. The important thing about writing, it seems to me, is that, while it is going on, the subject (the “I”) of the text is utterly alone. Explicitly so. Excruciatingly so. The reader is implicit, imaginary. In an important sense, the reader is a “fantasy” of the writer. (“Imagine having readers!” the author sighs.) The loneliness of the writer — what we may call, borrowing a line from Virginia Woolf, “the loneliness that is the truth of things” — is eventually to be shared with the reader. But first it must be suffered alone. And, here’s the thing: in the moment of writing, it seems it could last forever.

Blogging, I want to say, is different. As I write these words I know my suffering will stop. I’ve decided to post this before 8:00 AM. (Let’s see if I find the courage.) It’s not really writing. I’m not deferring the meaning of my words — not for very long — I will give them to my reader presently, immediately. Almost. Blogging almost isn’t writing. Or is it almost writing?

L’art pour l’art

Dave Cormier recently advised us to stop assigning essays because the essay doesn’t teach students what he wants to teach them, namely, how to do research. My immediate response to this was to say that I’m teaching students how to write sentences to put in paragraphs to make arguments to use in essays. To steal a line from T. S. Eliot, I’m teaching prose primarily as prose and not another thing.

In fact, I felt like I had pre-emptively responded to his post nine months ago with my first post of the year, reflecting on the coming (now ongoing) disruption of higher education by generative AI: “How They Must Write: Saving the Five-Paragraph Essay and Other Contingencies.”

When I say I want to teach students how to “make arguments” I’m not saying that I’m teaching the essay in order to build “critical thinking” skills, although I do believe that the skills on Dave’s long list, which generative AI will increasingly “cover”, are important and still need to be taught. I think Dave agrees with this, although I was surprised to see him say that AI means that “the student doesn’t need to be creative” any longer.

I’m not using the essay to teach information literacy. I’m literally (!) just trying to teach literacy — specifically, writing skills. Students should understand how sentences and paragraphs work. They should be able to compose them into arguments and they should be able to occupy a reader’s attention effectively for 5 or 11 or 40 minutes. Students should simply be able to write essays, not for the sake of some other skill that is required to write them, but for the sake of writing essays. Indeed, I’ll teach some of those other skills for the sake of the essays!

So I will keep assigning essays, including take-home essays. It’s just that I’ll only give grades for in-class writing. Or, rather, that is my advice to teachers.

ChatGPT Can’t Write

A recent paper by Mark Coeckelbergh and David Gunkel in AI & Society has got me thinking. Since I know that David will now immediately think his work is done — getting us thinking is his goal — let me stress from the outset that it has mainly got me thinking that they’re wrong. Since their aim is “deconstructive”, however, telling them they’re “wrong” is not so straightforward. Properly speaking, in order to be wrong, there have to be facts of the matter and you have to say something about those facts, and deconstructive writing often resists being read that way. Still, I think I’ve found a sense in which Mark and David are simply and plainly wrong and I invite them to consider it here.

Although it may not be their explicit thesis, I take the underlying message of their paper to be that large language models constitute a “fundamental challenge to long-standing metaphysical, epistemological, and axiological assumptions,” which they sort, roughly, under the rubric of “logocentrism”, and which therefore also gives them an “opportunity” to remind us of the not-quite-so-long-standing but nonetheless familiar “postmodern” critique (or deconstruction) of these assumptions as found in the work of Barthes, Foucault, and Derrida. Specifically, they put the challenge of generative AI as follows: “these algorithms write without speaking, i.e. without having access to (the) logos and without a living voice.” This is the statement that I think is wrong. But I want to make clear that, although I’m not myself ashamed of my logocentrism, I don’t just think it is wrong on those “long-standing metaphysical assumptions” they propose to deconstruct. I want to offer a critique on Mark and David’s own terms.

I should say that I’ve tried this sort of thing before in my conversations with David about robot rights, with rather limited results. I disagree with him that we can “face” a robot as an “other” in Levinas’ sense; and I don’t think robots provide the correlative “incidents” of “rights” in Hohfeld’s sense. As far as I can tell, he has so far been unmoved by my arguments, which are based both on my understanding of how robots work and my reading of Levinas and Hohfeld. Past failures notwithstanding, I can’t think of a better way to do critique than that, and I’m going to offer something similar here.

Mark and David pass somewhat lightly over how language models work, encouraging us to take them more or less at face-value (or to abolish any distinction between face-value and “real” value). But we have to remember that there are many ways to imagine a machine putting words together that we would not consider writing. In a previous post, I suggested that LLMs are not, in fact, “artificial intelligences”; they are merely “electronic articulators”; and I asked us to consider the following example of a (non-)”writing” machine:

Imagine you have three bags, numbered 1, 2, 3. In bag number 1 there are slips of paper with names on them: “Thomas”, “Zahra”, “Sami”, “Linda”. In bag number 2 there are phrases like “is a”, “wants to be a”, “was once a”, and so forth. In bag number 3, there are the names of professions: “doctor”, “carpenter”, “lawyer”, “teacher”, etc. You can probably see where this is going. To operate this “machine”, you pull a slip of paper out of each bag and arrange them 1-2-3 in order. You’ll always get a string of words that “make sense”. Can this arrangement of three bags write?
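For the curious, the three-bag machine is easy enough to operate in code. This is just a sketch of the thought experiment above, using the slips from my example; the point, of course, is precisely that nothing here is writing.

```python
import random

# The three bags from the thought experiment, each holding slips of paper.
bag1 = ["Thomas", "Zahra", "Sami", "Linda"]                  # names
bag2 = ["is a", "wants to be a", "was once a"]               # linking phrases
bag3 = ["doctor", "carpenter", "lawyer", "teacher"]          # professions

def pull_slips():
    """Pull one slip from each bag and arrange them in order 1-2-3."""
    return " ".join([random.choice(bag1),
                     random.choice(bag2),
                     random.choice(bag3)])

print(pull_slips())  # e.g. "Zahra wants to be a doctor"
```

Every string it produces “makes sense”, and it will never produce anything else. Can this arrangement of three bags write?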

My suggestion is, appearances notwithstanding, that what language models in fact do is not something Barthes, Foucault, and Derrida would countenance as writing, any more than they would call our system of three bags an “écrivain”. Since these authors are “dead” in all the relevant senses, that’s not going to bother Mark and David, of course, so let me put it in more technical terms: what large language models do cannot be construed as writing even according to the “innovations” of the “postmodern literary theory” that Mark and David propose to “capitalize on”. The operations of ChatGPT are not “grammatological”; they do not make a “différance”. Their output, as a consequence, is not a “text” that can be “subject” to “deconstruction” or, even, I dare say, “reading”. It can of course easily be turned into text by a writer who puts their name to it, authorizing it and then, in order that it may be read, politely dying.*

I wish to make this argument by quoting passages from Barthes, Foucault, and Derrida as they appear in Mark and David’s text and simply challenging them to explain how they imagine ChatGPT carries out the necessary operations required of even the postmodern conception of “writing”. Let’s start with Barthes.

Text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation, but there is one place where this multiplicity is focused and that place is the reader. … A text’s unity lies not in its origin but in its destination.

Roland Barthes, “Death of the Author”

Given what we know about how ChatGPT generates its output, it’s hard to see it “drawing from cultures” or “entering into mutual relations”. That is, this “multiplicity” that produces a text is entirely foreign to ChatGPT, which merely computes the next probable token in a string of tokens. I’m certainly curious to hear the analysis (or even deconstruction) of how the output is “made” as “text.”
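To make “computes the next probable token” concrete, here is a deliberately tiny stand-in. The probability table is invented for illustration; a real LLM scores a vocabulary of tens of thousands of tokens with a trained neural network. But the operation is the same in kind: look up a distribution over continuations, pick a likely one, append it, repeat.

```python
# A toy next-token table: for a two-token context, the probability of
# each possible continuation. These numbers are made up for illustration.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
}

def next_token(context):
    """Return the most probable next token given the last two tokens."""
    dist = toy_model[tuple(context[-2:])]
    return max(dist, key=dist.get)

tokens = ["the", "cat"]
for _ in range(2):
    tokens.append(next_token(tokens))
# tokens is now ["the", "cat", "sat", "on"]
```

Nowhere in this loop is there a place for “dialogue, parody, contestation”; there is only the table and the lookup.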

Next, here’s Foucault:

Although, since the eighteenth century, the author has played the role of the regulator of the fictive, a role quite characteristic of our era of industrial and bourgeois society, of individualism and private property, still given the historical modifications that are taking place, it does not seem necessary that the author function remain constant in form, complexity, and even in existence. I think that, as our society changes, at the very moment when it is in the process of changing, the author function will disappear.

Michel Foucault, “What is an Author?”

But it’s important to recall that the disappearance of the author function does not mean anyone or anything can now “write”. Rather, new questions arise: “What are the modes of existence of this discourse? Where has it been used, how can it circulate, and who can appropriate it for himself? What are the places in it where there is room for possible subjects? Who can assume these various subject functions?” How, I want to know, can ChatGPT occupy these positions, execute these new functions?

Finally, let’s consider Derrida. Mark and David seem to think that for ChatGPT in particular “there is nothing outside the text”: “For the text of an LLM to make sense, the texts (and the contexts to which they refer) are enough. For this purpose, nothing more is needed.” ChatGPT, on their view, becomes not just a possible writer but an exemplary writer of non-logocentric text (better than Beckett? Better than Gertrude Stein?). But would Derrida agree?

‘There is nothing outside the text.’ That does not mean that all referents are suspended, denied, or enclosed in a book, as people have claimed, or have been naïve enough to believe and to have accused me of believing. But it does mean that every referent, all reality has the structure of a differential trace, and that one cannot refer to this ‘real’ except in an interpretive experience.

Jacques Derrida, Limited Inc

Does it not seem like Mark and David’s “nothing outside the text” is, in the case of LLMs, a matter of suspending, denying, or enclosing all referents in a book? Where, in the operations of ChatGPT, do we find it actually referring, i.e., producing a “differential trace” of the real? Where is ChatGPT’s “interpretive experience”?

Like I say, I want to leave this as a challenge. Mark and David have forced me to read Barthes, Foucault, and Derrida very closely these past few days, and that is of course rewarding all on its own. But the more I read them, the less likely it seems to me that they would countenance what ChatGPT does as any kind of “writing”. Sure, Barthes suborned the murder of literary authority, but he didn’t leave only a reader in its place. A “scriptor” was to take the author’s place. We could look more closely at what he thought this writer was doing. But I doubt we could ever conclude that ChatGPT is doing it.


*”Writing,” says Barthes, “is that neutral, composite, oblique space where our subject slips away, the negative where all identity is lost, starting with the very identity of the body writing.” It seems to me that this implies a body to begin with, a subject to slip away. Perhaps, when he so famously says that “the birth of the reader must be at the cost of the death of the Author,” we misunderstand him if we think the Author must die once and for all — that all authors die to make all readers possible. Rather, the author must precisely live, to do the writing, to be the body writing in practice, but must then die, if only in principle (or on principle, if you will), in order for the text to be read.

Being Conversant

In so far as we take the ‘organic’ character of language seriously, we cannot accurately describe the first steps towards its conquest as learning part of the language; rather it is a matter of partly learning.

Donald Davidson

In an academic setting, you don’t know something if you can’t participate in a conversation with other knowledgeable people about it. That is of course a very pragmatic requirement and what Davidson says of language learning certainly also applies to learning about the world. It’s not that we either know something or we don’t; we know something to an extent. Likewise, it’s not that we can or cannot talk about something; but we are more or less conversant about it.

But, if conversation can be a test of our knowledge, can it also be a source of it? As I said the other day, my gut says no. If we talk to a knowledgeable person, we can often feel like we have learned something, even a great deal, but when we then go on to either use that knowledge or talk to others about it, we find that there is something crucial we missed. For academic purposes, especially, we don’t really know what we’re talking about until we have checked the sources we have heard about. A conversation may, thus, occasion learning but cannot, in itself, accomplish it. It is not until we have read about it or experienced it ourselves, or perhaps even just thought very carefully about it, that we can truly say we “know”. Such is the case, anyway, at university.

Fact and Nuance

I’m writing this post from memory. Yesterday, I was talking to some students and quoted Norman Mailer as saying, “a fact is a compression of nuances that alienates the reality.” Now, as I recall, (and I will check when I get the chance), the actual sentence (somewhere in, I think, The Presidential Papers), says, “…alienate the reality,” i.e., the verb is in the plural, which seems to refer back to “nuances” not “compression”. It’s interesting that such a tiny nuance of language (which may in turn be a typo or misprint) can shift the meaning of a sentence so radically. I’m pretty sure he meant that the compression, not the nuances, alienate(s) the reality, but is there a fact of the matter to settle this question? (Do you see what I did there?)