We live in an age of science and abundance. The care and reverence for books as such, proper to an age when no book was duplicated until someone took the pains to copy it out by hand, is obviously no longer suited to ‘the needs of society’, or to the conservation of learning. The weeder is supremely needed if the Garden of the Muses is to persist as a garden.
EZRA POUND, ABC of Reading
For a couple of years now on Twitter, David Gunkel has been challenging me to “think otherwise” about robot rights. I’m still not exactly sure what (or how) he thinks about robot rights, but, to his credit, my own thinking about the issue has become clearer during the time that I’ve engaged with his work. (Most recently, see his chapter, “The Rights of Robots”.) While the subject of robot rights interests me at a gut level as an amateur philosopher, I’ve come to realize that an important part of it is actually in my professional wheelhouse. This summer, I’ve given myself the project of getting some of my thoughts written down.
We all know that technology has had a profound effect on writing practices. Just in the last thousand years of human history, the transitions from manuscripts to moveable type, from typing to word-processing, from dictionaries to spell checkers, from style guides to grammar checkers, and from spell and grammar checking to autocompletion, have gradually, albeit with increasing intensity, transformed what it means to say that someone has “written” something. The day is already upon us when a properly trained language model like GPT-3 can produce a plausible blog post with very little human guidance. The day when it can be trained to produce a coherent, scholarly prose paragraph that meets my formal definition (if not my personal standards) is probably not far off. Indeed, I’d be surprised if at least one hasn’t already been produced.
This raises the question, “Can a robot be an author?” (I have said it can produce text, and this is undeniable, but can it write?) The question is analogous to questions about the “moral standing” of robots or their “status as persons”, and it can be framed explicitly as a “rights” issue by asking: under what circumstances might a machine be granted the “moral right to be identified as the author” of a text?
If you or I write a poem, we can assert the moral right to be identified as the author of that poem. Now, a canso, for example, is a relatively simple structure with a relatively simple purpose. Over a hundred years ago, writing about the troubadour’s predicament as it stood already eight centuries ago, Ezra Pound put it as follows:
After the compositions of Vidal, Rudel, Ventadour, of Bornelh and Bertrans de Born and Arnaut Daniel, there seemed little chance of doing distinctive work in the ‘canzon de l’amour courtois’. There was no way, or at least there was no man in Provence capable of finding a new way of saying in six closely rhymed strophes that a certain girl, matron or widow was like a certain set of things, and that the troubadour’s virtues were like another set, and that all this was very sorrowful or otherwise, and that there was but one obvious remedy.
Ezra Pound, “Troubadours—Their Sorts and Conditions”
I immediately imagine prompting GPT-3 with “Write six closely rhymed strophes that say that a certain girl, matron or widow is like a certain set of things, and that the troubadour’s virtues are like another set, and that all this is very sorrowful or otherwise, and that there is but one obvious remedy.” With a little training (the canso provides a rich tradition of exemplars to be devoured by a “learning machine”), I’m sure GPT-3 could produce a poem equal to one I could produce on an average day (without the intercession of the Muses, let’s say). But who is the author of that poem? Was this poem actually “written”? Can a sufficiently trained language model claim authorship of the poem?
A language model can also be trained to summarize a journal article, or even a whole set of journal articles, and a few years ago Springer published a book about lithium batteries that was written by such a machine. Who (if anyone) is the author of that book? And under what circumstances would we grant an algorithm either (legal) copyright or (moral) standing as an author? Why would we do so? Why might we have to? What would it mean if we did?
These are the questions that I would like to explore over the summer. I’m expecting to learn something as I look into this (I’m already learning about autoregressive language models and deep learning from people like Jay Alammar, for example), but I won’t keep you guessing about my views going in. Under no circumstances can a machine be an author. Robots can’t write. Writing is not merely text prediction, and scholarly discourse is not merely a language model. As Borges put it long ago, “a book is more than a verbal structure or series of verbal structures; it is the dialogue it establishes with its reader and the intonation it imposes upon his voice and the changing and durable images it leaves in his memory” (“A Note on (toward) Bernard Shaw”, Labyrinths, p. 213). Properly speaking, there can be no artificial writing because there is no artificial imagination. Imagination precedes artifice.