“One great use of words,” said Voltaire, “is to hide our thoughts.” In his famous treatise The Concept of Anxiety, Kierkegaard picked up on this idea, via Young and Talleyrand, and put a distinctively mischievous spin on it. We do, indeed, use language to hide our thoughts, he said, “namely, the fact that we don’t have any.” This is a great way to get into the core of my anxieties about artificial intelligence in general, and large language models like GPT-3 and LaMDA specifically. After all, I’m entirely certain that they have no conscious thoughts, but at least one person who is very close to the action, Blake Lemoine at Google, has been persuaded by their facility with language that they do. For my part, I’m concerned that the presumption that people generally use language to say what they think is being undermined by the apparent ability of unthinking machines to talk.
Now, my concern is mainly with academic or scholarly writing, i.e., writing done by students and faculty in universities. My working definition of this kind of writing has always been that it is the art of writing down what you know for the purpose of discussing it with other knowledgeable people. But this definition is of course a rather earnest one (some would say it is outright quaint) when compared with more cynical definitions that are, I should add, sometimes put forward without a hint of irony. Academic writing, it is said, is the art of putting words together in a way that meets the expectations of your teachers; scholarly writing is the art of getting something past your reviewers so that it will be published and impress your tenure committee. On this view, that is, we use language at university, not to tell each other what we know, but to hide what we don’t know from each other, or, as Kierkegaard might suggest, the fact that we don’t really know anything at all. This is not a pleasant thing for a writing instructor to think about.
Two recent pieces in the Economist provide me with a good way of framing my concerns. “Neural language models aren’t long programs,” Blaise Agüera y Arcas tells us; “you could scroll through the code in a few seconds. They consist mainly of instructions to add and multiply enormous tables of numbers together.” Basically, these programs convert some text into numbers, look those numbers up in enormous tables of values (tuned beforehand, during training, on vast amounts of text), carry out a long series of additions and multiplications, and convert the result back into a string of text. That’s all. What is confusing is that Agüera y Arcas then goes on to say that “since social interaction requires us to model one another, effectively predicting (and producing) human dialogue forces LaMDA to learn how to model people too.” But nothing in his own description of the program gives us any reason to think it “models people” at all. We might say that it uses language to hide the fact that it doesn’t have a model of people.
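To make that crude picture concrete, here is a toy sketch in Python of what “adding and multiplying enormous tables of numbers” amounts to. To be clear, nothing in it is taken from LaMDA or GPT-3; the five-word vocabulary, the table sizes, and the (untrained, random) numbers are all invented for illustration.

```python
import numpy as np

# A toy "language model": it looks numbers up in tables, multiplies them
# together, and turns the result back into a word. Vocabulary, sizes, and
# weights are made up; nothing here is trained.
vocab = ["the", "cat", "sat", "on", "mat"]
token_id = {word: i for i, word in enumerate(vocab)}

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 8))   # table: word -> vector of numbers
weights = rng.normal(size=(8, len(vocab)))      # table: vector -> score for each word

def predict_next(word):
    vec = embeddings[token_id[word]]                 # text converted into numbers
    scores = vec @ weights                           # "add and multiply" the tables
    probs = np.exp(scores) / np.exp(scores).sum()    # scores turned into probabilities
    return vocab[int(np.argmax(probs))]              # numbers converted back into text

print(predict_next("cat"))   # prints some word; the tables were never trained, so it's noise
```

A real model differs in scale, and in having had its tables adjusted on an enormous amount of text, but so far as I understand it, everything such a program “says” comes out of lookups and arithmetic of this kind.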
“There are no concepts behind the GPT-3 scenes,” Douglas Hofstadter explains; “rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answers.” But he, too, ends up being “strangely” optimistic about where this could go if we just turn up the computing power:
“This is not to say that a combination of neural-net architectures that involve visual and auditory perception, physical actions in the world, language and so forth, might not eventually be able to formulate genuinely flexible concepts and recognise absurd inputs for what they are. But that still wouldn’t amount to consciousness. For consciousness to emerge would require that the system come to know itself, in the sense of being very familiar with its own behaviour, its own predilections, its own strengths, its own weaknesses and more. It would require the system to know itself as well as you or I know ourselves. That’s what I’ve called a ‘strange loop’ in the past, and it’s still a long way off.

“How far off? I don’t know. My record for predicting the future isn’t particularly impressive, so I wouldn’t care to go out on a limb. We’re at least decades away from such a stage, perhaps more. But please don’t hold me to this, since the world is changing faster than I ever expected it to.”
I feel at something of a disadvantage with people like this because they understand how the technology works better than I do and seem to see a potential in it that I don’t. That is, after trying to understand how they tell me it works, I conclude that intelligent language models aren’t just “a long way off” but simply impossible to imagine. And yet they assure me that these are all possibilities we can expect to see within a few decades. Some promoters of this technology even tell me that the systems already “model”, “reason”, “perceive”, and “respond” intelligently. But looking at the technical details (within my limited ability to understand them), I simply don’t see them modeling anything, any more than a paper bag can add, as I like to put it, just because you put two apples in it, then another two, and it now holds four.
My view is that we haven’t taken a step towards artificial intelligence since we invented the statue and the abacus. We have always been able to make things that look like people, and other things that help us do things with our minds. The fantasy (or horror) of making something with a mind like ours is also nothing new. In other words, my worry is not that the machines will become conscious, but that we will one day be persuaded that unconscious machines are thinking.
At a deeper level, my worry is that machines will become impressive enough in their mindless output to suggest to students and scholars that their efforts to actually have thoughts of their own are wasted, that the idea of thinking something through, understanding it, knowing what you’re talking about, etc., will be seen as a quaint throwback to a bygone era when getting your writing done actually demanded the inconvenience of making up your mind about something. Since their task is only to “produce a text” (for grading or publication), and since a machine can do that simply by predicting what a good answer to a prompt might be, they might think it entirely unnecessary to learn, believe, or know anything at all in order to succeed.
That is, I worry that artificial intelligence will give scope to Kierkegaard’s anxiety. Perhaps, guided by ever more sophisticated language models, academic discourse will become merely a game of probabilities. What is the sequence of words that is most likely to get me the grade I want or the publication I need?