I Am the Text. The Text is Me. (Or, There Is Nothing Outside the River.)

with apologies to Te Awa Tupua

Like animal rights, the rights of nature are often invoked as a model for thinking about the rights of robots. In March of 2017, for example, the Parliament of New Zealand “confer[ed] a legal personality on the Whanganui River” as part of a settlement with the Maori tribes that traditionally lived along its banks. Scholars like David Gunkel and Josh Gellers frequently cite this act as a key moment in the history of “rights for non-humans” and, therefore, an opening to the possibility of granting rights to machines. If a river can be a person, and the subject of rights, why can’t a robot or other artificial entity?

Source: Google Maps

The short answer is that the Whanganui river, in the sense that we who are oppressed by the Western metaphysics of presence understand it, was not granted personhood by the Te Awa Tupua Act of 2017. Te Awa Tupua is not just a river; its rights belong to something more like a spirit, what the Romans called a genius loci. From the point of view of the Western legal tradition, Te Awa Tupua is basically a corporation tied to a specific geography, much like an incorporated town. The river itself, which is to say, the watercourse through the landscape that we Westerners too easily point to and call “the” Whanganui, does not have any rights according to the law.

The purpose of this post is to think some of these issues through. As usual, I’ll try to bring the discussion around to the possibility that an artificial entity could be an “author”; that is, I will try to see whether Te Awa Tupua can provide a model for a “legal personality” for, say, GPT-3, giving it rights of authorship. The answer is not quite no, but also not quite (and you’ll have to pardon me for not killing this darling of a pun) the watershed moment for “robot rights” that Josh and David imagine.

Obviously, I’m not here challenging the legal personhood of Te Awa Tupua, nor suggesting that it shouldn’t have any rights. The Act clearly says that it does and, as we’ll see, I appreciate the legal brilliance of the settlement. The question I want to address is, What — or, indeed, who — has those rights? Already back in 2012, when the agreement was first reached, the Ministry of Treaty Negotiations made clear that the river would be recognized as a person “in the same way a company is, which will give it rights and interests.” When the act was passed, this idea was stressed again. “I know the initial inclination of some people will say it’s pretty strange to give a natural resource a legal personality,” said Chris Finlayson, who had negotiated the settlement. “But it’s no stranger than family trusts, or companies or incorporated societies.” As I want to show in this post, this interpretation is borne out by the act itself, though, like I say, couched in strangely metaphysical language.

Let’s begin with the sentence in the law that Josh and David wish to emphasize.

14(1): Te Awa Tupua is a legal person and has all the rights, powers, duties, and liabilities of a legal person.

This does indeed seem pretty unambiguous. But let’s pause for a moment to notice that it does not say that the Whanganui River, which is the official name of the watercourse and what you will find on a map, is a legal person. Rather, it says that an entity called Te Awa Tupua, which is what the Maori call it, is a legal person. You don’t have to be Willard Van Orman Quine to find this a little interesting. What is this entity that the law refers to? Is it just the Whanganui River? Or is it something else?

As it happens, Quine wrote a paper many years ago in which he worked through in elaborate detail how it is or isn’t possible to step into, or rather, refer to the same river twice.

The introduction of rivers as single entities, namely, processes or time-consuming objects, consists substantially in reading identity in place of river kinship. (“Identity, Ostension, and Hypostasis”, in From a Logical Point of View, p. 66)

As you can imagine, we’re going to end up making a great deal of this poetic notion of “river kinship”. For Quine, for now, all turns on the profound ambiguity of the apparently simple act of pointing to something.

Such ambiguity is commonly resolved by accompanying the pointing with such words as “the river”, thus appealing to a prior concept of a river as one distinctive type of time-consuming process, one distinctive form of summation of momentary objects. (p. 67)

Until, that is, we know what the Maori mean when they say “Te awa tupua,” we don’t know what sort of thing has been declared a person in New Zealand law. They may as well say “gavagai!” Fortunately, we can read the law to find out; specifically, we can read the two sections before the one I have already quoted.

(12) Te Awa Tupua is an indivisible and living whole, comprising the Whanganui River from the mountains to the sea, incorporating all its physical and metaphysical elements.

Already here we can see that Te Awa Tupua is more than the river; it “incorporates all its physical and metaphysical elements” to constitute an “indivisible and living whole”. But that is not all; this whole also has an identifiable essence:

(13) Tupua te Kawa comprises the intrinsic values that represent the essence of Te Awa Tupua, namely—

Ko Te Kawa Tuatahi

13 (a) Ko te Awa te mātāpuna o te ora: the River is the source of spiritual and physical sustenance:

Te Awa Tupua is a spiritual and physical entity that supports and sustains both the life and natural resources within the Whanganui River and the health and well-being of the iwi, hapū, and other communities of the River.

Ko Te Kawa Tuarua

13 (b) E rere kau mai i te Awa nui mai i te Kahui Maunga ki Tangaroa: the great River flows from the mountains to the sea:

Te Awa Tupua is an indivisible and living whole from the mountains to the sea, incorporating the Whanganui River and all of its physical and metaphysical elements.

This basically restates the definition already set out in section 12, but the next two subsections are crucial for our understanding of how Te Awa Tupua, and not just the Whanganui River, can be a legal person.

Ko Te Kawa Tuatoru

13 (c) Ko au te Awa, ko te Awa ko au: I am the River and the River is me:

The iwi and hapū of the Whanganui River have an inalienable connection with, and responsibility to, Te Awa Tupua and its health and well-being.

Ko Te Kawa Tuawhā

13 (d) Ngā manga iti, ngā manga nui e honohono kau ana, ka tupu hei Awa Tupua: the small and large streams that flow into one another form one River:

Te Awa Tupua is a singular entity comprised of many elements and communities, working collaboratively for the common purpose of the health and well-being of Te Awa Tupua.

That is, the “indivisible whole” called Te Awa Tupua includes the human communities that traditionally reside, not just near the banks of the river that flows from the mountains to the sea, but in all the lands nurtured by the “small and large streams” connected to it. These human communities (iwi) are “inalienably” connected to it.

Indeed, right after this metaphysical entity is given personhood in law, the terms of its representation are also spelled out:

14(2) The rights, powers, and duties of Te Awa Tupua must be exercised or performed, and responsibility for its liabilities must be taken, by Te Pou Tupua on behalf of, and in the name of, Te Awa Tupua, in the manner provided for in this Part and in Ruruku Whakatupua—Te Mana o Te Awa Tupua.

And what, then, is Te Pou Tupua?

18(1) The office of Te Pou Tupua is established.

18 (2) The purpose of Te Pou Tupua is to be the human face of Te Awa Tupua and act in the name of Te Awa Tupua.

18 (3) Te Pou Tupua has full capacity and all the powers reasonably necessary to achieve its purpose and perform and exercise its functions, powers, and duties in accordance with this Act.

It seems pretty clear to me that this settlement is an ingenious way of constructing an entity that respects both indigenous and Western conceptions of community. “It is wrong to say that they are identical,” Quine might say from his “logical point of view” (cf. p. 66), “they are merely river-kindred.” From the point of view of the law and the legal system Te Awa Tupua is a kind of trust or corporation, as Finlayson puts it, but from the point of view of the iwi that inhabit it, it is a living being of which they too are a part. “I am the river,” they say. “The river is me.” The settlement has managed to, literally, put a “human face” on this natural relationship for the purpose of administering it within the current system of rights, while at the same time “incorporating” (i.e., embodying) its “metaphysical elements”. Here David might invoke Derrida:

One of the definitions of what is called deconstruction would be the effort to take this limitless context into account, to pay the sharpest and broadest attention possible to context, and thus to an incessant movement of recontextualization. The phrase which for some has become a sort of slogan, in general so badly understood, of deconstruction (“there is nothing outside the text” [il n’y a pas de hors-texte]), means nothing else: there is nothing outside context. (Limited Inc, p. 136)

In a sense, yes, the idea that “there is nothing outside the river” deconstructs the Western metaphysics of presence (which would ignore even Heraclitus’s warnings about stepping into rivers twice). But, as Wittgenstein would point out, this deconstruction nonetheless “leaves everything as it is,” from the mountains to the sea. After all, Bob Dylan’s honesty notwithstanding, there is nothing outside the law either.

This is all a bit too fast and loose, I know.* I need to tighten up this analysis and bring its metaphysical elements into sharper focus. (There is much more to be done with both Quine and Derrida.) But I’m beginning to see the outline of an argument for robot rights, specifically, the rights of authorship for large language models like GPT-3. Fortunately, just as the Te Awa Tupua Act doesn’t give any rights to the merely physical process that is temporarily represented on the map as an object called the Whanganui River, this argument would never give rights to an algorithm or a database itself. It would always require an act of “incorporation”, a legal embodiment, and, yes, a “human face” to represent it. We already know how to speak of an author’s “body of work” and how to govern it. “I am the text,” the author says. “The text is me.” But the author dies, as Barthes pointed out, and a reader is born who can say the same. Maybe the future of text production is not so radical after all.

I hope you find this as invigorating as I do. I’ve decided to continue thinking about this by moving on to another law that David and Josh like to invoke, namely, the law governing personal delivery robots in Virginia. This analysis of the personhood of Te Awa Tupua provides a good model for the work that needs to be done to understand the precise sense in which those robots “have the rights of pedestrians” on the sidewalks of Norfolk. When I’ve worked that out, maybe, finally, I will be able to say precisely why I think robots can’t write.

Maybe two or three posts more. Then I’ll head off for a late summer vacation. And then, I promise, I will stop pretending to be a philosopher and legal scholar and return to the subject of how human beings can become better writers in the here and now.

______

*Update: After reading it, Josh expressed his disappointment with the scholarship behind this humble post on Twitter. If you want a sense of how Te Awa Tupua is discussed by scholars of environmental law, I can now recommend three good pieces.

Christopher Rodgers’ “A new approach to protecting ecosystems: The Te Awa Tupua (Whanganui River Claims Settlement) Act 2017” in the Environmental Law Review 19(4) seems to be the obligatory reference (Josh cites it in his book, Rights for Robots, on p. 127). It offers a good summary and analysis of the facts. Michelle Worthington and Peta Spender’s “Constructing legal personhood: corporate law’s legacy” in the Griffith Law Review 30(3) and Seth Epstein, Marianne Dahlén, Victoria Enkvist, and Elin Boyer’s “Liberalism and Rights of Nature: A Comparative Legal and Historical Perspective,” forthcoming in Law, Culture and the Humanities, both use the case in broader analyses of corporate and natural rights. All three are, as far as I can tell, a little more impressed with the legal novelty of the Te Awa Tupua settlement than I am, but I remain convinced that it is not the ontological innovation that would be needed to extend rights to machines in any radical way. That’s, of course, something I’ll need to return to.

Update (25/09/22): David Gunkel recently drew Visa Kurki’s A Theory of Legal Personhood to my attention, which presents a very similar argument about the Whanganui River in chapter 4.

Subject-of-a-Text

for Estrellita

The case for robot rights is often made by analogy to the case for animal rights and the case for the rights of natural entities like rivers and mountains. Josh Gellers is a strong proponent of these analogies, as is David Gunkel, and in my engagements with them on Twitter they often challenge me to apply whatever principles I want to use to exclude robots from moral consideration to these other entities which, they point out, have already been granted a variety of rights in many jurisdictions. Rights for non-humans are already here, they declare. Why not let robots into the company of rights-bearing subjects too?

It’s a good challenge and one that is worth facing. Just so we’re on the same page I should make clear that I believe that animal rights and the rights of nature are today assigned within reasonably coherent ethical and legal frameworks. I have looked at the cases they have suggested and, though they seem to understand these cases a little differently than I do, I basically agree with the way rights, as I understand them, have been assigned there. The coherence of these frameworks, however, cannot, as I see it, be extended to robots or other artificial entities. To put it in David’s terms, what we may think about animal rights and the rights of nature need not compel us to “think otherwise” about robot rights.

I’m going to take the two cases one at a time, animals in this post, and rivers and mountains without end in the next, in both cases using my now favorite artificial intelligence, GPT-3, to represent the analogous robot rights candidate. Since GPT-3 generates text, I am going to consider the somewhat narrow question of whether it can have “the moral right to be identified as an author”. If, for example, someone gets GPT-3 to generate a blogpost, the moral right of GPT-3 to proper attribution (if it had this right) would be violated if the text was either not attributed at all or attributed to someone else. This would be the case independent of any merely legal copyright violation, since a copyright can unproblematically be owned by people and entities other than the original author of a text.

How is the right of attribution similar to a right that an animal might have? The analogy I want to explore is suggested by the work of Tom Regan*, who, in The Case for Animal Rights, has argued that many animals are “subjects-of-a-life” and, as such, are also proper subjects of rights. If an animal is capable of feeling both distress and loneliness, for example, it has a right to be free from unnecessary harassment and forced isolation, both of which can be understood as forms of violence. That is, the rights of the animal are violated by causing it either physical or emotional harm. On this view, deliberately subjecting an animal to suffering or depriving it of the company of those it loves would be considered an act of cruelty.

As it happens, just as I was finishing the first draft of this post, Josh pointed me to a perfect case. Earlier this year, it seems, a final judgment was handed down in the Constitutional Court of Ecuador in the case of Estrellita, a chorongo monkey who was removed by authorities from her human home, where she had lived for 18 years with a woman she considered her mother, and taken to a zoo, where she died of stress after a few weeks. The judgment goes to great lengths to consider whether the animal’s rights (not merely those of the woman Estrellita was living with) were violated and even cites Regan’s seminal work on “animals as moral beings and subjects of life” (p. 26, n83). I have not yet looked closely at the case, which is heartbreaking on its face, but it seems like a very sound judgment. This was not merely a tragedy; it was an injustice.

In The Case for Animal Rights, Regan details what it means to be the subject-of-a-life:

[It] involves more than merely being alive and more than merely being conscious. … individuals are subjects-of-a-life if they have beliefs and desires; perception, memory, and a sense of the future, including their own future; an emotional life together with feelings of pleasure and pain; preference- and welfare-interests; the ability to initiate action in pursuit of their desires and goals; a psychophysical identity over time; and an individual welfare in the sense that their experiential life fares well or ill for them, logically independently of their utility for others and logically independently of their being the object of anyone else’s interests. Those who satisfy the subject-of-a-life criterion themselves have a distinctive kind of value – inherent value – and are not to be viewed or treated as mere receptacles. (p. 243, quoted from Wikipedia)

I think the moral rights of authors can be similarly rooted in the “subjecthood” of the author. I have previously compared what Hemingway called the “writer’s problem” and what Barthes called the “problematics of literature”. “A writer’s problem does not change,” said Hemingway. “He himself changes, but his problem remains the same. It is always how to write truly and, having found what is true, to project it in such a way that it becomes a part of the experience of the person who reads it.” Barthes put it this way: “Placed at the center of the problematics of literature, which cannot exist prior to it, writing is thus essentially the morality of form, the choice of that social area within which the writer elects to situate the Nature of his language.” In their very different ways, both situate the author of a text within an experience (one, you will note, that includes a social relation) and assert an explicitly moral claim that is grounded in the freedom the author enjoys.

The crucial point here, as Regan points out, is that it makes sense to ask “what is it like to be” an animal. I will add that there is also something it is like to be an author. An individual’s rights as an animal or author depend on this subjective experience, as a truth that can be projected (Hemingway) or as a nature that can be situated (Barthes). We can now ask whether this can ever be the case for a “generative pre-trained transformer”.

Obviously, I can’t answer that question definitively in a blogpost. But I will again cite (as I did at the start of the summer) Borges’ wonderful reminder that a book isn’t just a linguistic structure. In his “Notes on (toward) Bernard Shaw”, he starts with a list of fantastical notions from Raymond Lully’s “thinking machine” to Kurd Lasswitz’s “Total Library” (an idea he would famously explore himself) and then offers the following:

Lully’s machine, Mill’s fear and Lasswitz’s chaotic library can be the subject of jokes, but they exaggerate a propensity that is all too common: making metaphysics and the arts into a kind of play with combinations. Those who practice this game forget that a book is more than a verbal structure or series of verbal structures; it is the dialogue it establishes with its reader and the intonation it imposes upon his voice and the changing and durable images it leaves in his memory. This dialogue is infinite … A book is not an isolated being: it is a relationship, an axis of innumerable relationships. (Labyrinths, p. 213-14)

This gesture at the infinite relationships that constitute a book is a nice set-up to my next post on the rights of nature. But do notice that, like Estrellita, who had a right to remain with her adoptive mother, a book (or, rather, its author, of course) has the right not to be “isolated” from the problematics of the literature in which it has taken its place. In order to read it, we must respect the morality of its form. In any case, even if we grant, as I do, that animals can have rights because they are the subjects-of-a-life, we do not need to grant that robots can have rights unless they, too, can be the relevant subjects of them. In the case of GPT-3, we must ask whether GPT-3 can “project its experience”, can “situate the nature of its language”, or, indeed, whether it can “impose its voice” on the memory of the reader. Is it capable of an infinite dialogue? Can it be the subject-of-a-text? I think not.

Tom Regan’s case for animal rights cannot be made for robot rights. But I’m sure that neither will he be allowed to have the last word on animals* nor will I be allowed to have the last word on robots. The dialogue, after all, is infinite.

_____
*I should make clear that I’m by no means an animal rights scholar. What I offer here is something I’ve learned mainly from Wikipedia. On Twitter, Josh reminds me that he covers Regan’s work in chapter 3 of his book. I haven’t revisited it for this post.

Do Transformers Desire Electric Rights?

On Twitter, Steven Marlow has asked me to justify the exclusion of current AI systems from our system of rights without invoking the fact that they’re not human or that they don’t have feelings. Josh Gellers seconded the motion, adding that it’s going to be a hard nut to crack. This post is my attempt to crack it. Though I do personally believe that one reason not to give robots rights is that they don’t have inner lives like we do, I will leave this on the side and see if I can answer Steven’s question on his terms. I’ll explain why, being what they are, they can’t have rights.

Keep in mind that, when thinking about AI, I am for the most part interested in the question of whether transformer-based artificial text generators like GPT-3 can be considered “authors” in any meaningful sense. This intersects with the robot rights issue because we know how to recognize and respect (and violate!) the moral and legal rights of authors. If an AI can be an author then an AI can have such rights. To focus my inquiries, I normally consider the question, Can a language model assert “the moral right to be identified as the author” of a text? Under what circumstances would it legitimately be able to do so? And my provisional answer is, under no circumstances would it be able to assert such rights. That is, I would exclude GPT-3 (a currently available artificial text generator) from moral consideration and our system of rights. I take Steven to be asking me how I can justify this exclusion.

Remember that I’m not allowed to invoke the simple fact that GPT-3 is not human and has no inner life. We will take that as trivially true for the purpose of this argument. “Currently excluded,” asks Steven, “based on what non-human factors?”

I do, however, want to invoke the fact that, at the end of the day, GPT-3 is a machine. We exclude pocket calculators from moral consideration as a matter of course, and I have long argued that the rise of “machine learning” isn’t actually a philosophical gamechanger. Philosophically speaking, GPT-3 is more like a TI-81 than a T-800. In fact, I won’t even grant that the invention of microprocessors has raised philosophical questions (including ethical questions about how to treat them) that are any deeper than the invention of the abacus. All that has happened is that the mechanism and the interface have changed. Instead of being carried out by hand, the calculation is automated, and instead of setting up the system with beads we have to count ourselves (and interpret as 1s, 10s, 100s, etc.), we can provide the inputs and receive the output in symbols that we understand (but the machine, crucially, does not). GPT-3 itself is just a physical process that begins with an input and mechanically generates an output.

It shouldn’t have rights because it has no use for them. It neither wants nor needs rights. Giving it rights would not improve its existence. (Following Steven’s rules, I’ll resist the temptation to say that it has no “existence”, properly speaking, to improve. I’ll just say that even if it did, or in whatever sense it does, giving it a right would not contribute to it.) I simply don’t have any idea how to give rights to an entity that neither wants nor needs them. Tellingly, it isn’t demanding any either.

In a certain sense, GPT-3 is excluding itself from our system of rights. It is simply not the sort of thing (to honor Steven’s rules I’m not going to say it’s not a person) that can make use of rights in its functioning. Human beings, by contrast, function better given a certain set of rights. We are constantly trying to figure out which rights are best for our functioning (what some people call “human flourishing”) and we certainly don’t always get it right. Sometimes we have to wait for people who don’t have the rights they need to also want them. Then they ask for them and, after some struggle, we grant them. Whenever we do this right, society functions better. When we get this wrong, social life suffers.

Hey GPT, do you want to play chess?

But none of these considerations are relevant in the case of robots or language models. There is just the question of making them function better technically. To put it somewhat anthropomorphically, in addition to more power, better sensors and stronger servos, robots don’t need more privileges; they just need better instructions. That’s what improves them. Giving them freedom isn’t going to make them better machines.

A good way to think of this is that machines don’t distinguish between their physical environment and their moral environment. They are “free” to do whatever they can, not whatever they want, because they want for nothing. A chess bot can’t cheat because it doesn’t distinguish between the physics of the game and its rules. It can’t think of trying to move a chess piece in a way that violates the rules. (GPT-3, however, doesn’t know how to play chess, so it can’t cheat either.) For the bot, this space of freedom — to break rules — doesn’t exist. There is no difference between what is legal and what is possible. And that’s why robots can’t have rights. Fortunately, like I say, they don’t want them either.

How did I do?

Handwriting

Suppose I asked you for a picture of your hand. Your first impulse, I imagine, would be to take one with your phone and send it to me. But suppose I wanted the picture to prove that you can actually see your hand, not just that you know where it is.

Bear with me, I have a point coming soon.

Consider another example. Suppose you asked me how many fingers you were holding up and I responded by taking a picture of your hand and showing it to you, perhaps saying, “That many!” for good measure. You would not be impressed with my ability to see and count, right? Indeed, you would become suspicious that I was trying to hide my inability to do at least one of those things.

In order to prove that we can actually see, i.e., perceive, we have to be able to represent the content of our visual fields ourselves. We can’t just let a machine show it to others on our behalf.

That’s why painters can impress us so much with their work. They are able to represent what comes to their eye (and we can see the same or similar things from their point of view) using their bare hands. That picture of your hand will only demonstrate the quality of your vision if you draw it yourself.

Now, consider writing. If I ask you what an organization is, you’re not going to impress me simply by quoting some sentences in Chester Barnard’s The Functions of the Executive that happen to use the word “organization”. The real test lies in the words you come up with on your own.

You have to show me that you actually have ideas about organization. Not just that you recognize the word in someone else’s prose.

I’m saying this because, after all this writing about (and with) large language models, I need to rehearse some basic arguments for the value of being a good writer. It’s a little like the value of being able to do math and draw pictures. In some sense, sure, you don’t need those skills. You can let machines do these things. Still, in another sense, there is some value there.

But what is it? What is it that impresses us about being able to draw a picture of what you see right in front of you, or calculate (or even just estimate) the diagonal of a rectangle with given sides, or describe a current event in words? What does it show us about the person who is able to do it?

Why do we want (if that is what we want) students to learn these things in school? Why do we want them to demonstrate what they have learned (in art school, engineering school, or business school) in such media? Why won’t we just let them show us a photograph, fill out a spreadsheet, or quote from the news?

We want them to have something on their minds. We want them to have strong, healthy ones. And we think these skills are what makes them that way. As Oliver Senior puts it, “The better draughtsman has more ‘on his mind’ concerning his subject.” We want to encourage them to have such minds.

But there are limits to what we want them to prove. There was a time when we valorized neat handwriting as part of the skillset of a good student. I don’t know if this has been a requirement for working academics since, well, the Reformation, and it is certainly no longer something we care very much about. Most of us have atrocious handwriting and we’re not even embarrassed about it. No one sees it; we type everything we show to others.

But we do, I want to argue, still make our texts “by hand”. We make them ourselves from materials that are lying around in plain view. We put words together that are given to us by a common language.

When asked what we have on our minds, we don’t take a picture; we make one. And if we can’t, our interlocutor begins to suspect we don’t have much going on in there at all. Indeed, on most days, most of us don’t have much on our minds about most things.

Now, an artificial intelligence never has anything on its mind. It doesn’t have a mind to speak of. It always just responds to our “prompts” by converting them into inputs and generating outputs that it presents as “completions”. It famously just predicts the next word. Here’s an exchange I just had with GPT-3:

How many fingers am I holding up?

You are holding up four fingers.

I was not. I’m sure DALL-E would be happy to draw you its hand.

Neither has any idea what a hand is. They know neither what they’re doing nor what they’re talking about.

At the beginning of this summer project of mine, I quoted Ezra Pound. “We live in an age of science and abundance,” he said. “The care and reverence for books as such, proper to an age when no book was duplicated until someone took the pains to copy it out by hand, is obviously no longer suited to ‘the needs of society’, or to the conservation of learning.” Here’s what GPT-3 does with the same idea:

We live in an age of science and abundance. There are so many ways to get rich, but the challenge is that most people don’t know how. The good news is that there are people who do know how to get rich. And they’re sharing their secrets with the world.

Well, heaven help us! In any case, time is giving scope to Pound’s recommendation: “The weeder is supremely needed if the Garden of the Muses is to persist as a garden.”

The Automatic C

I agree that auto-complete for paragraphs sounds like a real possibility, and the striking thing here is how similar the above essay [see my last post, TB] looks to something a real student would write, or something that might be published in a real social science journal.

Andrew Gelman

We seem to be entering a new era in higher education. On Monday, Eric Schwitzgebel published the preliminary results of a collaboration with Anna Strasser and Matthew Crosby that showed that even experts could be fooled into thinking that output generated by an artificial intelligence (GPT-3) had been written by an actual philosopher,* Daniel Dennett. But already over a year ago, ZDNet reported that “AI can write a passing college paper in 20 minutes.” Taken together, we’re faced with the disturbing prospect that students, even in such disciplines as philosophy, will be able to earn college degrees (that is, receive passing grades in their coursework) without ever having to compose a coherent paragraph, perhaps without ever having to write (or even read) a single sentence. More ambitious students may be able to get quite good grades simply by editing the output of an AI on the basis of their reading and lectures.

I think we have to take this new situation seriously. Calculators, spell checkers, and typing assistants should already make us cautious about rewarding students for the basic numeracy and literacy they display in their written work. Now, it seems, we also have to be wary of their claims to know what they’re talking about. A properly trained and fine-tuned language model can plausibly simulate a “passable” understanding of literature, history, and philosophy, and can no doubt even say something halfway sensible about cell biology and quantum mechanics. (GPT-3 reads Wikipedia a lot, remember.) Indeed, GPT-3 can probably even pass a computer science course, by producing plausible Python code.

It must be noted that AI is so far only “passing” for a college student. It’s getting mainly Cs, even under my tutelage. And insofar as it is passing as a philosopher, it is one that provides brief answers to vague softball questions. (You might argue that that’s the very definition of “sophomoric”, i.e., philosophy at the level of a college sophomore.) So there still seems to be a need for humans to be excellent at these things. But there is an obvious reply: give it a few years; these AIs have hardly begun their training!

Even at this stage, however, I feel heavily implicated, even a bit guilty. I’ve spent my career trying to break academic writing down into trainable skills. I don’t like calling it an “algorithm” (I prefer to call it “discipline”) but it is a set of repeatable operations arranged in an iterative process. Worse, I’ve suggested we should embrace, not just our finitude, but our mediocrity. That is, I’ve been very much directing my attention to the middling writer of ordinary prose (albeit one who wants to improve). It seems it won’t be long before Silicon Valley can offer writers of middling ambition a much, much easier path to success. Am I about to be put out of a job by artificial intelligence? Am I about to become obsolete?

Maybe it’s an entirely natural development. Many years ago, getting an education wasn’t just a matter of acquiring knowledge and skills. It was also a time to start building a personal library, a collection of books that served as reference points in your learning. Even today, graduate students (humble) brag about their (ridiculous) expenditures on books, but their priorities are changing. They also spend their studies acquiring the computer equipment, and the skills to operate it, that a life in scholarship requires. “The scholar disappears,” said Heidegger already back in 1938. “He is succeeded by the research man who is engaged in research projects. … The research man no longer needs a library at home.” Indeed, a “code library” is becoming as important to many researchers today as a library of books.

Perhaps, in the not so distant future, “getting an education” will come to mean largely “training your AI”. Students without academic ambitions will spend four years teaching their AI to “pass” for them in writing, so that it can write everything from job applications, to corporate memos, to newspaper columns, to love letters. They will give it style and taste and a kind of “experience” to draw on. Graduate students will be gently shaping their dissertations as summaries of their corpus of reading, combined with a set of data they’ve carefully collected (but left the analysis to an AI?). “Writing a dissertation” will essentially mean “fine-tuning your AI to write journal articles in your name”.

I’m not sure how to feel about it. “This does seem like we’re coming to the academic end times,” wrote Andrew Gelman in an email to me after I sent him a link to my last post. I don’t like to sound apocalyptic but it does seem like a radical shift in the way “the prose of the world” will be maintained going forward. I guess, as a writing coach, I can take solace in the fact that photography hasn’t spelled the end of art classes. Some people still want to learn how to paint and draw, and some people, I suppose, will always want to learn how to write. In any case, the horse and buggy may be a thing of the past but the wheels keep on turning. Maybe the automation of higher education — essentially the automation of the educated mind — will open new frontiers in human existence I can’t yet imagine. The end of something is usually the beginning of something else. I’m paying attention.

_____
*Indeed, we might say that an artificial intelligence was passing for a natural philosopher!