I agree that auto-complete for paragraphs sounds like a real possibility, and the striking thing here is how similar the above essay [see my last post, TB] looks to something like what a real student would write, or something that might be published in a real social science journal.
Andrew Gelman
We seem to be entering a new era in higher education. On Monday, Eric Schwitzgebel published the preliminary results of a collaboration with Anna Strasser and Matthew Crosby showing that even experts could be fooled into thinking that output generated by an artificial intelligence (GPT-3) had been written by an actual philosopher,* Daniel Dennett. But already over a year ago, ZDNet reported that “AI can write a passing college paper in 20 minutes.” Taken together, these results raise the disturbing prospect that students, even in such disciplines as philosophy, will be able to earn college degrees (that is, receive passing grades in their coursework) without ever having to compose a coherent paragraph, perhaps without ever having to write (or even read) a single sentence. More ambitious students may be able to get quite good grades simply by editing the output of an AI on the basis of their reading and lectures.
I think we have to take this new situation seriously. Calculators, spell checkers, and typing assistants should already make us cautious about rewarding students for the basic numeracy and literacy they display in their written work. Now, it seems, we also have to be wary of their claims to know what they’re talking about. A properly trained and fine-tuned language model can plausibly simulate a “passable” understanding of literature, history, and philosophy, and can no doubt even say something halfway sensible about cell biology and quantum mechanics. (GPT-3 reads Wikipedia a lot, remember.) Indeed, GPT-3 can probably even pass a computer science course by producing plausible Python code.
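To see how low the bar is, consider what it actually takes to have a GPT-3-era model draft an essay. Here is a minimal sketch using OpenAI’s legacy completions API; the model name, prompt, and parameters are my own illustrative assumptions, not details from the studies mentioned above.

```python
# A minimal sketch of asking a GPT-3-era model for an essay.
# Assumptions (all illustrative): the legacy openai Python package
# and an API key in the OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Write a five-paragraph essay, in the voice of an undergraduate, "
    "answering the question: Does Dennett's account of consciousness "
    "leave room for free will?"
)

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model of that era
    prompt=prompt,
    max_tokens=800,   # roughly a short essay
    temperature=0.7,  # some variety, but mostly "plausible" prose
)

print(response.choices[0].text.strip())  # the "student's" essay
```

One sentence of prompt in, several hundred tokens of passable prose out; the “work” of the essay has been reduced to a question.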
It must be noted that AI is so far only “passing” for a college student. It’s getting mainly Cs, even under my tutelage. And insofar as it is passing as a philosopher, it is doing so by providing brief answers to vague softball questions. (You might argue that that’s the very definition of “sophomoric”, i.e., philosophy at the level of a college sophomore.) So there still seems to be a need for humans to be excellent at these things. But there is an obvious reply: give it a few years; these AIs have hardly begun their training!
Even at this stage, however, I feel heavily implicated, even a bit guilty. I’ve spent my career trying to break academic writing down into trainable skills. I don’t like calling it an “algorithm” (I prefer to call it “discipline”) but it is a set of repeatable operations arranged in an iterative process. Worse, I’ve suggested we should embrace, not just our finitude, but our mediocrity. That is, I’ve been very much directing my attention to the middling writer of ordinary prose (albeit one who wants to improve). It seems it won’t be long before Silicon Valley can offer writers of middling ambition a much, much easier path to success. Am I about to be put out of a job by artificial intelligence? Am I about to become obsolete?
Maybe it’s an entirely natural development. Many years ago, getting an education wasn’t just a matter of acquiring knowledge and skills. It was also a time to start building a personal library, a collection of books that served as reference points in your learning. Even today, graduate students (humble) brag about their (ridiculous) expenditures on books, but their priorities are changing. They now also spend their student years acquiring the computer equipment, and the skills to operate it, that a life in scholarship requires. “The scholar disappears,” Heidegger said as early as 1938. “He is succeeded by the research man who is engaged in research projects. … The research man no longer needs a library at home.” Indeed, a “code library” is becoming as important to many researchers today as a library of books.
Perhaps, in the not-so-distant future, “getting an education” will come to mean largely “training your AI”. Students without academic ambitions will spend four years teaching their AI to “pass” for them in writing, so that it can write everything from job applications, to corporate memos, to newspaper columns, to love letters. They will give it style and taste and a kind of “experience” to draw on. Graduate students will be gently shaping their dissertations as summaries of their corpus of reading, combined with a set of data they’ve carefully collected (but whose analysis they’ve left to an AI?). “Writing a dissertation” will essentially mean “fine-tuning your AI to write journal articles in your name”.
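Mechanically, “fine-tuning your AI to write in your name” is already a fairly mundane procedure. A minimal sketch, assuming a plain-text file of a student’s collected essays (my_writing.txt is a hypothetical name) and using GPT-2 as a stand-in for whatever model the future brings:

```python
# A minimal sketch of fine-tuning a small language model on one's own prose.
# Assumptions (all illustrative): the transformers and datasets libraries,
# and a file my_writing.txt containing the student's collected essays.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load the personal corpus and tokenize it into model-sized pieces.
dataset = load_dataset("text", data_files={"train": "my_writing.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard causal language modeling: the model learns to continue "your" prose.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-style-model", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # afterwards, model.generate() writes "in your name"
```

Everything in the sketch is off the shelf; the only personal contribution is the corpus.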
I’m not sure how to feel about it. “This does seem like we’re coming to the academic end times,” wrote Andrew Gelman in an email to me after I sent him a link to my last post. I don’t like to sound apocalyptic, but it does seem like a radical shift in the way “the prose of the world” will be maintained going forward. I guess, as a writing coach, I can take solace in the fact that photography hasn’t spelled the end of art classes. Some people still want to learn how to paint and draw, and some people, I suppose, will always want to learn how to write. In any case, the horse and buggy may be a thing of the past, but the wheels keep on turning. Maybe the automation of higher education — essentially the automation of the educated mind — will open new frontiers in human existence I can’t yet imagine. The end of something is usually the beginning of something else. I’m paying attention.
_____
*Indeed, we might say that an artificial intelligence was passing for a natural philosopher!
Hi Thomas
Many thanks for this thought-provoking post.
I’ve a long-standing interest in Leroi-Gourhan’s theory (Johnson, 2011) of the externalization of human capacities, which expands what the species can pass on beyond its genetic traits.
According to this theory, the human species achieves new levels of development using human-made tools and signs that subsequently become a part of the species’ “external genetic pool”.
But here it seems like we’re hitting a wall: what about a tool like AI that only produces an (arguably sophisticated) remix of past ideas? Wouldn’t it introduce a kind of inbreeding? Will AI condemn us to an eternity of self-reference?
—
Johnson, C. (2011). Leroi-Gourhan and the Limits of the Human. French Studies, 65(4), 471–487. https://doi.org/10.1093/FS/KNR134
Thanks for this thought-provoking post! I wonder to what extent students would be able to learn to train an AI to have a particular style if they skip practice in an organic writing process. How much experience with forming sentences and drafting to clarify an idea do they need in order to prompt the AI well? Also, as models get better, will they even need to train them, or will they just select from style options?