As I often stress to my students, knowledge is not just a resource; it is a competence. To be knowledgeable is to be able to know things and to be enabled to do things you would not otherwise be able to do. (My emphasis.) Notice that this means you can easily test your knowledge. If you want to know what exactly you have learned, go to work on some materials and see what you can do with them. If you want to know whether you have learned something, notice how well you are able to perform the relevant tasks.
And that, of course, is also at the basis of our system of examination. After a teacher has tried to teach you something, they give you an assignment that you can only successfully complete if you have the knowledge they were trying to impart. They propose a performance of your competence. This provides an objective test of whether you possess the knowledge, and a subjective test of how well you have mastered it. Your teacher can tell you whether or not you succeeded; but only you really know how hard it was.
I was reminded of this when watching Joss Fong’s video for Vox (above) about AI and homework. As I said a couple of weeks ago, it’s great at capturing the current mood among students and teachers (and journalists). Mine is somewhat different.
For one thing, Fong, like many teachers, locates on-site, invigilated writing on the “ban AI” path, while I think of such exams as a way of allowing students (and teachers) to use AI in any way they like. By requiring students to regularly “sit” without AI at their side and show us what they can do, we can safely let them experiment with it both inside and outside the classroom. Rejecting rigorous exam conditions as “policing” and “hall monitoring” is like rejecting refereed competitions in physical education. An exam is just an opportunity to find out what you have learned, what you are good at. We need to normalize these conditions all the more urgently now, not use the occasion to get rid of them altogether.
What the teachers and students in the video have come to understand is that you simply can’t tell what you have learned if the performance of your competence has been assisted by AI. AI is capable of contributing both facts and logic to your writing whether or not you asked it to. You may think you’re just letting it prime your brainstorming, but it is really giving you ideas you don’t yet have the knowledge to understand; you may think it is just cleaning up your language a little, but it may be correcting (or distorting) your whole line of argument. Letting AI into your process renders your own contribution unclear. As I often say to teachers who think the trick is just to design the assignment to test the student’s ability to use AI, I simply don’t know how to distinguish the student’s contribution from the AI’s, nor how to grade the former in the presence of the latter. No one has yet been able to explain to me how I could.
I like what Fong says about “desirable difficulties” in learning. She cites research showing that apps that provide turn-by-turn navigation impoverish our spatial knowledge. Like the calculator analogy, which I also use, it is a simplification, but it is instructive. It turns out that coordinating a map with a territory activates our brains in a much deeper, richer way than simply turning right and left when a pleasant voice tells us to. Perhaps this is what Gilles Deleuze and Félix Guattari had in mind when they recommended that we make “a map not a tracing”: “The map does not reproduce the unconscious closed in upon itself; it constructs the unconscious. It fosters connections between fields” (ATP, p. 13). Consider how much harder it is to draw even a two-dimensional image by looking at it than by using tracing paper. Now consider the difference between drawing a three-dimensional object and tracing over a photograph of it. One approach teaches you how to see; it even makes you more observant at an “unconscious” level; the other just requires a steady hand.
And, by relating this to writing, we can (literally) flesh out the analogy. Learning to write is not just “building a mental map of the world”; it is articulating what Merleau-Ponty and Foucault (borrowing from Hegel) called “the prose of the world”. The “connections between fields” that Deleuze and Guattari are talking about are ultimately those of the body in the world, trying to find its way around. The grammar or “usage” of this world is represented in the grammar of our sentences and paragraphs. It is no accident that Hamlet’s melancholy, his loss of interest in “all the uses of this world,” was tied to his desire that his “too solid flesh would melt”. Meaning is use. Our prose works reality at the joints, coordinates our organs with our environment. It (literally) gives us meaning.
Literature is “equipment for living”, said Kenneth Burke. While higher education, and writing instruction specifically, should teach students how to articulate themselves, AI offers to articulate their minds and bodies for them. Ultimately, I fear, the effect will be to dismember them, to carve them (not their experience) up at the joints, to disintegrate the prose of their world. A piece of equipment, like a map, only helps if you know how to use it, how to read it. Getting students to write their own prose is no different from telling them to draw a map to show us (and them) they know their way around. We do them a disservice by not requiring them to show us how profitably they can make use of their world.