In the future, it will become increasingly necessary to decide what sorts of writing students should be able to “generate” on their own. Those who argue that AI is a “powerful learning tool” will still need to tell us what their students are expected to actually learn. And this means that they will have to describe tests that their students are expected to pass under a set of well-defined constraints. Just looking at their prompts, and the conditions under which their students are expected to respond to them, will tell us what sort of “knowledge and skills” these teachers have been trying to inculcate.
While I am personally skeptical, it is possible that interacting with ChatGPT can teach students “powerful” lessons about how to become better writers. (“A good notation,” said Bertrand Russell in his introduction to Wittgenstein’s Tractatus, “[is] almost like a live teacher … and a perfect notation would be a substitute for thought.” It’s possible that the same may be said of a good language model.) If so, the proof will be in the pudding that students are capable of making without its assistance. (A student who has employed a “substitute for thought” throughout their university education will presumably have some difficulty thinking on their own. This is a good way to get at the issue I’m addressing.) If you want to convince me that language models should be integrated into the higher education curriculum, you should describe the minds of the students who are shaped by their use. And the only way to do this is to prescribe some mental activities that they can demonstrate they are capable of under controlled exam conditions.
To this end, I want to propose a course structure that covers 16 weeks and 8 essays. Let’s give you 3 hours of instruction per week and assume that the students will do at least 6 hours of work outside of class per week. Every four weeks, I will need to steal 3 hours from this budget for an in-class essay. The rest of the syllabus is up to you. (I’ll of course lay out my solution to this problem in some follow-up posts.)
My suggestion is that the students write a 1000-word essay every two weeks, alternating between a take-home prompt with one week in which to do the work and an in-class prompt with three hours in which to do it. If we imagine classes are held on Mondays, students can be given the take-home prompts at the end of class and submit their essays on Friday (there’s no need to ruin their weekend). The in-class essay can likewise be held during an extra class session on Friday. Both the take-home and the in-class essays will be read and graded by the teacher, but only the grades on the in-class essays will count. That is, the students will receive qualitative and quantitative feedback on how they are doing on all 8 essays, but only 4 of them will actually be considered a test of what they have learned.
I will imagine you have access to some sort of “computer room” for the exam. That is, students will be given standard word-processing software but — and this is of course crucial — no access to the Internet or to generative AI of any kind. You can decide whether to make a set of sources or other materials available to them, including dictionaries, handbooks, and casebooks. Remember, you are trying to show me what they are capable of, and you may indeed be trying to show me how good they are at drawing on their course readings in their writing. (What I am not interested in, however, is how good they are at prompting ChatGPT to write their essays for them!)
It seems natural to set up the take-home essays as tasks similar to the in-class essays. That is, you want the take-home essay to give them a chance to learn, in part, how to interpret the prompt and then find the sources they need to respond intelligently. You also want to make sure that they are learning how to structure and flesh out an argument spanning 500-1000 words in three hours. The good student would impose a discipline on their process working at home that would train them to perform well under exam conditions. For example, a while back I suggested that a good first assignment in a course on Hamlet would be simply to ask, “What happens in Act I?” while a good second assignment might be “Why is Claudius still alive at the end of Act II?” In both cases, the text of the play might be the only source the student is required to cite; but, depending on how one has approached these types of question in class, the student might be encouraged to tackle complex issues of interpretation as these come up in the long history of critical work on the play. Like I say, it’s up to you how to design both the course (which doesn’t have to be about Hamlet) and the assignments. What interests me is how you would demonstrate what the student has learned.
It’s possible to make the assignments test only course content from the previous two weeks, or to let them be more cumulative. I imagine that the student’s writing skills will be evaluated to an increasingly high standard as the course progresses. But their understanding of the themes of the course would no doubt also be expected to gain in sophistication. By week ten, for example, I would have my students answer the question, “Did Hamlet succeed?” And I would have exposed them to a variety of answers to this question, which their essays would demonstrate their knowledge or ignorance of, even if they did not mention them all. I am confident that every two weeks, looking at the essays the students submit, I would get a good sense of what they are learning, i.e., how I am doing as a teacher. If I have integrated AI into my teaching, I would likewise be getting a good sense of whether it is helping or hindering their learning.
“Surely, we can say what a math student at any level should be able to do without a calculator, or what a history student should be capable of without the internet,” I said in an encouragingly popular tweet yesterday. “AI generalizes this problem. In each field, we must decide what students should be able to do on their own.” That, in essence, is my challenge to those who believe that integrating generative AI into the higher education curriculum affords powerful new learning opportunities (beyond the obvious skill of learning how to use AI). It is true that in the future we will all let AI write speeches and emails that make erudite reference to Hamlet’s words in order to impress our audiences and engage their imaginations. But should an undergraduate degree in English literature teach you only how to get your AI to make you appear literate? Or do we want our students to actually internalize the language of Shakespeare?