Monthly Archives: November 2025

Why English Majors Should Take On-Site, Invigilated Written Exams

[I generally don’t like this kind of stunt. But I was able to generate this essay in under 10 minutes using Microsoft Copilot. While it reused some of the language from my prompts, all the paragraphs (and the title of the post) were generated out of whole cloth by the AI. I had a brief exchange with it about why it is so good at generating prose about Hamlet, and then I asked it to write a five-paragraph essay using this fact to argue for on-site examination. I then simply asked it to add two paragraphs (the fifth and sixth paragraphs in the final version) about how (a) students would be incentivized to write at home and (b) how homework could still be used, giving only indicative grades. I have done no editing of the final text at all. I would say it is 100% in line with my views on the subject. I’m posting it here mainly to document where we’re at today. Copilot’s parting comment was, of course, to offer more help: “Would you like me to tighten this into a formal academic style with references, turn it into a persuasive op-ed, or create a version optimized for a university policy proposal?” I left it there.]

In the age of artificial intelligence, the integrity of literary education faces an unprecedented challenge. Large language models (LLMs) have made it remarkably easy for students to produce polished essays on canonical texts without engaging deeply with them. Nowhere is this more evident than with Shakespeare’s Hamlet, a play that has generated centuries of commentary and analysis. Ironically, the very text we most hope students will wrestle with is also the one they can most convincingly fake an understanding of using AI. This reality demands a reconsideration of assessment methods, and on-site, invigilated written exams offer a compelling solution.

The problem begins with the nature of LLMs and their training data. These models are built on vast corpora of text, and Hamlet occupies a privileged position in English-language discourse. It appears in countless essays, scholarly articles, and online discussions, giving AI systems a rich interpretive reservoir to draw from. As a result, generating a plausible essay on Hamlet requires no genuine engagement with the text—only a prompt. When students can outsource interpretation so easily, traditional take-home assignments lose their ability to measure understanding.

This erosion of authenticity matters because literary study is not merely about producing words; it is about grappling with ambiguity, forming original insights, and developing interpretive judgment. Hamlet is famously complex, inviting questions about action and inaction, mortality, and meaning. These questions cannot be answered by parroting consensus views—they require intellectual struggle. If students bypass that struggle through AI, they miss the very experience that makes literature transformative. Assessment must therefore shift from evaluating polished output to observing the thinking process in real time.

On-site, invigilated written exams restore this integrity by creating conditions where students must rely on their own interpretive resources. Without access to AI, they are compelled to engage directly with the text and articulate their ideas under pressure. This format does not eliminate preparation or external influence, but it ensures that the final performance reflects the student’s own reasoning. In doing so, it preserves the core educational value of literary study: the cultivation of independent thought.

Moreover, this kind of exam would incentivize students to practice writing at home. Knowing that their grade will depend on their ability to produce coherent, insightful prose under time constraints, students will naturally rehearse the skills required for success. They will write practice essays, refine their arguments, and internalize the rhythms of literary analysis. In this way, the exam becomes not just an evaluative tool but a motivational one, encouraging sustained engagement with both the text and the craft of writing.

Finally, homework can still play a role in this model, provided it is not graded in a way that affects the final mark. Assigning essays for indicative feedback rather than summative assessment allows students to experiment, receive guidance, and improve without the temptation to outsource their work to AI. Teachers can use these assignments diagnostically, identifying areas of weakness and tailoring instruction accordingly. By decoupling homework from high-stakes grading, we preserve its pedagogical value while neutralizing its vulnerability to technological shortcuts.

In conclusion, the rise of LLMs has exposed a vulnerability in traditional assessment practices, particularly for canonical works like Hamlet. If we want English majors to truly understand literature rather than merely simulate understanding, we must design exams that cannot be outsourced to machines. On-site, invigilated written exams are not a nostalgic return to the past—they are a necessary adaptation to a future where authenticity is at risk. By embracing this change, we safeguard the intellectual rigor that defines the study of English.

Calculators, Screwdrivers, and Other Tools

On Saturday, I wrote about one objection to my “Prompts and Conditions” post that came up during my faculty development course on “Teaching Writing in the Age of AI.” This post is about another one, which begins with something I say at the end of that post:

Surely, we can say what a math student at any level should be able to do without a calculator, or what a history student should be capable of without the internet. AI generalizes this problem. In each field, we must decide what students should be able to do on their own.

A participant reminded me that the calculator analogy has long been discredited as a way of understanding AI, especially if our aim is to limit reliance on it. After all, everyone relies on calculators these days, and very few exams ban their use. This hasn’t caused any kind of catastrophe for education. We have simply changed the way we teach and learn math.

I think this claim is worth looking into in some detail. After all, it is my impression that many high-stakes exams — like the American SAT — have very specific rules for calculators that limit the functions that are allowed. I’m pretty sure this has been the case for as long as calculators have been available; their use is governed by policy. But the general point is of course true: math instruction hasn’t banned calculators altogether, nor have we kept teaching the same “old” things. In any case, talking about “what a math student should be able to do without a calculator,” the participant suggested, was like asking what a carpenter should be able to do without a screwdriver. The whole point of learning the craft is learning how to use the tools.

I immediately liked this way of putting it because when he mentioned carpentry I thought he was going to talk about power tools, but the problem, of course, arises already at the level of saws and hammers and screwdrivers. We may as well start there. Would I say, “Surely, we can say what a carpenter’s apprentice should be capable of without a screwdriver”? As it happens, I would answer yes. But I must first emphasize that I have not said that students and apprentices should be examined only without their tools. I have said they should also be examined without their tools and that, in any program of instruction, there must be some set of skills that can be examined this way. It’s not either/or, but both, separately.

In the case of the carpenter’s apprentice, I suggested that someone who is able to use the standard toolkit will also be (and should also be) able to talk intelligently about how they would go about a particular task, without holding any of the tools in their hands. Also, an apprentice woodworker can be sent into the woodshed to pick out some boards that would be ideally suited to making a particular piece of furniture. This requires no tools, only a good grasp of the materials themselves (a “feel” for them, if you will). It might also be worth seeing if they can “eyeball” rough dimensions, i.e., whether they have realistic intuitions about size and space.

(I am sometimes told horror stories by teachers of quantitative methods about students who do not immediately recognize that a calculation they have let a spreadsheet carry out is off by three orders of magnitude and even in the wrong direction: positive when it should be negative, negative when it should be positive. It is worth having students estimate calculations, without a calculator, simply to make sure they have a realistic sense of the thing they are calculating.)

I think it is true that we must accept AI into writing instruction just as we have accepted calculators into math instruction. We can’t bury our heads in the sand (or, perhaps more precisely, require our students to tie their hands behind their backs). But, just as we can require an apprentice to tell us how they plan to go about a project before they pick up a tool and show us what they’re capable of, we can reasonably require university students to tell us how they would use AI to solve a writing problem. But we can go further.

My participant suggested that I was ignoring what we know about “embodied cognition” (and we might add “extended mind”). But I am absolutely on board with those sorts of views. We exist in an environment of tools and machines, which not only help us to get around, but shape our very being. I will insist, however, that our environment also includes other people and the language we use to communicate with them. Our words, as Heidegger pointed out, are part of the “equipmental contexture” of our existence, our being-with-others. Teaching students how to write good prose by themselves is very much a way of helping them embody their knowledge.

I want to stress that my point is that AI “generalizes” the issue. (Indeed, Silicon Valley keeps promising us something they call AGI: “artificial general intelligence.”) With minimal prompting, AI is increasingly able to simulate almost any academic competence at least “passably” (deserving a C, let’s say, or what we call a 7 in Denmark). If universities are to maintain the integrity of their assessments, we need to find a way to make sure that the actual bodies of the students are capable of something in particular, something that reflects three to five years of study. And that means we have to identify some things their bodies can do that we can actually test.

On Ruining the Weekend

I’m running a faculty development course this month called “Teaching Writing in the Age of Generative AI” and it has led to some interesting discussions. I asked the participants to read and reflect on my post, “Prompts and Conditions,” which I wrote a couple of years ago, when the challenge that AI might pose for higher education was just coming into view. I think the basic idea of the post was agreeable to most of the participants, but a number of them had some interesting reactions to what I would call the “rhetoric” of the post. I want to address two of them in particular in a post each.

One of the participants noted a parenthesis at the end of what I took to be a very practical remark about setting deadlines. Here’s what I had written:

If we imagine classes are held on Mondays, students can be given the take-home prompts at the end of the class and submit their essays on Friday (there’s no need to ruin their weekend).

“Doesn’t this suggest that our students don’t want to learn? Why would we presume that studying on the weekend ruins it?” asked the participant. He recalled that when he was a student he was happy to spend his evenings and weekends studying and, indeed, that he felt that he was expected to do so by his teachers. Is this something that we have suddenly abandoned? (And, we might add, does AI force us to do so?)

Now, I must say that I had not expected anyone to take this remark as seriously as that. The idea of “ruining” a weekend by doing school work was only intended as a lighthearted gesture at the priorities of young people. But perhaps playing to these priorities is ill-advised; and perhaps it comes off as condescending, even to the students who have them. It’s always worth thinking about the rhetorical effects of our pedagogical strategies.

In defending my choice of words at the time, I did point out that he was taking a rather hard line against another kind of concern that teachers often express for their students: not everyone has the luxury of devoting their entire lives to school while they are attending university. Many have jobs on the side; some even have families to tend to. “Ruining the weekend” may be more existential for some students than merely skipping a night on the town. This seemed to elicit some nods in the room, including from my critic.

In any case, it’s important to remember that deadlines are always somewhat arbitrary and are likely to occasion both procrastination at first and consternation at last among some students. So I spend a lot of time teaching (and coaching) students (and faculty) to plan their work in orderly, half-hour “moments” of composition so that they can comfortably meet their deadlines without having to miraculate a text at the eleventh hour. For the same reason, whenever it is up to me, I like to place that eleventh hour before noon on a Friday, rather than midnight on a Sunday. It’s just a good way to signal that you may as well get the work done during the working week, as part of your regular day-to-day program of study. It keeps the task of doing an assignment in proportion.

Like I say, I don’t want to dismiss the concern about the rhetorical force of talk about “ruining” our students’ weekends by asking them to study. But perhaps the real issue is precisely whether what they do during their “free” time is chosen or assigned. A student who wanted to read a book you have suggested, or do some writing of their own, can still have that hope “ruined” by poor planning and ostensibly “generous” deadlines.

I don’t claim to have a definitive take on how to talk about school-life balance with students, nor on how important we should presume (whether in our thoughts or in our speech) learning is to them. As an occasion to give it some thought, my participant’s remark is well taken. I’m happy to hear more thoughts in the comments below.