[I generally don’t like this kind of stunt. But I was able to generate this essay in under 10 minutes using Microsoft Copilot. While it reused some of the language from my prompts, all the paragraphs (and the title of the post) were generated out of whole cloth by the AI. I had a brief exchange with it about why it is so good at generating prose about Hamlet, and then I asked it to write a five-paragraph essay using this fact to argue for on-site examination. I then simply asked it to add two paragraphs (the fifth and sixth paragraphs in the final version) about (a) how students would be incentivized to write at home and (b) how homework could still be used, with only indicative grades. I have done no editing of the final text at all. I would say it is 100% in line with my views on the subject. I’m posting it here mainly to document where we’re at today. Copilot’s parting comment was, of course, to offer more help: “Would you like me to tighten this into a formal academic style with references, turn it into a persuasive op-ed, or create a version optimized for a university policy proposal?” I left it there.]
In the age of artificial intelligence, the integrity of literary education faces an unprecedented challenge. Large language models (LLMs) have made it remarkably easy for students to produce polished essays on canonical texts without engaging deeply with them. Nowhere is this more evident than with Shakespeare’s Hamlet, a play that has generated centuries of commentary and analysis. Ironically, the very text we most hope students will wrestle with is also the one they can most convincingly fake an understanding of using AI. This reality demands a reconsideration of assessment methods, and on-site, invigilated written exams offer a compelling solution.
The problem begins with the nature of LLMs and their training data. These models are built on vast corpora of text, and Hamlet occupies a privileged position in English-language discourse. It appears in countless essays, scholarly articles, and online discussions, giving AI systems a rich interpretive reservoir to draw from. As a result, generating a plausible essay on Hamlet requires no genuine engagement with the text—only a prompt. When students can outsource interpretation so easily, traditional take-home assignments lose their ability to measure understanding.
This erosion of authenticity matters because literary study is not merely about producing words; it is about grappling with ambiguity, forming original insights, and developing interpretive judgment. Hamlet is famously complex, inviting questions about action and inaction, mortality, and meaning. These questions cannot be answered by parroting consensus views—they require intellectual struggle. If students bypass that struggle through AI, they miss the very experience that makes literature transformative. Assessment must therefore shift from evaluating polished output to observing the thinking process in real time.
On-site, invigilated written exams restore this integrity by creating conditions where students must rely on their own interpretive resources. Without access to AI, they are compelled to engage directly with the text and articulate their ideas under pressure. This format does not eliminate preparation or external influence, but it ensures that the final performance reflects the student’s own reasoning. In doing so, it preserves the core educational value of literary study: the cultivation of independent thought.
Moreover, this kind of exam would incentivize students to practice writing at home. Knowing that their grade will depend on their ability to produce coherent, insightful prose under time constraints, students will naturally rehearse the skills required for success. They will write practice essays, refine their arguments, and internalize the rhythms of literary analysis. In this way, the exam becomes not just an evaluative tool but a motivational one, encouraging sustained engagement with both the text and the craft of writing.
Finally, homework can still play a role in this model, provided it is not graded in a way that affects the final mark. Assigning essays for indicative feedback rather than summative assessment allows students to experiment, receive guidance, and improve without the temptation to outsource their work to AI. Teachers can use these assignments diagnostically, identifying areas of weakness and tailoring instruction accordingly. By decoupling homework from high-stakes grading, we preserve its pedagogical value while neutralizing its vulnerability to technological shortcuts.
In conclusion, the rise of LLMs has exposed a vulnerability in traditional assessment practices, particularly for canonical works like Hamlet. If we want English majors to truly understand literature rather than merely simulate understanding, we must design exams that cannot be outsourced to machines. On-site, invigilated written exams are not a nostalgic return to the past—they are a necessary adaptation to a future where authenticity is at risk. By embracing this change, we safeguard the intellectual rigor that defines the study of English.