
Higher Learning, Basic Skills

“If its object were scientific and philosophical discovery, I do not see why a University should have students.” (John Henry Newman, 1852)

In the coming years, I expect, the purpose of higher education will be a recurring topic of discussion among academics. What are students supposed to learn at university? How are they to be improved? What do we want them to become better at? The answer will have to take seriously the fact that we have the authority only to impart and certify knowledge, to arm them only with ideas and “the authority of right reason,” as Scotus Erigena put it. We may only teach and examine our students. Why should students submit to this authority? What do they gain from it? What does society gain from demanding it of them?

An alternative, after all, is fast coming into view. Students are learning to use artificial intelligence to “support their learning,” which, unfortunately, is often indistinguishable (sometimes to themselves and sometimes, it seems, even to their teachers) from avoiding the problem of learning altogether. I’ve pointed out before that when Bertrand Russell suggested that a “system of notation” could almost serve as a “live teacher,” and that “a perfect notation would be a substitute for thought,” he was probably not imagining the “stochastic parroting” of large language models. But they certainly do seem capable of substituting for a great deal of the thinking we’ve traditionally demanded of our students. We seem to be on the verge of automating the very problem of knowledge. Our students have been given an epistemic workaround.

Now, I have served for about two decades as a writing coach at a major European business school and during that time I have developed what I think is a healthy, working epistemology. I don’t claim to have solved all the problems that philosophers have raised with regard to the nature of knowledge, but I have a confident grasp of what we mean, or should mean, when we say that we are “knowledgeable” people and are helping our students to become likewise “able”. As I never tire of telling them, to be knowledge-able is both to be able-to-know things and en-abled by that knowledge to do things they would not otherwise be capable of doing. Knowledge is a competence that is manifest in a performance. In what, then, does that performance consist? What should university students become increasingly better at doing? What does it mean to be able to “know something” for, let us say, “academic purposes”?

First, if you are knowledgeable about something you are able to make up your mind about it. Invoking the long tradition of Western epistemology, I usually characterize this competence as the ability to form a justified, true belief. Given a series of experiences or a set of materials, a knowledgeable person is able to reach a determination of what is going on and what it means. They can convert what William James (and, later, Thomas Kuhn) called “the bloomin’, buzzin’ confusion of experience” into sterner stuff: propositional attitudes about stable facts, and theories that arrange concepts into orderly frameworks. Obviously, even very knowledgeable people can be wrong about particular facts, and this is why I have increasingly come to emphasize that knowledge isn’t so much being right about things as being able to change your mind about them in an orderly fashion. On the path “from stimulus to science,” as Quine puts it, “we are faced with the problem of error” at every step. Knowledgeable people face this problem squarely, let’s say.

But forming a belief, no matter how true or justified, is not sufficient to be considered knowledgeable in an academic setting. You must not only know what you’re talking about, you must be able to talk about what you know. To know something at university is to be able to hold your own in a discussion about it with other knowledgeable people. For students, this should be understood as an ability to discuss topics raised in class with their fellow students. It is sometimes argued that “academic writing” lacks “relevance” for students because it doesn’t relate to their “lived experience” or “real life”. Against this, I have long tried to argue that the “academic situation” offers an entirely meaningful context for rhetorical engagement. The classroom is a microcosm of the discipline and the students are learning to address their peers and respect their criticism. In that sense, students are apprentice scholars.

Nor is that all there is to knowing things “for academic purposes”. Knowledgeable people are not just able to make up their minds and speak their minds; they are able to write it down. The test that I recommend is whether you are able to compose a coherent prose paragraph about something you know in under half an hour. More specifically, are you in an epistemic position to decide at the end of one day to write a paragraph at the start of the next about something you knew was true last week? This is not just a good test of whether you actually know the thing you think is true; it is also a good skill to have in itself. It’s nice to know that you can write half a page about anything you know in half an hour. Doing it regularly should give you a certain confidence, not least that confidence which comes from regularly experiencing that you do, in fact, know things to be true.

It is this competence that I hope the university will continue to represent and this confidence (which is not without humility) that I hope we will continue to instill in our students. To that end, I never miss an opportunity to plead for a shift of focus in our assessments, from homework assignments back to on-site, off-line, invigilated written examination. A university student who has taken a course (or is in the middle of a term) should be able to answer a relevant 20-word question given 4 hours and 1000 words (with appropriate accommodations for disability) in a way that can be easily graded by their teacher. The ability to write, say, five coherent prose paragraphs is a good proxy for whether the student is able to make up their mind on the subject they are studying, speak their mind, and (of course) write it down. I would certainly argue that the plain in-ability (which the exam conditions make, precisely, plain) to write such an essay should call into question their claim to be a “knowledgeable” person on the subject in question. Higher learning is manifest in such basic skills.

Why English Majors Should Take On-Site, Invigilated Written Exams

[I generally don’t like this kind of stunt. But I was able to generate this essay in under 10 minutes using Microsoft Copilot. While it reused some of the language from my prompts, all the paragraphs (and the title of the post) were generated out of whole cloth by the AI. I had a brief exchange with it about why it is so good at generating prose about Hamlet, and then I asked it to write a five-paragraph essay using this fact to argue for on-site examination. I then simply asked it to add two paragraphs (the fifth and sixth paragraphs in the final version) about how (a) students would be incentivized to write at home and (b) how homework could still be used, giving only indicative grades. I have done no editing of the final text at all. I would say it is 100% in line with my views on the subject. I’m posting it here mainly to document where we’re at today. Copilot’s parting comment was, of course, to offer more help: “Would you like me to tighten this into a formal academic style with references, turn it into a persuasive op-ed, or create a version optimized for a university policy proposal?” I left it there.]

In the age of artificial intelligence, the integrity of literary education faces an unprecedented challenge. Large language models (LLMs) have made it remarkably easy for students to produce polished essays on canonical texts without engaging deeply with them. Nowhere is this more evident than with Shakespeare’s Hamlet, a play that has generated centuries of commentary and analysis. Ironically, the very text we most hope students will wrestle with is also the one they can most convincingly fake an understanding of using AI. This reality demands a reconsideration of assessment methods, and on-site, invigilated written exams offer a compelling solution.

The problem begins with the nature of LLMs and their training data. These models are built on vast corpora of text, and Hamlet occupies a privileged position in English-language discourse. It appears in countless essays, scholarly articles, and online discussions, giving AI systems a rich interpretive reservoir to draw from. As a result, generating a plausible essay on Hamlet requires no genuine engagement with the text—only a prompt. When students can outsource interpretation so easily, traditional take-home assignments lose their ability to measure understanding.

This erosion of authenticity matters because literary study is not merely about producing words; it is about grappling with ambiguity, forming original insights, and developing interpretive judgment. Hamlet is famously complex, inviting questions about action and inaction, mortality, and meaning. These questions cannot be answered by parroting consensus views—they require intellectual struggle. If students bypass that struggle through AI, they miss the very experience that makes literature transformative. Assessment must therefore shift from evaluating polished output to observing the thinking process in real time.

On-site, invigilated written exams restore this integrity by creating conditions where students must rely on their own interpretive resources. Without access to AI, they are compelled to engage directly with the text and articulate their ideas under pressure. This format does not eliminate preparation or external influence, but it ensures that the final performance reflects the student’s own reasoning. In doing so, it preserves the core educational value of literary study: the cultivation of independent thought.

Moreover, this kind of exam would incentivize students to practice writing at home. Knowing that their grade will depend on their ability to produce coherent, insightful prose under time constraints, students will naturally rehearse the skills required for success. They will write practice essays, refine their arguments, and internalize the rhythms of literary analysis. In this way, the exam becomes not just an evaluative tool but a motivational one, encouraging sustained engagement with both the text and the craft of writing.

Finally, homework can still play a role in this model, provided it is not graded in a way that affects the final mark. Assigning essays for indicative feedback rather than summative assessment allows students to experiment, receive guidance, and improve without the temptation to outsource their work to AI. Teachers can use these assignments diagnostically, identifying areas of weakness and tailoring instruction accordingly. By decoupling homework from high-stakes grading, we preserve its pedagogical value while neutralizing its vulnerability to technological shortcuts.

In conclusion, the rise of LLMs has exposed a vulnerability in traditional assessment practices, particularly for canonical works like Hamlet. If we want English majors to truly understand literature rather than merely simulate understanding, we must design exams that cannot be outsourced to machines. On-site, invigilated written exams are not a nostalgic return to the past—they are a necessary adaptation to a future where authenticity is at risk. By embracing this change, we safeguard the intellectual rigor that defines the study of English.

Calculators, Screwdrivers, and Other Tools

On Saturday, I wrote about one objection to my “Prompts and Conditions” post that came up during my faculty development course on “Teaching Writing in the Age of AI.” This post is about another one, which begins with something I say at the end of that post:

Surely, we can say what a math student at any level should be able to do without a calculator, or what a history student should be capable of without the internet. AI generalizes this problem. In each field, we must decide what students should be able to do on their own.

A participant reminded me that the calculator analogy has long been discredited as a helpful way of understanding AI, especially if our aim is to limit reliance on it. After all, everyone relies on calculators these days, and very few exams ban their use. This hasn’t caused any kind of catastrophe for education. We have simply changed the way we teach and learn math.

I should say that I think this claim is worth looking into in some detail. After all, it is my impression that many high-stakes exams — like the American SAT — have very specific rules for calculators that limit the functions that are allowed. I’m pretty sure this has been the case for as long as calculators have been available; their use is governed by policy. But the general idea is of course true: math instruction hasn’t banned calculators altogether, nor have we simply kept teaching the same “old” things. In any case, talking about “what a math student should be able to do without a calculator,” the participant suggested, was like asking what a carpenter should be able to do without a screwdriver. The whole point of learning the craft is learning how to use the tools.

I immediately liked this way of putting it because when he mentioned carpentry I thought he was going to talk about power tools, but the problem, of course, arises already at the level of saws and hammers and screwdrivers. We may as well start there. Would I say, “Surely, we can say what a carpenter’s apprentice should be capable of without a screwdriver”? As it happens, I would answer yes. But I must first emphasize that I have not said that students and apprentices should be examined only without their tools. I have said they should also be examined without their tools and that, in any program of instruction, there must be some set of skills that can be examined this way. It’s not either/or, but both, separately.

In the case of the carpenter’s apprentice, I suggested that someone who is able to use the standard toolkit will also be (and should also be) able to talk intelligently about how they would go about a particular task, without holding any of the tools in their hands. Also, an apprentice woodworker can be sent into the woodshed to pick out some boards that would be ideally suited to making a particular piece of furniture. This requires no tools, only a good grasp of the materials themselves (a “feel” for them, if you will). It might also be worth seeing if they can “eyeball” rough dimensions, i.e., whether they have realistic intuitions about size and space.

(I am sometimes told horror stories by teachers of quantitative methods about students who do not immediately recognize that a calculation they have let a spreadsheet carry out is off by three orders of magnitude and even in the wrong direction: positive when it should be negative, negative when it should be positive. It is worth having students estimate calculations, without a calculator, simply to make sure they have a realistic sense of the thing they are calculating.)

I think it is true that we must accept AI into writing instruction just as we have accepted calculators into math instruction. We can’t bury our heads in the sand (or, perhaps more precisely, require our students to tie their hands behind their backs). But, just as we can require an apprentice to be able to tell us how they plan to go about a project before they pick up a tool and show us what they’re capable of, we can reasonably require university students at least to tell us how they would use AI to solve a writing problem. But we can go further.

My participant suggested that I was ignoring what we know about “embodied cognition” (and we might add “extended mind”). But I am absolutely on board with those sorts of views. We exist in an environment of tools and machines, which not only help us to get around, but shape our very being. I will insist, however, that our environment also includes other people and the language we use to communicate with them. Our words, as Heidegger pointed out, are part of the “equipmental contexture” of our existence, our being-with-others. Teaching students how to write good prose by themselves is very much a way of helping them embody their knowledge.

I want to stress that my point is that AI “generalizes” the issue. (Indeed, Silicon Valley keeps promising us something they call AGI: “artificial general intelligence.”) With minimal prompting, AI is increasingly able to simulate almost any academic competence at least “passably” (deserving a C, let’s say, or what we call a 7 in Denmark). If universities are to maintain the integrity of their assessments, we need to find a way to make sure that the actual bodies of the students are capable of something in particular, something that reflects 3 to 5 years of study. And that means we have to come up with some things we can test their bodies’ ability to do.

On Ruining the Weekend

I’m running a faculty development course this month called “Teaching Writing in the Age of Generative AI” and it has led to some interesting discussions. I asked the participants to read and reflect on my post, “Prompts and Conditions,” which I wrote a couple of years ago, when the challenge that AI might pose for higher education was just coming into view. I think the basic idea of the post was agreeable to most of the participants, but a number of them had some interesting reactions to what I would call the “rhetoric” of the post. I want to address two of them in particular, each in its own post.

One of the participants noted a parenthesis at the end of what I took to be a very practical remark about setting deadlines. Here’s what I had written:

If we imagine classes are held on Mondays, students can be given the take-home prompts at the end of the class and submit their essays on Friday (there’s no need to ruin their weekend).

“Doesn’t this suggest that our students don’t want to learn? Why would we presume that studying on the weekend ruins it?” asked the participant. He recalled that when he was a student he was happy to spend his evenings and weekends studying and, indeed, that he felt that he was expected to do so by his teachers. Is this something that we have suddenly abandoned? (And, we might add, does AI force us to do so?)

Now, I must say that I had not expected anyone to take this remark as seriously as that. The idea of “ruining” a weekend by doing school work was only intended as a lighthearted gesture at the priorities of young people. But perhaps playing to these priorities is ill-advised; and perhaps it comes off as condescending, even to the students who have them. It’s always worth thinking about the rhetorical effects of our pedagogical strategies.

In defending my choice of words at the time, I did point out that he was taking a rather hard line against another kind of concern that teachers often express for their students: not everyone has the luxury of devoting their entire lives to school while they are attending university. Many have jobs on the side; some even have families to tend to. “Ruining the weekend” may be more existential for some students than merely skipping a night on the town. This seemed to elicit some nods in the room, including from my critic.

In any case, it’s important to remember that deadlines are always somewhat arbitrary and are likely to occasion both procrastination at first and consternation at last among some students. So I spend a lot of time teaching (and coaching) students (and faculty) to plan their work in orderly, half-hour “moments” of composition so that they can comfortably meet their deadlines without having to miraculate a text at the eleventh hour. For the same reason, whenever it is up to me, I like to place that eleventh hour before noon on a Friday, rather than midnight on a Sunday. It’s just a good way to signal that you may as well get the work done during the working week, as part of your regular day-to-day program of study. It keeps the task of doing an assignment in proportion.

Like I say, I don’t want to dismiss the concern about the rhetorical force of talking about “ruining” our students’ weekends by asking them to study. But perhaps the real question is precisely whether what they do during their “free” time is chosen or assigned. A student who wants to read a book you have suggested, or to do some writing of their own, can still have that hope “ruined” by poor planning and ostensibly “generous” deadlines.

I don’t claim to have a definitive take on how to talk about school-life balance with students, or on how important we should presume (whether in our thoughts or in our speech) learning is to them. As an occasion to give it some thought, my participant’s remark is well taken. I’m happy to hear more thoughts in the comments below.

The Human Scale

When I talk about academic writing with students at the Danish Design School, I begin by showing them a short clip of Thomas Leslie, Professor of Architecture at the University of Illinois, talking about Pier Luigi Nervi’s approach to designing structures. Before I unpack it, I encourage you to take five minutes to watch it yourself (from 27:25 to 32:12) and see if you can discern the lessons for writers that I extract from it.

First of all, it’s useful to think of your writing problem as in some way related or analogous to the problems of your academic discipline. (When I talk to students in innovation management, for example, I tell them that “Good writing is the creative destruction of bad ideas”; when I talk to project management students, I try to get them to see that a collaborative writing project is one of the most difficult projects they may ever manage.) Getting students to think of writing a paper as a “design problem” is especially apt, not least on Leslie’s definition of a “designer”: “someone who thinks things through.”

Another great point that he makes early on is that Nervi’s greatness lay in not “throwing up his arms” at “impossible constraints” but, rather, going ahead and working within them, to “hack together” a solution under the material conditions he had been given. Indeed, for Nervi, the aim was to build something “out of almost nothing” and this, of course, is the essential and difficult problem of the writer: to use the very limited materials provided by words to express what we know. Nervi, as any writer must, was also able to adapt his thinking to the time constraints he was given. He had a deadline and he made sure that his design could be realized within it.

But what I really love about Leslie’s presentation here is the way he relates the handmade elements (the ferrocemento components of the roof) to the larger project. The “rigorous, algorithmic process” of producing and assembling the pieces ends up yielding an impressive “architectural space”. The “structural form” is intimately related to “the pattern that comes from the human scale of the process”. Nervi’s design assumed that he would be “working with human beings” and this sensibility is then felt even in the enormous exhibition hall that results. Something important emerges from the fact that, as Leslie puts it, “people are actually making all of the things.”

This is something I latch onto, especially, as you can imagine, in this so-called “age of AI”. A larger text, like a research paper or thesis, will necessarily be a complex structure with many working parts. But it is important that the reader experience this overarching (!) “architectural form,” not as something that was “generated” by a monstrously intelligent machine on the basis of some “large” stochastic “model” but, rather, as something that was crafted by human hands, one paragraph at a time, one deliberate moment after another.