On Saturday, I wrote about one objection to my “Prompts and Conditions” post that came up during my faculty development course on “Teaching Writing in the Age of AI.” This post is about another one, which begins with something I say at the end of that post:
Surely, we can say what a math student at any level should be able to do without a calculator, or what a history student should be capable of without the internet. AI generalizes this problem. In each field, we must decide what students should be able to do on their own.
A participant reminded me that the calculator analogy has long been discredited as a helpful analogy for understanding AI, especially if our aim is to limit reliance on it. After all, everyone relies on calculators these days, and very few exams ban their use. This hasn’t caused any kind of catastrophe for education. We have simply changed the way we teach and learn math.
I think it’s worth looking into this claim in some detail. After all, it is my impression that many high-stakes exams — like the American SAT — have very specific rules for calculators that limit the functions allowed. I’m pretty sure this has been the case for as long as calculators have been available; their use is governed by policy. But the general idea is of course true: math instruction hasn’t banned them altogether, nor have we kept teaching the same “old” things. In any case, talking about “what a math student should be able to do without a calculator,” the participant suggested, was like asking what a carpenter should be able to do without a screwdriver. The whole point of learning the craft is learning how to use the tools.
I immediately liked this way of putting it. When he mentioned carpentry, I thought he was going to talk about power tools, but the problem, of course, arises already at the level of saws and hammers and screwdrivers. We may as well start there. Would I say, “Surely, we can say what a carpenter’s apprentice should be capable of without a screwdriver”? As it happens, I would answer yes. But I must first emphasize that I have not said that students and apprentices should be examined only without their tools. I have said they should also be examined without their tools, and that, in any program of instruction, there must be some set of skills that can be examined this way. It’s not either/or, but both, separately.
In the case of the carpenter’s apprentice, I suggested that someone who is able to use the standard toolkit will also be (and should also be) able to talk intelligently about how they would go about a particular task, without holding any of the tools in their hands. Also, an apprentice woodworker can be sent into the woodshed to pick out some boards that would be ideally suited to making a particular piece of furniture. This requires no tools, only a good grasp of the materials themselves (a “feel” for them, if you will). It might also be worth seeing if they can “eyeball” rough dimensions, i.e., whether they have realistic intuitions about size and space.
(I am sometimes told horror stories by teachers of quantitative methods about students who do not immediately recognize that a calculation they have let a spreadsheet carry out is off by three orders of magnitude, or even in the wrong direction: positive when it should be negative, negative when it should be positive. It is worth having students estimate calculations, without a calculator, simply to make sure they have a realistic sense of the thing they are calculating.)
I think it is true that we must accept AI into writing instruction just as we have accepted calculators into math instruction. We can’t bury our heads in the sand (or, perhaps more precisely, require our students to tie their hands behind their backs). But, just as we can require an apprentice to be able to tell us how they plan to go about a project before they pick up a tool and show us what they’re capable of, we can reasonably require university students at least to tell us how they would use AI to solve a writing problem. But we can go further.
My participant suggested that I was ignoring what we know about “embodied cognition” (and we might add “extended mind”). But I am absolutely on board with those sorts of views. We exist in an environment of tools and machines, which not only help us to get around, but shape our very being. I will insist, however, that our environment also includes other people and the language we use to communicate with them. Our words, as Heidegger pointed out, are part of the “equipmental contexture” of our existence, our being-with-others. Teaching students how to write good prose by themselves is very much a way of helping them embody their knowledge.
I want to stress that my point is that AI “generalizes” the issue. (Indeed, Silicon Valley keeps promising us something they call AGI: “artificial general intelligence.”) With minimal prompting, AI is increasingly able to simulate almost any academic competence at least “passably” (deserving a C, let’s say, or what we call a 7 in Denmark). If universities are to maintain their assessment integrity, we need to find a way to make sure that the actual bodies of the students are capable of something in particular, something that reflects 3 to 5 years of study. And that means we have to come up with some things their bodies can be shown to do, and ways to test for them.