The Art(ifice) of Learning

For the third year in a row, I’m running my series of informal talks about the Art of Learning here at CBS. It starts today. It’s a great chance for me to hear what I’m currently thinking about the state of knowledge in higher education. As usual, the plan is to begin with the art of knowing things, and then move on to reading, thinking, writing, listening, talking and enjoying things. There’s a final talk at the end about how to retain what we learn and maintain the ability to learn (and relearn) still more things.

This year, there’s a shadow hanging over me, of course. Students are asking themselves whether their time over the next five years is best spent developing their natural talents or learning how to use an artificial intelligence. As I said already at the end of last year’s series, I fear we’re approaching a dystopian future in which getting an education really just means fine-tuning (and perhaps learning how to prompt) a bespoke AI, an artificially intelligent assistant (or outright alter ego) customized to your particular discipline.

For now, I’m sticking to my guns. Students should learn how to use their minds and bodies, shaping their senses and their motives ever more precisely to engage with an increasingly digital environment. They should not leave their thinking to machines any more than they should leave their feelings to them. Norman Mailer once said that technology is always offering more power and less pleasure. I understand the temptation. I’m going to spend some of my talks offering some arguments for resisting it.

A university education is already a very artificial thing. But, as with art, the point is to use this formal setting to explore the gifts you were born with, to find out what your body can do, as Spinoza put it, not just, pace Deleuze and Guattari, to plug it into a bunch of machines.

Parts

The better draughtsman has more ‘on his mind’ concerning his subject; and by embodying his knowledge and understanding in each purposeful line or passage of his drawing, achieves with apparent — even with real — ease an expression of form, character, action — whatever may be his immediate object — that the novice, lacking such equipment and relying on his vision alone, finds beyond his power.

Oliver Senior

I mentioned the other day that I try to draw a hand every morning. This morning I decided to focus on just the thumb. Yesterday, I had tried to do my index finger. It’s a good exercise, especially because I’m also trying to capture the three-dimensionality of what I’m drawing, its volume. It’s actually thrilling to see what a few lines in the right places do to the image of a hand, how trained we are to see depth and weight when it is indicated in a drawing.

I say “trained” advisedly. As with writing, a drawing has to “read” according to all sorts of conventions. We read a lot “into” a drawing; the viewer does a lot of the work for the artist. We might say that the artist’s skill shows in how well the viewer is “set up” to see the object as intended. All of this is useful for me, as a writer, to notice.

Try working on the parts of your writing in isolation from the whole. Work on the paragraph not the paper, the sentence not the paragraph. You can even sometimes give yourself a moment or two to work on the word not the sentence, what it means and how it is spelled, or, to follow this out to its extreme, to work on the individual letters. Try to notice how the larger effects are built up from simple gestures. In the end, you’re just marking up a page.

Writing on a Saturday

When I decided to “write every day” this semester, I didn’t even think to make it explicit that I didn’t mean to include weekends. But I happen to be working a shift in the library today, so I thought I would write just a few sentences about this.

Many scholars do, of course, write on the weekend, mainly because they are almost certainly not teaching or in meetings. But I always advise people to be very deliberate in the way they use their writing time on their days off. Don’t clear a whole day for writing. Reserve a few hours and make sure that your loved ones know which hours. Make them promise to leave you alone (or understand why you’re ignoring them) by promising to give them some time later.

This is generally a good principle: Begin something knowing when you’ll stop. Inform the people around you when you can and when you can’t pay attention to their needs. With that bit of planning in place, by all means, sit down to write on a Saturday too. Work at it for a few hours. Then, do go outside.

Ontology

“The notion of ontological commitment belongs to the theory of reference.”
Willard Van Orman Quine

Here’s a worry I’ve been expressing lately when talking about AI. When I was in school, certainly when I was at university, I understood the difference between “books” and “periodicals” as well as the difference between “fiction” and “non-fiction”. I recall being taught these things (or perhaps just reminded of them) by librarians who were also trying to teach me the Dewey decimal system and, like I say, by the time I was an undergraduate I understood that a library was a collection of materials, grouped into broad classes. Referencing was one place where the difference between these classes made a difference. You cited a book in one way and a journal article in another because finding a book and finding a journal article in a library were subtly different processes.

That is, a “reference” was literally a way of pointing to a thing that existed in the library at a particular coordinate. A reference that got those coordinates wrong might still have named the source correctly but made it harder to find. Knowing how to use a database, however, allowed you to find a badly referenced source simply by hypothesizing errors in particular “fields”: maybe the writer got the date wrong, or the title, or was citing a chapter rather than a whole book? But the writer could, in principle, also just “make up” a reference. One could construct something that looked like a source in the scholarly literature, a treatise or an article, and it might simply not exist. I’m not going with this quite where you think, but, yes, one of the main early complaints about language models was that they would “hallucinate” their references: they would construct plausible-looking references that had, if you will, no referents.

Here’s the real worry I have. Now that our databases (like Scopus and JSTOR) are offering AI functions, our students (and we ourselves, yes, perhaps even we librarians) will increasingly interact with them, not by querying terms in data fields (author, date, title, volume, subject terms, etc.) but simply by asking them about facts and ideas in natural language. And they’ll give us fully referenced natural-language answers in return. Already today, I find that students, who do much of their reading on screens, are less clear about the difference between books and papers, treatises and articles, as kinds of text. In the future, we may come to think that the facts — both of the world and of our scholarship — are “given” to us “immediately”. Our understanding of the ontology of the library will erode. Our view of what constitutes our knowledge will grow clouded. Our grip on reality will weaken.