Take Five

with apologies to Dave Brubeck

Here’s an exercise I gave to my students (in two different classes) last week. It takes fifteen minutes in all (plus time to explain it and debrief it) and it requires your class to have reached a stage where the students can be asked to call both an object and a set of concepts to mind; that is, they must be able to easily imagine a body of empirical fact and put it in a familiar theoretical perspective. (In one class — an undergraduate management class — the students had been listening to a podcast each week about a different innovation and reading various innovation theories; in the other — a professional master’s class — the students were working on an analysis of their own organizational practice applying an organizational discourse approach.) The exercise contains both an individual component and some work to be done in pairs. (In the case of an odd number of students, there will be one group of three.)

In the first part of the exercise, students are given five minutes to prepare an oral presentation for one of their fellow students. In five minutes, they have to pick an object that they are familiar with from the class — a product or a process, an action or an event — to focus on; they also have to pick a theoretical perspective that will give them “categories of observation” (concepts) with which to analyze it; and they have to plan out a presentation following the standard outline:

  1. Introduction
  2. Theory
  3. Analysis
  4. Discussion
  5. Conclusion

The presentation is to take five minutes. They are to imagine a presentation that devotes roughly one minute to each task, keeping them relatively pure. That is, the first minute will introduce the object, the theory, and their thesis, telling the listener something about why this interests the speaker. The second minute will focus on the theory, presenting the concepts that will be used in the analysis, but not yet doing the analysis itself. The third minute will present the analysis, taking the concepts for granted, and interpreting the facts based on observations. The fourth minute will present implications for theory or practice and the fifth minute will summarize, conclude, and round things off.

In the second part of the exercise, one student presents what they have come up with for a partner. (If there is a group of three, this person has an audience of two.) Before they begin, I remind them that they have a minute to get through each part and that silence is very acceptable. If they can’t think of anything more to say by way of introduction, they should just sit there and think about how to talk about theory for the remaining seconds; when they run out of things to say about theory, they should silently consider what they’ll say about their analysis. Importantly, except for minor reactions (nodding, smiling, laughing, groaning) the listener is not to contribute; it’s not the listener’s task to break a silence. As every minute passes I simply announce that it is time to move on.

In the third part, the other student presents in the same way. (If there was a group of three, one person will present to the teacher, the other will present to the previous presenter.) This time, before starting the clock, I remind them of what they are about to do. The second speaker has a much better sense of what is possible because they’ve just heard someone else do it. Also, I remind them that it’s fine to be tentative and informal in the presentation; they don’t have to present a finished idea, but could just present a hunch or hypothesis. The important thing is to move through the five moments: introduction, theory, analysis, discussion, conclusion. You’re not trying to impress anyone or win an argument; you’re trying to see what you have to say. Some of the students discovered that the last minute could actually include a question or two from the listener: the speaker provides a quick summary and adds, “Any questions?”

At the end, I point out that five minutes isn’t a very long time. But it is in fact the time it would take the ideal reader to read the ideal five-paragraph essay. Such an essay would take, again ideally, two and a half hours to write. So we can imagine spending two and a half hours preparing the same presentation. How much better would that go? I also suggest they try framing it very formally. Make five slides to present, book a group study room, maybe invite a few more students to watch, and dress professionally for the occasion. Now do a five-minute pitch. All of this is simply to show them that putting deliberate effort into presenting their ideas in a finite amount of time is a worthwhile thing to do. And then there’s that essay they have to write for the exam anyway (both classes are working towards a 5-page, 9-11-paragraph final). I’ve just given them 15 minutes to experience how ready they are. They can give themselves that experience again any time they want. And two and a half hours is plenty of time to write five actual paragraphs. They can try just doing that to prepare the presentation.

Knowledge-able-(ism?)

My view is that if you know something “for academic purposes” then you are able to compose a coherent prose paragraph about it in half an hour. As an academic writing instructor, that is what I teach; my core “learning objective” is the ability to support, elaborate, or defend a clearly stated proposition using at least six sentences and at most two hundred words in twenty-seven minutes. I believe this is a valuable skill and that students do well to direct a significant portion of their energies while at university toward developing it. The ability to write about things you know at a rate of two paragraphs an hour is one you want to have. It will help you, not just get through university, but make a useful contribution to professional and civic life. It’s something I know how to help you get good at.

It’s also something I can show you how good you are at. (I say “show you” advisedly because I normally refuse to tell you. If you’re following my instructions you don’t need me to decide whether or not you’re improving, just as you don’t need your personal trainer to tell you that you’re getting into shape. You can feel the improvement yourself.) Take a moment at the end of each day to prepare a moment at the beginning of the next in which to write a paragraph that you will then seek feedback on from a peer. My ideal student will write 80 paragraphs in 16 weeks this way and receive (and give) 720 minutes of peer feedback. Every two weeks, they will submit a five-paragraph essay, and receive a grade that suggests how well they’re writing relative to their cohort. At the end, they will sit for a 3-hour exam, writing yet another five-paragraph essay, which they will then discuss with me for 20 minutes (ideally, on the same day they wrote it). Their grade counts for half their course grade.

Let me say at once that none of my students are ideal and I have never been given a chance to try to examine them under these conditions.

At this point, however, I am sometimes accused of “ableism”. Or maybe I have this time succeeded in presenting what I do in such a way that the charge doesn’t seem as natural? After all, I am tying the test directly to the competence that I am trying to teach, to help the students develop. If I were teaching them how to calculate bond prices I would not be accused of ableism if I asked students to do a bond price calculation. Since I am trying to teach them how to write paragraphs and essays, getting them to write one for me to show me what they have learned should also, I hope, seem entirely reasonable. It is only my suggestion that being able to write a paragraph is a reasonable test of whether or not you know something that might still cause alarm bells to go off for some disability scholars. And not just disability scholars, actually. My view is sometimes also considered “classist” — indeed, even sexist and racist. The idea is that conventional prose is the “privileged” domain of “normal” people, where they have an unfair advantage over various groups who are marginalized, socially or materially, by birth or by accident. “The normate subject is white, male, straight, upper middle class,” Jay Dolmage tells us; “the normal body is his, profoundly and impossibly unmarked and ‘able.'”

The criticism that is sometimes levelled at my approach to writing is that I privilege this subject. I have tried to address the underlying idea that conventional (or “normate”) prose is exclusionary in a previous series of posts that respond directly to Dolmage’s arguments “against normal writing”. In the rest of this post, I want to acknowledge the undeniable practical problems that my approach poses for some students and scholars and, once again, defend the radically inclusionary nature of prose as a means of communicating ideas. In my view, requiring students to express themselves in prose is the best means we have for levelling the playing field for a socially and materially, mentally and physically, diverse student body.

The idea of a level playing field was recently invoked by Jennie Young, the director of the first-year writing program at the University of Wisconsin at Green Bay, in her piece in Inside Higher Ed on the “boon” that artificial intelligence will be for writing instruction. “It’s going to help level the playing field,” she argued:

Here’s the truth about the “achievement gap” in writing skills: students who have professional parents and who went through K-12 in higher socioeconomic school districts tend to graduate knowing how to structure an essay and write grammatically correct sentences (for the most part); first-generation students who went through K-12 in underresourced and lower socioeconomic school districts do not graduate with these skills nearly as often. Here’s the very important takeaway from this disparity: the disparity is environmental, not biological. In other words: the students who know how to structure an essay and write grammatical sentences are not more intelligent than those who don’t.

My immediate reaction to this idea was: Yes, good writers are not necessarily more intelligent than bad writers, but they are better writers. And, as a writing instructor, I see my job as helping students become better writers, not to diminish the value of good writing so that those who don’t “know how to structure an essay and write grammatical sentences” can still be recognized as the capable thinkers they may yet be. In fact, I explicitly tell students that one reason to learn how to write is to make sure that they are recognized for their (brilliant) ideas, not their (struggling) style.

Also, I don’t like the idea of crediting writing ability merely to a certain kind of upbringing. I think students who can write well should be able to take pride in that ability, not have it dismissed as the unearned wealth of “privilege”. After all, many working-class students succeed at university, and, later, in academic careers, precisely because, through hard work, they master the conventions of academic prose and outperform their wealthier rivals. And, while it is true that good writing runs in certain families, it is by no means easy for all wealthy students to write well; surely, parents who “raise their children right”, teaching them (in my personal opinion) to compose themselves in coherent prose paragraphs, should not be told by first-year writing instructors that it’s a wasted effort. It may be, as Young seems to believe, that good writing isn’t any kind of proxy for other kinds of learning, but even so (and I’m not entirely ready to grant this), good writing is still an indication of writing ability, which has value in its own right. Or that, anyway, is what I’ve been telling myself all these years working as a writing instructor.

In short, I’m not willing to abandon my ideal of a timed, on-site, invigilated written exam in the name of “universal design”, because I believe that the ability to write a coherent essay under formal constraints is one that is worth having. If a student is completely prevented from doing this by a particular disability they are, unfortunately, prevented from succeeding academically, or at least severely limited in this regard. I think we need to be honest about this. Fortunately, prose, by its very nature, accommodates pretty much any disability.

To demonstrate this, by way of conclusion, let me recall the example of Jean-Dominique Bauby, the French magazine editor who suffered from “locked-in syndrome” after a stroke. He constitutes an almost pure example of someone whose in-ability to write (he couldn’t even lift a pen) cannot be interpreted as a lack of learning or intelligence. We know this because he was able to dictate an entire book by blinking his eyes.

On my approach, someone like this would of course need special accommodations. But I think I would take a somewhat hard line on the process. If he had indeed followed my class, I would also have expected him, like my other students, to compose a paragraph a day, albeit in his head, and to have “dictated” it (again, by blinking his eyes) to the stenographer, who, I expect, would have been provided to him under the terms of his accommodations. For the exam, he would be given three hours, also like everyone else, but to this would be added the time it would take for him to communicate his compositions to the stenographer.

That is, he would still be required to compose a 1000-word essay in three hours. It’s just that the physical process of getting it down on the page would take a little longer and require some additional means. The best way to do this would probably be simply to stop the clock for him every time he was ready to dictate something (I think it would be unfair to require him to hold an entire essay in his mind and then dictate it all of a piece, although, as I understand it, this was in fact Bauby’s approach, composing and dictating the book one chapter at a time.) But I think it would be unfair to the rest of his cohort to just let him write the essay in his own way at his own pace. That would simply be abandoning the principle of composition in moments that my class had been teaching.

Maybe you can think of a better way to accommodate Bauby or maybe you can think of a disability that would be more difficult to accommodate? To my mind, the ability to accommodate such an extreme disability demonstrates that requiring prose composition under formal constraints is not an ableist pedagogy. Indeed, it is the most “universal” design for education that can be imagined. Knowledge-able people are both able-to-know things and en-abled by their knowledge to do things. One of the things they can do is to write.

I could be bounded in a nutshell

…and count myself a king of infinite space.

Hamlet

I’ve been feeling a lot of resistance to (and getting some thoughtful pushback against) my suggestion that only a return to on-site examination will be able to preserve the integrity of written exams in higher education in the coming age of artificial intelligence. In particular, people don’t seem to like the idea of requiring students to produce roughly a thousand words in response to a given prompt in three hours at the end of a semester. As I understand it, they would prefer to require students to put together a portfolio of work done, both at home and in class, throughout the semester. I also get the sense that many of the people who express displeasure at my idea would prefer not to give grades at all, while I sometimes, when I’m feeling bold, go so far as to suggest grading on a curve. (This is not an option in Denmark, where it is literally against the law.) I think we all agree that students should get as much feedback on their writing as possible as part of their instruction. The point of contention is how (and, like I say, sometimes whether) a grade should be assigned at the end.

In my last post, I tried to imagine a course on Hamlet in which the final grade was based solely on four in-class essays, though students would also write four take-home essays for which they would be given an indicative grade to gauge their performance. I’ve previously described what I think of as the ideal exam (a 3-hour written exam followed immediately by a 30-minute oral examination). In all cases, the important thing is to develop a prompt that a student who has taken your course should be able to answer intelligently given three hours and a set of materials you have selected for them. That is, you should be able to imagine a three-hour performance of the competence that you have been trying to inculcate all semester. If you cannot imagine something a student of yours should have become good at doing for three hours, nor how you would judge how well they have done it, I fear that your learning objectives are mere abstractions.

In order to understand what I mean, consider another classic set-up for a course on Shakespearean tragedy. As a quick aside, it recently occurred to me that Shakespeare is a great example precisely because his profound influence on the English language means that knowledge of his work is literally “baked in” to the parameters of large language models like OpenAI’s GPT. (Just try prompting ChatGPT with the question, “To be or not to be?”) That means Shakespeare is simultaneously the most important author to teach English majors and the easiest to fake knowledge of using artificial intelligence. Anyway, imagine a course in which students are to read three tragedies: Hamlet, Othello, and Macbeth, as well as various theories of Shakespearean tragedy. We can, again, imagine a 16-week course, here with 4 weeks devoted to each play bookended by 2 x 2 weeks devoted to general ideas. You can assign whatever homework and in-class activities you like throughout the semester; I want to focus on what happens at the end.

After the last class, the students are given a week or two to prepare for the final exam, which consists of a prompt that relates to one of the three plays. Students may use whatever theories they like from the course to frame their arguments. They are given three hours to write up to a thousand words. (Ideally, they would then also defend their writing orally. But this is not necessary for the point I’m making here.) It seems natural — but I’ll leave it up to you — to give them the text of the play they will be writing about. You may also let them have access to the theoretical literature that you discussed in class so that they can quote from it. I would not, however, let them bring notes from their reading or (for the same reason) their own copies of the texts. You want to know what their minds and bodies are capable of in direct confrontation with the materials. You don’t want to give them the challenge of putting together a binder full of good original prose about each play and each theory that they can then transcribe in the examination room. After all, that binder can be filled with paragraphs written by an AI.

To me it seems obvious that these conditions would give students ample opportunity to demonstrate their familiarity with Shakespeare’s plays and their understanding of tragedy. (If you don’t think these are important things to know about, that’s fine; imagine another course, other texts, other issues.) Since the students would not know in advance which play would be assigned, we can assume (though it’s a game of probabilities, of course) that they know the other two plays as well as the one they happen to be examined in. And since the student knows what the task is in advance, we can assume that they’ve gotten themselves and their prose into shape and are performing as writers “at the top of their game”. In part, we are testing them on their ability to pull off this performance. I think we can rest assured that if they do well here, they are capable of many other things.

That’s really the thing that I think my critics underestimate. Hamlet complained that Denmark was a prison but admitted that it was, to a certain extent, all in his mind (“there is nothing either good or bad, but thinking makes it so”). Were it not for his disturbing dreams, even “bounded in a nutshell” he would “count [himself] a king of infinite space”. For my part, I’m fond of quoting Ezra Pound’s ideas on form: “think of it as center around which, not a box within which,” he said. We are not confining our students to the examination room for three hours, nor limiting them to a thousand words. We are, I would argue, affording them an opportunity to exercise their knowledge and imagination, to articulate the prose of their world. Just as a trained musician who is given a score and an afternoon can offer a competent interpretation, or a skilled carpenter can turn a pile of wood into a sturdy table in a few hours, so too can a learned scholar produce an interesting analysis of a work of literature in three hours. Given only half a day, it may not be their “best work”. But knowing what the terms were, we can certainly use the result to judge their abilities.

At about this point, I imagine, you are thinking about how to accommodate their disabilities. That’s the topic of my next post…

Prompts and Conditions

In the future, it will become increasingly necessary to decide what sorts of writing students should be able to “generate” on their own. Those who argue that AI is a “powerful learning tool” will still need to tell us what their students are expected to actually learn. And this means that they will have to describe tests that their students are expected to pass under a set of well-defined constraints. Just looking at their prompts, and the conditions under which their students are expected to respond to them, will tell us what sort of “knowledge and skills” these teachers have been trying to inculcate.

While I am personally skeptical, it is possible that interacting with ChatGPT can teach students “powerful” lessons about how to become better writers. (“A good notation,” said Bertrand Russell in his introduction to Wittgenstein’s Tractatus, “[is] almost like a live teacher … and a perfect notation would be a substitute for thought.” It’s possible that the same may be said of a good language model.) If so, the proof will be in the pudding that students are capable of making without its assistance. (A student who has employed a “substitute for thought” throughout their university education will presumably have some difficulty thinking on their own. This is a good way to get at the issue I’m addressing.) If you want to convince me that language models should be integrated into the higher education curriculum, you should describe the minds of the students who are shaped by their use. And the only way to do this is to prescribe some mental activities that they can demonstrate they are capable of under controlled exam conditions.

To this end, I want to propose a course structure that covers 16 weeks and 8 essays. Let’s give you 3 hours of instruction per week and let us assume that the students will do at least 6 hours of work outside of class per week. Every four weeks, I will need to steal 3 hours from this budget for an in-class essay. The rest of the syllabus is up to you. (I’ll of course lay out my solution to this problem in some follow-up posts.)

My suggestion is that the students write a 1000-word essay every two weeks, alternating between a take-home prompt with one week in which to do the work and an in-class prompt with three hours in which to do it. If we imagine classes are held on Mondays, students can be given the take-home prompts at the end of the class and submit their essays on Friday (there’s no need to ruin their weekend). The in-class essay can likewise be held during an extra class session on Friday. Both the take-home and the in-class essays will be read and graded by the teacher, but only the grades on the in-class essay will count. That is, the students will receive qualitative and quantitative feedback on how they are doing on 8 essays, but only four of them will actually be considered a test of what they have learned.

I will imagine you have access to some sort of “computer room” for the exam. That is, students will be given standard word-processing software but — and this is of course crucial — no access to the Internet or to generative AI of any kind. You can decide whether to make a set of sources or other materials available to them, including dictionaries, handbooks, and casebooks. Remember, you are trying to show me what they are capable of, and you may indeed be trying to show me how good they are at drawing on their course readings in their writing. (What I am not interested in, however, is how good they are at prompting ChatGPT to write their essays for them!)

It seems natural to set up the take-home essays as a task that is similar to the in-class essays. That is, you want the take-home essay to give them a chance to learn, in part, how to interpret the prompt and then find the sources they need to respond intelligently. Also, you want to make sure that they are learning how to structure and flesh out an argument spanning 500-1000 words in three hours. The good student would impose a discipline on their process working at home that will train them to perform well under exam conditions. For example, a while back I suggested that a good first assignment in a course on Hamlet would be simply to ask, “What happens in Act I?” while a good second assignment might be “Why is Claudius still alive at the end of Act II?” In both cases, the text of the play might be the only source the student is required to cite; but, depending on how one has approached these types of question in class, the student might be encouraged to tackle complex issues of interpretation as these come up in the long history of critical work on the play. Like I say, it’s up to you how to design both the course (which doesn’t have to be about Hamlet) and the assignments. What interests me is how you would demonstrate what the student has learned.

It’s possible to make the assignments test only course content from the previous two weeks, or to let them be more cumulative. I imagine that the student’s writing skills will be evaluated to an increasingly higher standard as the course progresses. But their understanding of the themes of the course would no doubt also be expected to gain in sophistication. By week ten, for example, I would have my students answer the question, “Did Hamlet succeed?” And I would have exposed them to a variety of answers to this question that their essay would demonstrate their knowledge or ignorance of, even if they did not mention them all. I am confident that every two weeks, looking at the essays the students submit, I would get a good sense of what they are learning, i.e., how I am doing as a teacher. If I have integrated AI into my teaching, I would likewise be getting a good sense of whether it is helping or hindering their learning.

“Surely, we can say what a math student at any level should be able to do without a calculator, or what a history student should be capable of without the internet,” I said in an encouragingly popular tweet yesterday. “AI generalizes this problem. In each field, we must decide what students should be able to do on their own.” That, in essence, is my challenge to those who believe that integrating generative AI into the higher education curriculum affords powerful new learning opportunities (beyond the obvious skill of learning how to use AI). It is true that in the future we will all let AI write speeches and emails that make erudite reference to Hamlet’s words in order to impress our audiences and engage their imaginations. But should an undergraduate degree in English literature teach you only how to get your AI to make you appear literate? Or do we want our students to actually internalize the language of Shakespeare?

APA, MLA, GPT, etc.

(Note: this post was drafted in April 2023, but posted October 17, 2024. It has been backdated to reflect the time of writing, not posting.)

Both the Modern Language Association and the American Psychological Association recently published guidelines for citing the output generated by large language models. The MLA called their blogpost “How do I cite generative AI in MLA style?”, while the APA called theirs “How to cite ChatGPT”. Both are obviously making an underlying assumption that is at odds with mine in “Why you can’t cite ChatGPT” and “Why you shouldn’t cite, acknowledge, or credit an AI with authorship”. Actually, we agree on a number of important points, but they think that, at the end of the day, you can cite an AI if you want, while I think the very idea is incoherent. In this post, I want to explain why I think this is important.

Consider my last post. I showed there that it’s possible to generate a passable undergraduate essay about The Great Gatsby in 10 minutes using ChatGPT. I was directly inspired to do the experiment by the MLA’s proposed guidelines for paraphrasing and quoting ChatGPT’s output, which surprised me. After all, they rightly “do not recommend treating the AI tool as an author”; what authority does citing an AI then actually invoke? How does sourcing a claim to ChatGPT contribute to the reader’s understanding of the writer’s argument? How could I have acknowledged ChatGPT’s contribution to my essay about The Great Gatsby without simply undermining my credibility as an author?

In their examples, they work through what looks like a standard essay prompt. The student has presumably been assigned an interpretative essay on The Great Gatsby and immediately went home and gave the prompt to ChatGPT:

Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald.

ChatGPT obliged with a numbered list of “various symbolic meanings” that MLA Style then proposed to paraphrase as follows:

While the green light in The Great Gatsby might be said to chiefly symbolize four main things: optimism, the unattainability of the American dream, greed, and covetousness (“Describe the symbolism”), arguably the most important—the one that ties all four themes together—is greed.

In the “Works Cited” list, they propose the following entry:

“Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.

My somewhat simple-minded question at this point is: Other than that the writer has apparently done zero reading or thinking to determine what the green light symbolizes, what has this citation actually told us? What have we learned about the basis of the idea that it symbolizes optimism, the unattainability of the American dream, greed, and covetousness? Do we know anything more about whether this is a good reading of the novel than we would if the writer had not told us where they got it? The obvious answer is no. Since we know what language models do, we know only that this is a plausible string of words about the topic.

It gets worse when we get to the issue of quotation. This time, they suggest we prompt GPT to produce some prose rather than a list. “In 200 words, describe the symbolism of the green light in The Great Gatsby.” ChatGPT obliges with some plausible prose and the MLA now suggest we ask it for sources. “What scholarly sources were used to generate this description?”

… wait, what?!?!

This question is simply AI-illiterate, MLA! We know that language models don’t “use scholarly sources” to generate their output. And that is indeed what ChatGPT patiently explains: “As an AI language model, I do not have the ability to conduct research or cite sources independently.” But then it does what we know it also often does: it hallucinates sources, generating a list of plausible-looking scholarly sources that, as you will discover if you try to actually find them in a library database, simply don’t exist. Unperturbed by this discovery (I suspect because they didn’t check the sources), MLA’s citation experts happily suggest we just uncritically report what the AI told us:

When asked to describe the symbolism of the green light in The Great Gatsby, ChatGPT provided a summary about optimism, the unattainability of the American dream, greed, and covetousness. However, when further prompted to cite the source on which that summary was based, it noted that it lacked “the ability to conduct research or cite sources independently” but that it could “provide a list of scholarly sources related to the symbolism of the green light in The Great Gatsby” (“In 200 words”).

Here’s their suggested entry for the works-cited list:

“In 200 words, describe the symbolism of the green light in The Great Gatsby” follow-up prompt to list sources. ChatGPT, 13 Feb. version, OpenAI, 9 Mar. 2023, chat.openai.com/chat.

Notice how weird this reference is. We’re being told exactly what the original prompt was, but we’re being given a paraphrase of the “follow-up prompt”. It is completely unclear to me what this quotation accomplishes in the text and why it is being recommended by the Modern Language Association.

Next we’ll be given guidelines for “how to cite your mom”!

Much of this can apparently be traced back to a somewhat superficial understanding of what language models actually do. The MLA cites the New York Times for its definition of “generative AI”: “a tool that ‘can analyze or summarize content from a huge set of information, including web pages, books and other writing available on the internet, and use that data to create original new content'”. This is actually not what ChatGPT does. It doesn’t “summarize content”, nor can it summarize the content of “huge sets of information”. Rather, you can prompt it with a short passage from a book or article and it can summarize that.

The MLA agrees with the APA that an AI, like ChatGPT, can’t be an “author”. But neither style guide seems to understand the implications, namely, that you then can’t cite it. APA explains that

the results of a ChatGPT “chat” are not retrievable by other readers, and although nonretrievable data or quotations in APA Style papers are usually cited as personal communications, with ChatGPT-generated text there is no person communicating. Quoting ChatGPT’s text from a chat session is therefore more like sharing an algorithm’s output; thus, credit the author of the algorithm with a reference list entry and the corresponding in-text citation.

It’s great that they don’t suggest citing ChatGPT as personal communication, but what does it mean to cite OpenAI as the “author of the algorithm” when what we’re actually quoting is the algorithm’s output? I have searched the blog for pre-GPT guidelines on citing algorithmic output but came up empty. Here’s the example they give us:

When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).

Reference
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

Is there any context in which this is a useful reference? Do we learn anything about the brain by this means? And if the only thing we’re learning something about is ChatGPT, why do we need to cite it? We’ve already said where the quote came from and what prompted it. What more did the citation provide?

My gut tells me these guidelines will have to be revised in light of how writing practices develop and AI improves.