That’s obviously not a very good question. When professors complain that their students “can’t write”, what they usually mean is that they’re getting worse, often to some distressing degree. But they don’t mean that all the students are getting worse, of course; there will always be students in our classes who write clearly and lucidly, even when they are struggling with difficult material. And the mere fact that some students “can’t write a sentence to save their lives” is also entirely normal — the other end of the same normal distribution. What they are worried about is the bulge in the middle, which they feel is moving in the wrong direction. As I argued in my last post, this feeling might not be grounded in rigorous scientific research, but it is usually not unfounded. It’s based on what is often decades of experience with the writing of cohort after cohort of undergraduates (and even graduate students), often at the same institution in the same program, and often taking the same classes covering the same content. Every now and then, someone sounds the alarm. They are, inevitably, called “alarmists”.
They aren’t always professors, however. Consider another group of stakeholders in student writing who can also boast of decades of experience judging the writing abilities of cohort after cohort of students — the employers of university graduates. Leaving aside the employers of graduate students, i.e., dissertation supervisors, consider a complaint we sometimes hear from industry representatives. “Ten years ago, if we hired someone with decent grades from one of your top programs, we could be sure they could write with clarity and confidence. These days, we have to test their writing ability before we hire them. What gives?” Notice, again, that they’re not making an absolute judgment about all students. They’re registering a trend — though in this case one that affects the top end of the grade distribution, which, they note with consternation, doesn’t overlap neatly with the distribution of writing ability. They wish it would. Like it did in the old days. (Update: To get a measure of this problem, see John Almy’s piece in IHE, where he cites an Inc. article reporting that US companies spend over 3 billion dollars annually on remedial writing instruction.)
Because we’re talking about a trend in the distribution of an ability (writing) that can be leveraged (whether by educators or employers) in other pursuits (like knowing, creating, and selling things), this discussion can’t be approached as an attack on, or defense of, students as a whole. And this is why I don’t find the dismissal of “critics of student writing” as “alarmists” convincing. Not only does a general defense of the whole population of students fail to address the problem that the critics are interested in, but it is also completely implausible to me that the student population, in its entirety, has been subject to no change in writing ability over the past, say, three decades. Given the profound changes in the composition of the student body, the changes in university culture and pedagogy, and the rapid technological transformation of media, it would be very surprising if student abilities hadn’t adapted. Some of the changes speak to improvements (presumably the goal of pedagogical innovation); others suggest the opposite (here we have the old chestnut about the effect of social media on attention and literacy). But, surely, given all of the change we see around us, we wouldn’t expect the students’ writing skills alone to remain static. Surely, we would expect some change here too.
In her response to Robert Zaretsky (and his ilk), Elizabeth Wardle deftly sidesteps this issue by asserting, on the basis of “decades [of] research about student writing”, that “students are what they’ve always been: learners. There is no evidence that student writing over all is any better or worse than it has ever been. What is true is that faculty members have been complaining about student writing for as long as students have been writing.” The only studies I know of that somewhat support the claim that students are “no better or worse” than they’ve ever been show that they make about the same number of language errors, while the kinds of errors have been changing. But until someone shows me some medical-grade randomized controlled trials of student writing, I’m not really buying that we’ve adequately studied the problem. In fact, the “evidence” for the perennial nature of faculty complaints is probably more rigorous than the evidence we have to base our judgments of student writing on. To say that there is “no evidence” for a decline is merely to say the problem is very difficult to study, and we’re probably going to have to rely on the expert testimony of, you guessed it, faculty.
I think there’s good reason to think that the quality of student writing is declining. I also think that experienced faculty members are qualified, by precisely that experience, not just to render a judgment, but to sound the alarm when they feel the need. And when they do sound the alarm, I think it is utterly counterproductive for composition teachers and directors of writing centers to tell them that they are simply wrong, “plug ignorant about how writing works,” and that they should let up on their students. When they do this, they only confirm what Robert Zaretsky suspects. “We might urge the student to pay a visit to the writing center. Such centers, however, are as easily abused as used, often reduced to the pedagogical equivalent of the confessional, a place where students are absolved, not cured, of their writing sins.” Academic writing instruction should address the concerns of faculty, not dismiss them. The truth is that many students need to work harder at their writing if they are to reach a level that will impress both their current professors and their future employers. The number of students who need to do this work, and who don’t do enough of it, is, I suspect, growing in some programs at some institutions, while, in others, the trend is more promising. It depends, in part, on the students themselves, but also on their teachers, both of content and of writing. And it depends especially, I want to say, on the coordination that is achieved between content teachers and writing teachers. This rhetoric that pits us against each other is simply not helping matters.
I came back to this post as I stumble closer to the end of term, and most especially, to the conclusion of a writing-intensive capstone course for fourth-year undergraduates. Given the pandemic lockdown, I was forced to restructure a course that requires a serious group writing project — a four-person venture plan with discrete milestones, rewriting, oral challenges and defense, and the requisite intra-team discourse — toward more asynchronous writing and review. This meant two additional HBS case studies with peer review in written form. In the latter pedagogical mode it is more obvious that the distribution of writing skills frustrates me more than the mean. What I “see” is that the distribution looks less like a Gaussian curve or a uniform array than a “bathtub” bimodal distribution. The students who have developed good writing skills and habits are better peer reviewers (more caring and helpful). They are also quite capable writers — a pleasure to be engaged with. The students at the low end of the distribution may be coachable, save for their nearly complete lack of interest. They may not be plug ignorant, but the outcomes nearly are.
That’s a really interesting observation, Randy. Thanks. It is possible that what’s distressing to faculty is not that the median is moving to the left, but their sense that the student body is “coming apart” (in Charles Murray’s sense?). This is more worrying because, as you point out, it suggests completely different pedagogies. The different kinds of students you’re describing should probably not be taking the same class.
I’m not sure what this does for my “grading on a curve” hobbyhorse. But for now I’m going to speculate that one cause of the problem is that we don’t grade competitively. We don’t reward that marginal bit of extra effort that gets a student just a little further to the right in the distribution. And we don’t give out enough Cs, Ds and Fs to indicate how students are “really” — i.e., “relatively” — doing.
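To make the idea concrete, here is a minimal sketch of what rank-based, competitive grading could look like. The letter quotas and the scores below are entirely hypothetical, not a policy I am proposing; the point is only that under a curve, grades track a student’s position in the cohort, so that marginal bit of extra effort can move someone into a better band.

```python
# A minimal sketch of percentile-based ("curved") grading.
# The quotas and scores are hypothetical illustrations, not a real policy.

def curve_grades(scores):
    """Assign letter grades by rank within the cohort, not by absolute score."""
    # Hypothetical quotas: top 15% A, next 25% B, middle 35% C, next 15% D, bottom 10% F.
    quotas = [("A", 0.15), ("B", 0.25), ("C", 0.35), ("D", 0.15), ("F", 0.10)]
    ranked = sorted(scores, reverse=True)   # best scores first
    n = len(ranked)
    grades = {}
    start = 0
    for letter, share in quotas:
        end = min(n, start + round(share * n))
        for s in ranked[start:end]:
            grades.setdefault(s, letter)    # tied scores keep the better grade
        start = end
    for s in ranked[start:]:                # any rounding remainder falls to F
        grades.setdefault(s, "F")
    return [grades[s] for s in scores]

# Two students separated by a single point can land in different grade bands,
# which is exactly the marginal reward the post has in mind.
print(curve_grades([92, 91, 85, 84, 77, 76, 70, 60, 55, 40]))
```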