The Will to Discourse

One way to decide whether you should include an idea in a paper you’re writing is to ask yourself how willing you are to discuss it. You should, of course, also be able to discuss it; that is, you should be aware of your reasons for believing what you believe and have some sense of how someone else might think otherwise. But this issue won’t arise if you have already decided not to engage with someone who disagrees with you. My advice is to leave such ideas out of your scholarly writing.

You’ll never be able to do that completely, but it is worth trying. It is a companion rule to Oliver Smithies’ suggestion to “never write something you don’t understand”. You will find, I suspect, that your unwillingness to discuss something is often grounded in not quite understanding it. You may be quite certain that it is true, but on closer inspection you realize that this is just because so many people you respect are saying it. Worse, you may remember that the only reason you think what you think is that it sounded good in the “hot take” you read a few months ago. Better go back to the source and do a little fact checking. Once you’ve got your facts in order, you may still believe it–after all, it may still be true–but now you also know why. And this, you will sometimes find, has the added benefit of giving you the will to talk to other people about it. It’s at this point that it should go into your paper.

Now, since you originally didn’t find them worth discussing, these ideas often go unnoticed in your writing. The issue will only really come up when you run into difficulties with a sentence, or when one of your readers does. If the paper is already published, it’s of course too late, and you will just have to step up and admit that you hadn’t thought that point through well enough. (There should be no shame in this.) But you might also notice it during the writing and revision process, when you are struggling with how to express a particular thought. That’s when you should take a step back and ask yourself whether you’re actually willing to discuss it. How would you react if someone told you you were wrong? And if your answer is that you’d just stop talking to that person, then I strongly advise you to delete it. The implicit subtext of all scholarly writing is (or, rather, should be) “Here are a few things I’m happy to discuss.”

Please note that I’m not saying you should not hold beliefs you don’t want to discuss. Nor am I even saying that your published work must be entirely insulated from those beliefs. It can certainly be useful to put some distance between, say, your scientific investigations and your religious convictions, so that your results can stand or fall independently of your faith. But, through a long and varied life, most of your beliefs will come into contact with each other at some point and, when they do, they will demand a modicum of consistency. So you will sometimes find that the discussion you are willing to have leads (sometimes surprisingly quickly) to one you don’t want to have. That possibility should not prevent you from writing things down either.

I am suggesting mainly that you let ideas you don’t want to discuss go without saying. If you hold beliefs that you really think can be held without question, then you should give your reader the credit of presumably holding them too. You should write some sentences about ideas you do want to discuss in such a way that they sometimes, even shamelessly, presume things you don’t want to discuss. A reader who would question those things should get the sense, even before raising the question, that doing so will not begin a conversation but end it. You are, in a sense, granting that if you are wrong about this, then your point will fall. But your interlocutor will have to wait until you discover your error for yourself before a retraction is forthcoming. That’s a limiting case, however. Your paper should not consist solely of such sentences. Your readers should not be only those who take your hints about what you don’t want to discuss.

Scholarly writing, I always say, is the art of writing down what you know for the purpose of discussing it with other knowledgeable people. It presumes a will to discuss things. This will, I would add, is grounded in a commitment to what Foucault called “the law of coherence”:

a procedural obligation, almost a moral constraint of research: not to multiply contradictions uselessly; not to be taken in by small differences; not to give too much weight to changes, disavowals, returns to the past, and polemics; not to suppose that men’s discourse is perpetually undermined from within by the contradiction of their desires, the influences that they have been subjected to, or the conditions in which they live. (Archaeology of Knowledge, p. 149)

Of course, he talked about this law with some irony. He was not proposing that we do or should at all times obey it, only that it is presumed in discourse and, especially, in our history of ideas. I think this obligation is worth considering from time to time in your own procedures. When you feel undermined by your desires, the influences that you have been subjected to, or the conditions under which you live, ask yourself whether or not you are open to discussing them. If not, leave the idea out of your writing for now. Come back to it when you’re feeling stronger.

How Do We Know?

This is a question that has occupied me all my professional life. How, in fact, do we know what is true and what is false? I have at times thought about this question in metaphysical terms and at times in sociological terms, at times as a problem of phenomenology and at times as a problem of psychology. I believe, quite generally, that knowing is something we do with our bodies, which is to say with our minds and our hearts, and that we do it together. We can’t know anything by ourselves. But we do have to do something. We can’t leave it altogether to others to know things for us.

All of this is quite trivial, perhaps. But I think our epistemological situation today has a certain urgency. Sometimes it feels like an outright emergency. Once we have raised the question of how we know things, and looked at the chaos of our means to do it, we can easily find ourselves wondering whether it is possible to know anything at all. For my part, I am optimistic. I believe not only that we can know a great many things but that we in fact do know a great many things. And I’m confident that we will learn a great many more. When I say “we” I mean us as a culture, of course, but also you and me. I am entirely hopeful that I’m going to learn something in the years to come.

How, then, do I propose we do this? My epistemology these days is built out of three basic competences. To know something we have to be able to (1) make up our minds, (2) say what’s on them, and (3) write it down. More specifically, we have to be able to (1) form justified, true beliefs, (2) discuss them with other knowledgeable people, and (3) compose coherent prose paragraphs about them. While I’m happy to discuss the standards against which we might evaluate these competences, and while I of course grant that mastery is always relative, I will not consider someone knowledgeable (not even you, dear reader) if they declare that they lack these abilities. If you haven’t made up your mind about something, are unwilling to discuss it, and/or refuse to put it in writing, I will not grant that you know it. I think it would be great if we approached all claims to know things in this way.

Remember that every time someone expects you to believe them, they are claiming to have knowledge. If not (if, when you scratch them, they say that they don’t actually know), then you have no reason to believe what they are telling you. The reason for this isn’t really philosophical but rhetorical: when someone claims to know something and asks you to believe them, there is a conversation you can have to help you make up your mind. You can ask how they know. What tools and materials did they use to attain this knowledge? What theories framed their inquiries, what methods guided their investigations? What concepts did they use to grasp the data they were given? And their stories here will be plausible to you or not. You may find the stories so convincing that you simply believe them then and there. Or you may choose to retrace some of their steps, to try to replicate their findings. You may suspend belief for as long as you like. At some point, however, if you want to know this thing, you will have to make up your mind.

Again, trivial notions that, to me, have some urgency in these times. I sometimes sense an indignation in my fellow human beings that it should be this hard to know things. “Why can’t we just believe what we are told?” people seem to say. They seem disappointed in their “post-truth” politicians and “fake news” journalists. They seem to feel entitled to being told the truth at all times. They won’t accept the burden of checking the facts for themselves. We seem to have lost our sense of charity about the “honest error”, both in the thought and the speech of our interlocutors and in our own hearing and thinking. We imagine everyone is always either speaking their minds plainly or deliberately concealing what they think. We imagine that what we hear is always what was said, and that what we understand by it is always what was meant. We have lost our taste for the difficult work of believing things.

The knowledge of a culture is the product of countless well-meaning, but often misguided, people trying to figure out what the facts are, working among countless more who are trying to accomplish their goals, with or without our knowledge. The facts, of course, don’t make themselves known, even under the best of conditions. We must do it for them. But at some point, it sometimes seems to me, a generation decided that the work was more or less finished and there was nothing left for us but to believe. I think we’re beginning to see that this won’t work. I’m hopeful we’ll soon learn how to know again.

Flipped, Blended

There’s a lot of talk about “blended learning” and “flipped classrooms” these days, not least here at CBS. I, too, deliver much of my writing instruction using a combination of online and in-person interaction. But I sometimes wonder if we’re not a little too excited about all this. I mean, hasn’t education always involved a “blend” of media, and hasn’t the classroom always been “flipped”?

Consider the most traditional of classrooms. The students have been given an assigned reading and are told to show up to class prepared to discuss it. How is this less “flipped” than asking the students to watch a lecture or documentary at home and having them show up for class prepared to engage in some “interactive” task based on it? Likewise, isn’t traditional education already a “blending” of reading assignments, writing assignments, group work, classroom discussion and lecturing? The real issue, it seems to me, is not whether we are using multiple modalities to educate the students, but whether we can get them to do the activities we assign and show up for the encounters we have planned.

The introduction of IT has, indeed, changed this game somewhat. There is, especially, a tendency to immerse students in one or another form of “social media”, through which their learning process is continuously in contact with that of their peers and, at least as it is sometimes experienced, that of their teachers. Such contact is important, but I think we do well to reflect on whether it really does have to be continuous. I don’t want to get into the very deepest layers of this thorny issue, but we do have to be mindful of the difference between the slogans “Social media means you never have to be alone” and “Social media means you can never be alone.” One problem with our online lives is that they don’t really know when to leave us alone. Students participating in a course cannot be expected to react to updates at all hours, nor can they expect to receive information that was given out at a particular time and place that they were prevented from attending. We need to maintain a certain amount of order.

I think this is an important challenge for education in the future. While I am embracing the new technologies in my own practice (I’ve been blogging since before it was mainstream), I still defend a lot of “old school” sensibilities: brick and mortar, chalk and talk. I don’t, in particular, like the anti-lecturing rhetoric of those who are promoting the new pedagogies, just as I’m wary of the anti-prose rhetoric of the promulgators of “new literacies”. If we really did train students’ ability to read treatises, write essays, and attend lectures, I think our culture would be stronger for it. The ability to read a 25-page chapter, write a 5-paragraph essay, or listen to a 45-minute lecture is not trivial. Training these abilities shapes minds of a particular kind. I believe those minds have a value.

Charity and Literacy

“Charity I have had sometimes…” (Ezra Pound)

I’ve been thinking about Twitter lately. When Carlo Scannella suggested, back in 2008, that Twitter might be an “oral” (sub-)culture, not a strictly “literate” one, the platform was still in its infancy. Like him, I think a strong case can be made that the way people ordinarily use the platform does not exhibit their literacy. That sounds almost like I’m trying to be funny, but there’s a serious meaning behind it. In this post, I want to say a little more about that. But keep in mind that Scannella has made a similar argument for blogging; I’ll look at that another time, as a way of picking up a project I started on my now-retired blog but never fully developed.

Let’s begin with the strongest argument for considering Twitter a site of literacy practices. There are explicitly literary projects on the platform, like those of Teju Cole and Eric Jarosinski. These are clearly meant to be read, however ironically, as writing within a symbolic order that connects them to the full corpus of our literature. In other words, Twitter certainly can be a site of literacy practices; it can even be used to produce literature. Of course, you don’t have to produce literature in order to demonstrate literacy, so the question remains: is Twitter willy-nilly a site of literacy, simply, for example, because it requires people to use written signs?

Well, first of all, it doesn’t actually require that, does it? Twitter now lets people share images, still and moving, as well as recorded sounds, as easily as strings of words. It is possible to retweet someone else’s picture, with a comment that (literally!) is nothing more than a thumbs up. Being able to use Twitter, then, requires a very limited mastery of written signs. But that’s too easy an argument, I’ll grant. The interesting question (especially back in 2008) is whether Twitter is a literate culture even when it uses written signs, letters, words, sentences.

My first hint that it is not comes from my own experience. I have found distressingly many typos in my tweets. Apparently, I don’t proofread them properly; I just fire them off, much as I sometimes deploy imperfect sentences in conversation or extemporaneous lecturing. I treat my sentences on Twitter as utterances: when I begin one, I don’t know how it will turn out. I don’t seem to feel nearly as compelled to edit them as I do when I’m actually writing. I’m also not especially bothered by other people’s typos. While spelling mistakes and missing words might make me stop reading a book someone had published, they don’t make me abandon that person’s Twitter feed. They seem perfectly understandable and, where errors make a tweet difficult to understand, I proceed as I would in conversation, asking my interlocutor to clarify what they must have meant. I don’t “correct” them. I think that’s a telling sign.

The same can be said about serious disagreements over facts and ideas. I approach any particular utterance as a tiny piece of a larger puzzle that doesn’t have to make sense on its own but will only be given a meaning as the conversation proceeds. Twitter is, ideally, located on what Rosmarie Waldrop called “the lawn of excluded middle”, “where full maturity of meaning takes time the way you eat a fish, morsel by morsel, off the bone.” Or, as Pound would perhaps have suggested, we must read each other’s tweets with charity, for maximum truth and meaning. Perhaps “it coheres all right / even if my notes do not cohere.”

But now we come to the crux of the issue. Are we engaging correctly with what Twitter is when we seize on an individual tweet and denounce it as an expression of someone’s racism, sexism or other vile disposition? Should we not approach an out-of-context tweet initially as a mere fragment of a conversation we’ve overheard? “Did s/he really say that?” does not have to be a rhetorical question. Perhaps there’s a typo or a missing word that accounts for the apparent error in thought, i.e., the “incorrectness” of opinion. Perhaps it “coheres all right” in the larger whole of a detailed argument.

One of the characteristics of a literate culture is that it has a “record”, a body of “documents”. But not every utterance of a literate culture is, in fact, a document, even when, technically speaking, “written down”. Not every receipt or shopping list or scribbled note to oneself is a record of what a culture thinks and feels. We are pretty good at distinguishing passing remarks from serious propositions in speech. (We get a bit confused, however, when someone makes a recording of them.) Perhaps we need to practice greater charity in our approach to tweets. They do not document what any of us believes. They are merely part of the stream of language within which we have to find our composure. It is when we find it that we can produce a proper “text”, i.e., practice our literacy. Only then are we actually writing.

Curb Your Impact

Researchers are often advised to have an impact “beyond academia”. Indeed, Andrew Gunn and Michael Mintrom (2016) see the so-called “impact agenda” as focused on “university-based research that has non-academic impact, meaning it delivers some change or benefit outside the research community.” The basic idea is that the value of our universities lies in their contribution to society, for which the economy often serves as a proxy.

The EU’s 2020 target is to spend 3% of GDP on research and development, about a third of which is likely to be invested in university-based research (the rest being government and business-sector R&D). While 1% of GDP may not seem like much, it’s certainly a significant amount of money (about €150 billion) and it’s reasonable to expect a return. Indeed, Gunn and Mintrom detect “an impatience to see rapid and significant returns on the investments being made.” This agenda can certainly be felt by researchers, whose performance is increasingly measured through indicators of societal impact. They are now acutely aware of their stakeholders outside the university and they are earnestly engaged in catching those stakeholders’ attention. But perhaps the impatience that Gunn and Mintrom detect in all this activity should give us pause.
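
To make the sums concrete, here is a minimal back-of-the-envelope sketch. The EU GDP figure of roughly €15 trillion is my assumption (approximately the 2015 level, not a number from Gunn and Mintrom); the 3% target and the one-third university share are the ones cited above.

```python
# Back-of-the-envelope sketch of the sums at stake.
# ASSUMPTION: EU GDP of ~EUR 15 trillion (roughly the 2015 level).
eu_gdp_eur = 15e12        # assumed EU GDP, in euros
rd_target = 0.03          # Europe 2020 target: 3% of GDP on R&D
university_share = 1 / 3  # ~a third of R&D goes to university-based research

university_rd = eu_gdp_eur * rd_target * university_share
print(f"University-based R&D: ~EUR {university_rd / 1e9:.0f} billion per year")
print(f"As a share of GDP: {university_rd / eu_gdp_eur:.0%}")
# -> ~EUR 150 billion per year, i.e. about 1% of GDP
```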

Consider another target in the Europe 2020 strategy: 40% of people aged 30–34 are to have completed higher education. Already today, about 25% of 20- to 29-year-olds are enrolled in university in most EU countries, so this target (which is already close to being reached) is entirely realistic. In any case, a few percentage points either way won’t affect the point I’m going to try to make. With such a high percentage of the population being either the product of higher education or in the process of getting such an education, it seems odd to get the people (researchers, faculty) who are spending a mere 1% of GDP to invest a great deal of their time (a portion of that economic activity) in thinking about how their research makes a contribution “beyond academia”. If the researchers confined their efforts to their peers and students, their impact would by no means be insignificant.

With the passage of time this impact would spread throughout the population as the students graduate and begin to apply the skills, facts and theories they have learned. While this will be slower than the impact a piece of research might have by directly shaping government policies and corporate strategies, it will be deeper and more lasting. This is especially true when we consider that the 40% of thirty-somethings who have completed higher education will presumably have a disproportionate impact on GDP (i.e., as a group they will contribute more than 40% of their cohort’s GDP contribution). Finally, letting students round off their understanding of the theories in “the school of hard knocks” before those theories are implemented at the policy level would avoid some of the worst excesses of Ivory Tower thinking.

Imagine for a moment what would happen if academics came to measure themselves only by the external performance indicators stipulated by the “impact agenda”. (Perhaps we could add to this a concern, as regards their teaching, mainly with student evaluations.) Suppose it were a matter of complete indifference to them what the students actually learned (so long as they were “satisfied” while at university) and what happened to the society and economy in the long run (so long as they were feted by bureaucrats and entrepreneurs). In the short term, it is possible that high-quality research would inform well-crafted policy initiatives. But, after a time, researchers would learn (for they are not stupid) how to game the system to maximize their immediate, apparent impact. Mediocre but shiny ideas would crowd out the ideas that might actually work in practice. Many mistakes would be made. Meanwhile, since the minds of the students had been mostly neglected, the next generation of researchers would not be up to the task of delivering what the best minds of the last generation were capable of. The race to the bottom would be on.

To avoid this, I would recommend evaluating academic research by measuring the value of higher education according to its long-term impact on the economy and society, not the immediate, measurable effects of discrete pieces of research on the policy process. I think the short-term indicators (student satisfaction, policy impact) are ultimately distractions, and we might find that academia is performing well by them while driving the culture into the ground. If researchers think the most important audiences are outside their own disciplines and classrooms, they will stop giving their best thinking to their students and colleagues, waiting instead for them to occupy some position of power from which they can commission a policy paper. Education becomes merely a way of conditioning students to draw on “academic” expertise when they get out into the real world. “Lifelong learning” becomes a kind of conditioned dependence on epistemic authority. And peer review becomes the mutual appreciation society it has long been accused of being.

In short, I propose we curb our socio-economic enthusiasm, our pursuit of “impact”. Let’s direct our energy back into academia and wait for it to diffuse organically throughout society. As T.S. Eliot once said, you don’t make flowers grow by pulling on them.

Update: See also Tina Basi and Mona Sloane’s post at the LSE Impact Blog.