
The Future of Objectivity (3)

Some of the most successful challenges to the objectivity of scholarly writing have come from feminist thinkers. Amy Katz Kaminsky raises the issue briefly in her contribution to The Future of Scholarly Writing and suggests it represents a tension between “academic” and “ethical” principles. Fortunately, she asserts the importance of “reconciling the two” (184) and not, as others have, of abandoning objectivity altogether in pursuit of some higher aim. As in my previous engagements with this book (see parts 1 and 2 in this series), please remember that I’m taking Kaminsky’s views on objectivity out of their larger context, both of the chapter they appear in and the book that it, in turn, is a contribution to. I will eventually read this book from front to back like we normally do.

Kaminsky begins (183-4) by questioning traditional standards of “mastery” and “authority” in scholarly work. Women’s Studies, as she points out, is a relatively young field and many of its practitioners were therefore trained in other disciplines. But she notes that authority can be problematic in any case, when, for example (I imagine), a scholar of Latin American literature who does not have Spanish as a first language proposes to teach, say, Latin American students. One is always, in this sense “between cultures”, she suggests. Moreover, the “stark” history of the relationship between the United States and Latin America makes this cultural encounter even more difficult to navigate.

The notion of objectivity comes up when she turns her attention to the legitimacy of feminist scholarship in the academy. The problem, she says, has been one of “carving out a space for situated knowledge … in a realm where objectivity and neutrality have been key values” (184). She argues that the “neutrality” that is invoked is often simply the “generic masculinity” of the “dominant group”. This defines a “norm” and maintains the “status quo” that it is the goal of feminist scholarship to change. Presumably, “situated knowledge” is neither objective nor neutral because it involves something like Susan McClary’s “particular investments” in political and ethical projects of various kinds, which, the argument might go, are inexorably partisan and subjective.  The challenge, then, is to bring about a transformation of dominant group commitments (shades of Kuhn) without losing the legitimacy that adhering to those commitments confers. This is arguably the dilemma of all social change projects.

It is not entirely clear in this passage what the endgame is, only that Kaminsky does not wish to maintain the status quo. I can’t tell whether she wants to maintain a semblance of objectivity and neutrality only long enough to do away with it, so that the future of scholarly writing will be liberated from the “high seriousness of academic standards” and be free to pursue more “situated” concerns, or whether she wants merely to challenge the “masculinity” of the current norms and achieve a new kind of neutrality (gender neutrality?) with its own kind of seriousness even after “the foundations of those very standards” have been challenged. I do know that some of the conversations about the current replication crisis have turned on whether traditional criticism, which involves directly pointing out the errors in the work of other scholars, is actually a distinctly male form of bullying. I hope the pursuit of objective truth is not destined to be seen as a “generic masculine” form of harassment.

Like I say, I am entirely encouraged by Kaminsky’s suggestion that we must find a way to reconcile traditional norms of objectivity and neutrality with the increasingly political and engaged desires of modern academics, who, as if to adopt Karl Marx’s famous slogan, are not content to interpret the world, but hope also to change it. I’m not sure that this tension is as gendered as some people seem to think it is (I’m not ready to say how gendered Kaminsky thinks it is)–Marx, after all, was very much a man, and revolution has always, it seems to me, had a certain machismo about it. But I will admit that, at this moment, I am more concerned about preserving, and even conserving, the objectivity and neutrality of our scholarship in the face of the “post-factual” dystopia that seems to be looming, than I am about finding room for the “situated knowledge” of any number of political projects that seek the authority of “academic” work.

[Part 4]

A Rejection Plan

Most scholars have a publication plan. For a given research project, they have a list of planned “deliverables”, specifying some number of articles to be published in specific journals. Collaborative research projects, too, have such a plan, distributing responsibility for authoring and co-authoring a series of papers among members of the research group. On a still larger scale and over a longer term, departments and whole universities have goals defined in terms of a certain number of publications in a certain set of journals. Researchers internalize these goals and make a plan for themselves for the coming year, five years, and so on. All of this is perfectly reasonable (or at least rational) in a time when publication has such a decisive influence on the course of one’s career.

But a publication plan has the very important drawback that it is almost inevitably an overview of one’s coming failures. The most attractive journals are characterized by very high rejection rates. One cannot simply plan to be published in a good journal, just as one cannot just plan to make an important scientific discovery. One can hope to do these things, to be sure, but one cannot simply undertake to get oneself published. It’s not the sort of goal that a deliberate plan can help one to accomplish. Success is almost entirely out of one’s own hands.

For many years, therefore, I have argued that one should plan to submit work, not to publish it. Indeed, when I talk to department heads and university administrators I encourage them not to keep asking their faculty members what they have published, but what they have submitted. In this regard, I’ve compared myself to Moneyball’s Billy Beane. A researcher who is regularly submitting work for publication is worth more to a department than one who occasionally publishes in a top journal. (I’m happy to discuss exceptions here, but do note that the willingness to discuss what one knows is an important part of being a scholar. Those who rarely submit work for peer review are not really demonstrating this willingness.) A submission plan, moreover, is one you control yourself. While there are all manner of barriers to publication, no one can prevent you from submitting a paper when you have done your work as planned.

I recently had a conversation with an author that suggested an even more striking, perhaps even jarring, image. Make a rejection plan. That is, plan to have your paper rejected by three or four journals before it is published. Normalize the experience of rejection as something to be expected. Write your paper and submit it to the best journal you think it is suitable for. But make sure you have a list of three or four other journals that it is also suitable for. When you get rejected, incorporate whatever criticism the rejection involved and send the paper on to another journal on the list. Don’t give up until the list is exhausted, but perhaps make sure that there’s always some kind of published endgame, even if it is merely making the paper available in an institutional repository. As Brad Pitt says in Moneyball, “It’s a process.”

Obviously, there will be exceptions. If a reviewer convinces you that your study is fundamentally flawed, you might decide not to waste anyone else’s time with it. But most people retain some confidence in their work, even after a reviewer has found shortcomings in it or an editor has deemed it a poor “match” for the journal. Our general attitude is that errors can be fixed and there are other fish in the sea (or perhaps other seas in which to swim).  It is rare that we learn from any one experience with a journal that our research is altogether worthless. In fact, I would argue that to take this as the lesson of any one rejection is always a mistake.

Here’s another interesting feature of this plan: when you get a “revise and resubmit”, you can decide whether the suggested revision is worth the effort when compared to making just a few minor changes and sending it to the next journal on your list. It lets you gauge the amount of effort you are willing to put into the next step.

But the most important reason to think in terms of a series of predictable rejections, rather than planning for publication in a particular journal, is that it forces you to write your paper for an audience that is more general than a particular journal’s reviewers and editors. In fact, it gets you to think in terms of your actual readers rather than the “gatekeepers” that stand in your way of reaching them. You will have to write a paper that, with minor adjustments, is relevant to one of several journals, all of which your peers (the members of your discipline) read and cite in the course of their own work. Perhaps ironically, writing the paper in this spirit–with no one particular journal in mind–will produce prose that is more authoritative, more objective, more “classic”. It is altogether likely that your reviewers will prefer a paper that wasn’t mainly written to “get past” the particular filter they happen to represent. It will have been written to accomplish something on the other side of it.

The author and I quickly agreed that this was a refreshing way to look at the publication problem. It recognizes that the most likely result of submitting to a journal (in which you’d like to be published) is rejection. It is altogether sanguine about this prospect. It increases the odds of publication by planning the next step after rejection before that outcome even arises, an outcome that can’t be called “unfortunate” because it is so very probable. This both tempers the disappointment of rejection and increases the joy of acceptance. Unlike a publication plan, a rejection plan is not a list of planned failures. It is a plan for how to move forward after an entirely imaginable outcome occurs.

Knowledge and Imagination

Simplifying somewhat … In his Critique of Pure Reason, Kant was trying to understand how imagination makes knowledge possible. The implication, of course, is that without imagination there can be no knowledge, i.e., it would be impossible to know anything if we could not imagine anything. Indeed (and I think I’m remaining faithful, in my simple-minded way, to Kant here) it is possible to know things only in so far as we imagine them as objects. Objects are just things construed in terms of the conditions of the possibility of knowing them, and those conditions are, let’s say, “imaginary”.

I’m not just trying to be profound here. I think many students and scholars today have forgotten the importance of imagination, not in the romantic sense often associated with artists, but in the more mundane sense of picturing facts. “We make ourselves pictures of the facts,” said Wittgenstein. And a fact is merely a collection of things in one of many possible arrangements. The possibility of these arrangements is determined by the objective properties of the things in question. That is, an object is a thing construed in terms of its possible combination with other things to form facts. To know a thing, then, requires us to literally imagine its possibilities.

Like I say, I think we need to remind ourselves of the importance of imagination. We need to keep it in mind when reflecting on what we know. In fact, there are a number of cognitive steps between our knowledge and our imagination that are worth keeping in mind as well.

First, remember that you can’t know anything if you don’t form a belief about it. This belief will, of course, have to be true in order to really count as knowledge, and in that sense we have to accept that we can’t ever be certain that we know anything. The best we can do is to have good reasons for holding beliefs and be prepared to abandon them when better reasons against our beliefs emerge. At the end of the day, whether or not we really know something depends on the facts, and not only are they not always in plain sight, they have a habit of changing. What’s true today may be false tomorrow, and the things themselves seem in no hurry to inform us of their new arrangements. Nonetheless, at any given moment, we must hold beliefs, and those beliefs, when true and justified, are our knowledge. If we believe nothing, we know nothing.

But this should not render us simply credulous. We should not easily believe the things we want to know about, for appearances can be deceiving. Before forming a belief about something, therefore, we must investigate the matter carefully, and this will require us to, first of all, understand what we are looking at, or what we are being told. Often we will form a belief on the basis of nothing other than someone else’s testimony, often in writing. We read an account of a historical event or the analysis of a dataset and we form a belief about the reality it represents. Here it is clearly not enough to believe; we might well misunderstand the conclusions reached by the author. Even if the account or analysis we’ve read is perfectly correct, and even if the author therefore holds a true belief, i.e., knows what they’re talking about, we might yet form a false belief simply by misreading the text. Without belief there can be no knowledge, then, but we should not believe things we cannot understand. As scholars, in particular, we should not believe things we have not made a serious effort to understand.

And this brings us back to where I started. In order to understand something we must be able to imagine it. We should be able to “picture” the situation or population that we are forming a belief about. What’s really happening here is that we are ensuring that we are not forming a belief about an impossible arrangement of things. To say that something is imaginable is simply to say that it is “possible” in the broadest possible sense. It is logically possible. We can imagine a horse with a horn (a unicorn) or a horse with wings (Pegasus) but not a horse that is both all white and all black (not like a zebra, but like one horse that would have to be two horses at the same time). But notice that fantastic creatures are only “possible” on the background of a good deal of ignorance. An evolutionary biologist might have a much harder time imagining a winged horse than you or me simply because they know that wings evolved from arms, so a four-legged creature with an extra set of wings doesn’t really fit anywhere on the tree of life. I don’t know if it’s easier to imagine a unicorn in nature, but I suspect it is since there are rhinoceroses and reindeer in nature already. You and I can imagine Pegasus because we don’t know that much about horses and wings. A biologist, however, must suspend disbelief.

These considerations, like I say, are pretty elementary, simple-minded even. But they do suggest a heuristic for thinking about what you know and how you know it: imagine, understand, believe. Don’t claim to know something you don’t believe; don’t believe something you don’t understand; and don’t try to understand something you can’t imagine. As a scholar, your job is to get things to make sense. As a student, you’re trying to make sense of things too. It takes imagination to do it well.

(See also: my post on knowledge, imagination and intuition, feat. Ezra Zuckerman and Catherine Turco.)

Writing and Drawing

“Is a bit of white paper with black lines on it like a human body?”
Ludwig Wittgenstein

I know a painter whose instinctive response to people who claim they don’t know how to draw is: “How do you see?” I sometimes feel the same way about people who claim they can’t write: How can you think? How can you be sure you know anything at all? Here’s a post I wrote on my old blog on this theme:

One sense that the OED gives to the verb “to draw” is “to make (a picture or representation of an object) by drawing lines.” There’s something unsatisfactory about the circularity of this definition (it uses the word “drawing” to define the verb “to draw”) but I suppose we all know what it means. To draw is to make a picture of an object out of lines, and a picture is a two-dimensional representation of an object (or scene). The lines are important. A photograph, though two-dimensional, is not a drawing, nor is a painting (which makes the picture out of broad and fine strokes rather than lines).

The status of the “object” in this definition also needs some clarification. After all, it is possible to “draw” a unicorn, so the object in question need not actually exist. You can draw a line or square, too, so it doesn’t have to be three-dimensional, though the representation will always be two-dimensional.

For some time now, I have been trying to get writers to understand their work in similarly straightforward terms. They have some object in mind, and they want to render it on the surface of the page. Their object is often four-dimensional—a story that unfolds in space and time, for example, or a data set from a time series—but their “picture” (the writing not the drawing) is always one-dimensional. Writing is linear: one word follows another in a sentence. One sentence follows another in a paragraph. Calligrams and other stunts notwithstanding, the sense of a piece of writing is whatever emerges from reading the words in an order determined by convention.

Just as the meaning of one line in a drawing depends entirely on the meaning of the lines around it, so, too, do the words in a piece of writing depend on the words around them. Writing and drawing are both arts of arrangement. If you want to master either art, it is worth approaching it in the simplest form first. Consider the problem in terms of “marking up” a piece of white paper, either with two or three pencils of different grades (perhaps also an eraser), or with the letters of the alphabet and basic punctuation marks. (I would include italics among your basic resources for writing, but not boldface.) Imagine the drawing occupying about two thirds of the space of the page (leave a lot of white space) and imagine a paragraph of at least six sentences and at most two hundred words, in a nice easy-to-read serif font, double-spaced, with no right justification.

The challenge is to render an object or fact accurately in that form and to do so within a manageable amount of time—twenty-seven minutes, for example. If you don’t choose something to draw—a hand, a face, an apple, a cup—you can’t expect to succeed. The same is true of writing. Choose some fact you know to be true or some event you know has occurred. Then describe it; write it down. It will help you immensely if you choose the fact to write down (or even the object you want to draw) the day before. This will give your subconscious time to prepare.

Once you have made your attempt, step back from it and look at it, or read it out loud. Do you like the way it looks or sounds? Consider again the object or fact or event you were trying to represent. Did you do it justice? Be honest with yourself, but not mean. Don’t dwell on it too long. Tomorrow, do it again.

Power Poses and Learning Postures

There’s a long and interesting article in the New York Times Magazine that all researchers in the social (and perhaps all) sciences would do well to read. It’s about the “revolution” in statistical methods that has been going on for some time now and that we ignore at our peril. (The key text is already mentioned in my readings section.) But it’s also about more inframethodological concerns, specifically the way we deal with our mistakes.

The article’s author, Susan Dominus, clearly has a great deal of sympathy for the predicament that her subject, Amy Cuddy, has found herself in. As a result, we get a great deal of information about Cuddy’s emotional response to having her work on “power posing” criticized in a very public way. I strongly recommend reading Andrew Gelman’s reflections on the article and the issue at his blog as well (also on my blogroll, of course). There’s some lively discussion in the comments, which is both a discussion of critical posture and a series of examples. Indeed, I think this whole thing is a master class by Andrew Gelman in giving and taking criticism!

For my part, I think Cuddy should just have acknowledged that the effect of power posing has not been scientifically demonstrated after all and stayed on the tenure track. I don’t want to get too much into it here, but I do want to make a confession of sorts. In my writing seminars I actually recommend a form of “power posing” that I learned from Benjamin Zander:

My version of this advice isn’t about making mistakes but about discovering you don’t know what you are talking about as you begin your writing moment. Don’t put your head in your hands and moan about how stupid you are. Throw up your arms and say, “Interesting! Ignorance!” and then spend 18 or 27 minutes exploring the depth and breadth of your own unknowing. Ignorance is an important experience to face in research; indeed, it should be a familiar one. People who are afraid of their ignorance will have a hard time learning anything. Let’s call this the Learning Posture.

Now, I’m careful not to claim that I have science to back me up on this. It just strikes me as a good attitude to have, a good pose to strike, when you’re trying to write down what you know. And, as Zander points out, it’s an excellent attitude to adopt when you make mistakes. Be fascinated by them! When you make them, get into them, be curious about them, try to figure them out. This is where you’re going to learn something.

Ironically (and as Dominus begins by suggesting), perhaps Cuddy should have taken her own advice and struck a power pose when she began to receive criticism. “I’m wrong? How fascinating! Let’s get into it.” This is certainly what I recommend doing. Hopefully that is also the lesson that you, dear reader, will take away from all this.