A Dim View of Criticism

There’s been a lot of great discussion over at Andrew Gelman’s blog in the wake of Susan Dominus’s piece in the NYTimes about Amy Cuddy and power posing. I wrote about it here when the story broke, and Andrew has since published a number of posts about criticism in science (see this one and this one in particular). It reminded me of a post I wrote six years ago while reading Michael Lewis’s The Big Short, which I want to re-purpose for this blog today.

Lewis’s book is about the Wall Street outsiders and oddballs who “shorted” (i.e., bet against) the subprime mortgage market and made a killing when it finally collapsed. Interestingly, after they had decided that the market was going to collapse, it was not, actually, a straightforward matter to bet against it. Had they thought that a company was going to go bust, there’d be a standard way of making money on that belief: they could borrow stock in the company, sell it, and then wait for its share price to crash. At that point, they could buy the shares back (cheap) and pay off their debt. But, as Lewis points out, things were very different with mortgage bonds:

To sell a stock or bond short you need to borrow it, and [the bonds they were interested in] were tiny and impossible to find. You could buy them or not buy them but you couldn’t bet explicitly against them; the market for subprime mortgages simply had no place for people in it who took a dim view of them. You might know with certainty that the entire mortgage bond market was doomed, but you could do nothing about it. (29)

I had a shock of recognition when I read that. Back in those days, I was working very hard to find a way to “bet against” a number of stories that have been told in the organization studies literature. I have now somewhat resigned myself to the fact that there’s no place in that literature for people who take a dim view of them. While some people say encouraging things to me in person about what I do, there isn’t really a genre (in the area of management studies) of papers that only point out errors in other people’s work. You have to make a “contribution” too. In a sense, you can buy the stories people are telling you or not buy them but you can’t explicitly criticize them.

Back then, I thought about this in terms of the difference between faith and knowledge. Knowledge is a belief held in a critical environment, while faith is a belief held in an “evangelical” environment. The mortgage bond market was an evangelical environment in which to hold beliefs about housing prices, default rates, and credit ratings on CDOs. There was no simple way to critique the “good news”. So it took some dedicated outsiders to see what was really going on. These were people who insisted on looking at the basis of the mortgage bonds that were being pooled and traded on Wall Street in increasingly exotic ways.

One of these guys was Steve Eisman, who had a notoriously cantankerous personality. (He was fictionalized brilliantly by Steve Carell in the movie.) He recalls meeting Ken Lewis, the CEO of Bank of America. “[The CEOs on Wall Street] didn’t know their own balance sheet … I was sitting there listening to [Ken Lewis]. I had an epiphany. I said to myself, ‘Oh my God, he’s dumb!’ A lightbulb went off. The guy running one of the biggest banks in the world is dumb” (TBS, p. 174). Yes, or perhaps he was just working in an evangelical rather than critical environment. Here, “any old balance sheet” will do … as long as you think it’s bringing good news.

I think, sadly, the same thing can be said about various corners of the social sciences today. Amy Cuddy’s work is being defended by many as “good news”, and there is little room in the mainstream literature to publish critiques (and replications with null results) suggesting that power posing does not have the effects claimed for it. As in the case of the housing bubble, these things can be more easily discussed now that there actually is a crisis, but we mustn’t forget the incredible amount of hard work that was done by Uri Simonsohn, Joe Simmons, Leif Nelson, Andrew Gelman and others to reach this point. It was and still is a somewhat thankless task and, unlike Michael Burry and Steve Eisman, they don’t stand to make a billion dollars on their bet. Fortunately, the work of the Amy Cuddys and Brian Wansinks of the world isn’t likely to bring the global economy to its knees either.

It is sad, however, that so many social scientists take such a dim view of criticism. Back in the mid-nineteenth century, the Danish philosopher Søren Kierkegaard–who was, incidentally, born in the year of a financial crisis–raised the question of the sense in which sin is simply ignorance. If so, he asked, is it

the state of someone who has not known and up until now has not been capable of knowing anything about truth, or is it a resultant, a later ignorance? If it is the latter, then sin must essentially lodge somewhere else than in ignorance. It must lodge in a person’s efforts to obscure his knowing. (The Sickness Unto Death)

Dominus tells the story of Amy Cuddy as someone who was following all the rules until the rules suddenly changed. That may be partly true. But a lot of what is wrong in the social sciences today, and the reason the problems have gathered into something like a full-blown crisis, is, I fear, that people have been making a real effort to obscure their knowing, as Kierkegaard put it. Or perhaps they’re just not, as Andrew somewhat charitably suggests, making the effort to do something difficult (statistics, scholarship) well. I hope that the social sciences will stop taking such a dim view of criticism going forward and give more space in the literature to people who take a dim view of underpowered studies with overblown publicity. Kierkegaard’s works are traditionally divided into “edifying” and “existential” discourses. Perhaps all of us need to be both evangelists of science and critics of it? Perhaps we need to be evangelists for criticism?

Clarity, Truth and Writing

If you haven’t already done so, I strongly recommend you read Francis-Noël Thomas and Mark Turner’s Clear and Simple as the Truth. In many ways, my approach to academic writing is a training regimen in the “classic style”. What I call the Writing Moment, in particular, embodies a core principle of this style, namely, that thought precedes speech. As Thomas and Turner point out, this principle runs counter to what a great many people have been taught (and go on to teach) about the role of writing in inquiry. Thomas and Turner do a good job of describing this influential and somewhat pernicious doctrine:

Records are understood as a sort of external memory, and memory as internal records. Writing is thinking on paper, and thought is writing in the mind. The author’s mind is an endless paper on which he writes, making mind internal writing; and the book he writes is external mind, the external form of that writing. The author is the self thinking. The self is the author writing the mind. (59)

Like Thomas and Turner, I caution against this view of yourself (your self) as a writer. They describe the alternative in compelling terms:

Thinking is not writing; even more important, writing is not thinking. This does not mean that in classic style all of the thinking precedes all of the writing, but rather that the classic writer does not write as he is thinking something out and does not think by writing something out. Between the period of one sentence and the beginning of the next, there is space for the flash of a perfect thought, which is all the classic writer needs. (59-60)

Notice that this space is one that the reader’s mind can occupy as well as the writer’s. Indeed, that’s the whole point of the writing, to instantiate in the reader’s mind the “flash” of what Descartes (the patron saint of classic style) called a “clear and distinct idea”, a “perfect thought”. Classic writers don’t make a big deal of their imperfections; they know that their own thoughts, and those of their readers, are often less perfect than they would like, but they don’t show this in their writing. Instead, they do the best they can to present only ideas that they have thought through, as clearly and truthfully as they can. Simply put, they try to say only things they know are true in their writing, and they make sure that their text leaves this space for real thought to flash before the reader.

If you want to train this ability–which is, you’ll notice, as much a training of your mind (to think) as your hands (to write)–I recommend trying my rules for a few weeks. End the day with a clear and distinct idea of what you’ll write in the morning. Articulate a thought in a key sentence and relax for the rest of the evening. Then, in the morning, spend 27 minutes composing that thought into at least six sentences and at most 200 words that present it to an intellectual peer. Imagine your reader’s mind to be as spacious and brilliant as yours.

This will not just make you a better writer. By uncluttering your mind of the multiple “drafts” of your “internal writing” and distilling it, if only for a moment, into an actual thought–one that can live independent of your text–you are strengthening a mental faculty that too many of us neglect. You are learning to put the writing where it belongs: on the surface. This will free you to explore the depths of your own mind. And that, friends, is where the truth is ultimately found.

The Future of Objectivity (3)

Some of the most successful challenges to the objectivity of scholarly writing have come from feminist thinkers. Amy Katz Kaminsky raises the issue briefly in her contribution to The Future of Scholarly Writing and suggests it represents a tension between “academic” and “ethical” principles. Fortunately, she asserts the importance of “reconciling the two” (184) and not, as others have, of abandoning objectivity altogether in pursuit of some higher aim. As in my previous engagements with this book (see parts 1 and 2 in this series), please remember that I’m taking Kaminsky’s views on objectivity out of their larger context, both of the chapter they appear in and the book that it, in turn, is a contribution to. I will eventually read this book from front to back like we normally do.

Kaminsky begins (183-4) by questioning traditional standards of “mastery” and “authority” in scholarly work. Women’s Studies, as she points out, is a relatively young field and many of its practitioners were therefore trained in other disciplines. But she notes that authority can be problematic in any case, when, for example (I imagine), a scholar of Latin American literature who does not have Spanish as a first language proposes to teach, say, Latin American students. One is always, in this sense, “between cultures”, she suggests. Moreover, the “stark” history of the relationship between the United States and Latin America makes this cultural encounter even more difficult to navigate.

The notion of objectivity comes up when she turns her attention to the legitimacy of feminist scholarship in the academy. The problem, she says, has been one of “carving out a space for situated knowledge … in a realm where objectivity and neutrality have been key values” (184). She argues that the “neutrality” that is invoked is often simply the “generic masculinity” of the “dominant group”. This defines a “norm” and maintains the “status quo” that it is the goal of feminist scholarship to change. Presumably, “situated knowledge” is neither objective nor neutral because it involves something like Susan McClary’s “particular investments” in political and ethical projects of various kinds, which, the argument might go, are inexorably partisan and subjective.  The challenge, then, is to bring about a transformation of dominant group commitments (shades of Kuhn) without losing the legitimacy that adhering to those commitments confers. This is arguably the dilemma of all social change projects.

It is not entirely clear in this passage what the endgame is, only that Kaminsky does not wish to maintain the status quo. I can’t tell whether she wants to maintain a semblance of objectivity and neutrality only long enough to do away with it, so that the future of scholarly writing will be liberated from the “high seriousness of academic standards” and be free to pursue more “situated” concerns, or whether she wants merely to challenge the “masculinity” of the current norms and achieve a new kind of neutrality (gender neutrality?) with its own kind of seriousness even after “the foundations of those very standards” have been challenged. I do know that some of the conversations about the current replication crisis have turned on whether traditional criticism, which involves directly pointing out the errors in the work of other scholars, is actually a distinctly male form of bullying. I hope the pursuit of objective truth is not destined to be seen as a “generic masculine” form of harassment.

Like I say, I am entirely encouraged by Kaminsky’s suggestion that we must find a way to reconcile traditional norms of objectivity and neutrality with the increasingly political and engaged desires of modern academics, who, as if to adopt Karl Marx’s famous slogan, are not content to interpret the world, but hope also to change it. I’m not sure that this tension is as gendered as some people seem to think it is (I’m not ready to say how gendered Kaminsky thinks it is)–Marx, after all, was very much a man, and revolution has always, it seems to me, had a certain machismo about it. But I will admit that, at this moment, I am more concerned about preserving, and even conserving, the objectivity and neutrality of our scholarship in the face of the “post-factual” dystopia that seems to be looming, than I am about finding room for the “situated knowledge” of any number of political projects that seek the authority of “academic” work.

[Part 4]

A Rejection Plan

Most scholars have a publication plan. For a given research project, they have a list of planned “deliverables”, specifying some number of articles to be published in specific journals. Collaborative research projects, too, have such a plan, distributing responsibility for authoring and co-authoring a series of papers among members of the research group. On a still larger scale and over a longer term, departments and whole universities have goals defined in terms of a certain number of publications in a certain set of journals. Researchers internalize these goals and make a plan for themselves for the coming year, five years, and so on. All of this is perfectly reasonable (or at least rational) at a time when publication has such a decisive influence on the course of one’s career.

But a publication plan has the very important drawback that it is almost inevitably an overview of one’s coming failures. The most attractive journals are characterized by very high rejection rates. One cannot simply plan to be published in a good journal, just as one cannot just plan to make an important scientific discovery. One can hope to do these things, to be sure, but one cannot simply undertake to get oneself published. It’s not the sort of goal that a deliberate plan can help one to accomplish. Success is almost entirely out of one’s own hands.

For many years, therefore, I have argued that one should plan to submit work, not to publish it. Indeed, when I talk to department heads and university administrators I encourage them not to keep asking their faculty members what they have published, but what they have submitted. In this regard, I’ve compared myself to Moneyball‘s Billy Beane. A researcher who is regularly submitting work for publication is worth more to a department than one who occasionally publishes in a top journal.  (I’m happy to discuss exceptions here, but do note that the willingness to discuss what one knows is an important part of being a scholar. Those who rarely submit work for peer review are not really demonstrating this willingness.) A submission plan, moreover, is one you control yourself. While there are all manner of barriers to publication, no one can prevent you from submitting a paper when you have done your work as planned.

I recently had a conversation with an author that suggested an even more striking, perhaps even jarring, image. Make a rejection plan. That is, plan to have your paper rejected by three or four journals before it is published. Normalise the experience of rejection as something to be expected. Write your paper and submit it to the best journal you think it is suitable for. But make sure you have a list of three or four other journals that it is also suitable for. When you get rejected, incorporate whatever criticism the rejection involved and send the paper on to another journal on the list. Don’t give up until the list is exhausted, but perhaps make sure that there’s always some kind of published endgame, even if it is merely making the paper available in an institutional repository. As Brad Pitt says in Moneyball, “It’s a process.”

Obviously, there will be exceptions. If a reviewer convinces you that your study is fundamentally flawed, you might decide not to waste anyone else’s time with it. But most people retain some confidence in their work, even after a reviewer has found shortcomings in it or an editor has deemed it a poor “match” for the journal. Our general attitude is that errors can be fixed and there are other fish in the sea (or perhaps other seas in which to swim).  It is rare that we learn from any one experience with a journal that our research is altogether worthless. In fact, I would argue that to take this as the lesson of any one rejection is always a mistake.

Here’s another interesting feature of this plan: when you get a “revise and resubmit”, you can decide whether the suggested revision is worth the effort when compared to making just a few minor changes and sending it to the next journal on your list. It lets you gauge the amount of effort you are willing to put into the next step.

But the most important reason to think in terms of a series of predictable rejections, rather than planning for publication in a particular journal, is that it forces you to write your paper for an audience that is more general than a particular journal’s reviewers and editors. In fact, it gets you to think in terms of your actual readers rather than the “gatekeepers” who stand in the way of your reaching them. You will have to write a paper that, with minor adjustments, is relevant to one of several journals, all of which your peers (the members of your discipline) read and cite in the course of their own work. Perhaps ironically, writing the paper in this spirit–with no one particular journal in mind–will produce prose that is more authoritative, more objective, more “classic”. It is altogether likely that your reviewers will prefer a paper that wasn’t mainly written to “get past” the particular filter they happen to represent. It will have been written to accomplish something on the other side of it.

The author and I quickly agreed that this was a refreshing way to look at the publication problem. It recognizes that the most likely result of submitting to a journal (in which you’d like to be published) is rejection. It is altogether sanguine about this prospect. It increases the odds of publication by planning the next step before the rejection even arrives, an outcome that can hardly be called “unfortunate” when it is so very probable. This both tempers the disappointment of rejection and increases the joy of acceptance. Unlike a publication plan, a rejection plan is not a list of planned failures. It is a plan for how to move forward after an entirely imaginable outcome occurs.

Knowledge and Imagination

Simplifying somewhat … In his Critique of Pure Reason, Kant was trying to understand how imagination makes knowledge possible. The implication, of course, is that without imagination there can be no knowledge, i.e., it would be impossible to know anything if we could not imagine anything. Indeed (and I think I’m remaining faithful, in my simple-minded way, to Kant here) it is possible to know things only in so far as we imagine them as objects. Objects are just things construed in terms of the conditions of the possibility of knowing them, and those conditions are, let’s say, “imaginary”.

I’m not just trying to be profound here. I think many students and scholars today have forgotten the importance of imagination, not in the romantic sense often associated with artists, but in the more mundane sense of picturing facts. “We make ourselves pictures of the facts,” said Wittgenstein. And a fact is merely a collection of things in one of many possible arrangements. The possibility of these arrangements is determined by the objective properties of the things in question. That is, an object is a thing construed in terms of its possible combination with other things to form facts. To know a thing, then, requires us to literally imagine its possibilities.

Like I say, I think we need to remind ourselves of the importance of imagination. We need to keep it in mind when reflecting on what we know. In fact, there are a number of cognitive steps between our knowledge and our imagination that are worth keeping in mind as well.

First, remember that you can’t know anything if you don’t form a belief about it. This belief will, of course, have to be true in order to really count as knowledge, and in that sense we have to accept that we can’t ever be certain that we know anything. The best we can do is to have good reasons for holding beliefs and be prepared to abandon them when better reasons against our beliefs emerge. At the end of the day, whether or not we really know something depends on the facts, and not only are they not always in plain sight, they have a habit of changing. What’s true today may be false tomorrow, and the things themselves seem in no hurry to inform us of their new arrangements. Nonetheless, at any given moment, we must hold beliefs, and those beliefs, when true and justified, are our knowledge. If we believe nothing, we know nothing.

But this should not render us simply credulous. We should not easily believe the things we want to know about, for appearances can be deceiving. Before forming a belief about something, therefore, we must investigate the matter carefully, and this will require us to, first of all, understand what we are looking at, or what we are being told. Often we will form a belief on the basis of nothing other than someone else’s testimony, often in writing. We read an account of a historical event or the analysis of a dataset and we form a belief about the reality it represents. Here it is clearly not enough to believe; we might well misunderstand the conclusions reached by the author. Even if the account or analysis we’ve read is perfectly correct, and even if the author therefore holds a true belief, i.e., knows what they’re talking about, we might yet form a false belief simply by misreading the text. Without belief there can be no knowledge, then, but we should not believe things we cannot understand. As scholars, in particular, we should not believe things we have not made a serious effort to understand.

And this brings us back to where I started. In order to understand something we must be able to imagine it. We should be able to “picture” the situation or population that we are forming a belief about. What’s really happening here is that we are ensuring that we are not forming a belief about an impossible arrangement of things. To say that something is imaginable is simply to say that it is “possible” in the broadest possible sense. It is logically possible. We can imagine a horse with a horn (a unicorn) or a horse with wings (Pegasus) but not a horse that is both all white and all black (not like a zebra, but like one horse that would have to be two horses at the same time). But notice that fantastic creatures are only “possible” on the background of a good deal of ignorance. An evolutionary biologist might have a much harder time imagining a winged horse than you or me simply because they know that wings evolved from arms, so a four-legged creature with an extra set of wings doesn’t really fit anywhere on the tree of life. I don’t know if it’s easier to imagine a unicorn in nature, but I suspect it is since there are rhinoceroses and reindeer in nature already. You and I can imagine Pegasus because we don’t know that much about horses and wings. A biologist, however, must suspend disbelief.

These considerations, like I say, are pretty elementary, simple-minded even. But they do suggest a heuristic for thinking about what you know and how you know it: imagine, understand, believe. Don’t claim to know something you don’t believe; don’t believe something you don’t understand; and don’t try to understand something you can’t imagine. As a scholar, your job is to get things to make sense. As a student, you’re trying to make sense of things too. It takes imagination to do it well.

(See also: my post on knowledge, imagination and intuition, feat. Ezra Zuckerman and Catherine Turco.)