The Presumption of Criticism

Scholars often make claims based on research done by other scholars. It is standard practice to rely on the work of others to support or frame your own work. This practice is justified by a set of presumptions that it is our obligation, as scholars, to make true. Doing so does not guarantee that everything you read in a peer-reviewed article is true, but it does justify the (measured) confidence with which we draw on such claims when conducting our research.

In a word, we presume that the claims made in the literature are subject to ongoing critical scrutiny by qualified peers. Suppose you read in a journal article from 2014 that “between 16% and 40% of expatriate managers return prematurely from their assignment” abroad. What impact should that fact have on your own research? Well, you could be happy to see that the subject you are interested in is, it seems, part of a big problem in the real world. Your ethnographic work on cross-cultural business appears much more relevant in that light. In your own introduction, then, you make this claim, duly citing the source in which you found the figure. You submit the paper for publication, your reviewers recommend publication, and the paper is published. Your claims, including the 16-40% expatriate failure rate, are now open to the aforementioned “critical scrutiny” of your peers. What happens next?

Well, the reason that you provided a source is that people want to be able to check your facts. Not all readers will do this, but some might. Suppose someone does. And suppose they find the claim embedded in a sentence like the following: “Previous research, reported on by Black and Mendenhall (1989), reveals that between 16% and 40% of expatriate managers return prematurely from their assignment.” Please understand how shocking that is. Your paper made it look like the rate was reported in 2014. We find here that this rate is a quarter of a century old! But it gets worse than that. Checking Black and Mendenhall (1989), they will see that the figure is asserted, not on the basis of empirical evidence, but on the basis of still other studies, going back as far as 1971. Looking at those studies, finally, does not solve the mystery either. It’s simply not possible to track down anyone who provides evidence of the 16-40% range. This is what’s not supposed to happen in scholarship. You should not have cited the rate, because you, too, should have tried to trace it to its source, and you would have failed. You should then have written to the authors of the 2014 paper and pointed out their mistake. The journal should have issued a correction.

It’s only when we believe that such an error-correcting mechanism exists that we can trust the literature on a particular subject. Seeing something we think we can use in a journal article from four or five years ago, we go to the library and try to see if there’s been any published criticism of it. If not, we check the underlying sources (or evaluate the methods) of the paper in question. We decide that we trust this result and that our readers would trust it too. Only then do we include it in our own paper. Simply citing the first appearance of a convenient fact is not good enough.

I use the example of expatriate failure rates advisedly. Over twenty years ago, Anne-Wil Harzing discovered that her peers had not been as critical as they should have been when citing high reported rates of expatriate failure. As she put it in a follow-up paper in 2002, the paper she wrote as a PhD student about this problem “was borne out of sheer amazement and indignation that serious academics seemed to get away with something students at all levels were warned not to do.” (Indeed, my example wasn’t pulled out of thin air either, though I have left out the names to protect the guilty. Click here for a more detailed critique.)

It is hard to overstate the courage it takes to challenge your entire discipline in this way as a PhD student. Indeed, I’m not sure it’s even advisable, though Harzing’s hard work, also on other topics, has clearly paid off for her in the long run. What she did was “presumptuous” in a good way. She assumed that standards of scholarly rigor applied in her field even if many scholars seemed to be entirely innocent of them. She acted as though good research was a norm. That’s how we should all work.

Indeed, that’s how most people presume academia works. Mistakes are made but they don’t remain for long. They are caught by critically minded peers and eventually corrected. You can play your part. I highly recommend reading Harzing’s 2002 paper, which is organized around the rules you should be following and examples of how they are broken. Learn them the easy way now. The hard way is not pleasant to think about.
