
The Future of Objectivity (1)

Does objectivity have a future? Do objects have a place in a post-factual world? I certainly hope so. But the more I read about the state of academic writing today, the more uncertain I grow. The emerging ethos of academic writing instruction seems poised to jettison objectivity from our scholarship altogether. Angelika Bammer and Ruth-Ellen Boetcher Joeres’s anthology The Future of Scholarly Writing (Palgrave, 2015) is an excellent case in point, and I’m going to devote a few posts to it in the weeks to come. I will structure my reading as an annotation of each of the indexed appearances of “objectivity” in the book’s contributions. I will start at its last appearance and work my way forward, taking issue with the authors’ various treatments of this famously “academic” notion as I go.

To begin, then, on page 206 Susan McClary explains the valorization of objectivity by way of the “dominance of the left hemisphere [of the brain]” in academic contexts. As “a product of the analytic predisposition [of the left hemisphere, the binarism of ‘subjective’ and ‘objective’] has the effect of acknowledging as valid only those observations that can be verified regardless of the researcher’s particular investments,” she tells us; “anything else is relegated to the scrapheap of the ‘merely’ subjective.”

I think this is putting the point somewhat too strongly. I don’t think the distinction between objective and subjective is a hard and fast binary. Most people will say that objectivity and subjectivity are relative notions and that any particular observation will inevitably have both subjective and objective components or aspects. I personally think of objectivity as a “socially constructed” affair, always accomplished, and indeed never more than approximated, through inter-subjective triangulation and negotiation. At the other end of the spectrum, it is hard to imagine that a position of extreme or “pure” subjectivity would yield any particular observation; it is instead often identified with the radical passivity of the transcendental subject, or offered as a gesture at mystical forms of experience. In between, I like to think, and each from our own subjective points of view, we try to accomplish our objectivity as best we can.

This does indeed mean trying to present our views “regardless of [our] particular investments” in them. That is, we want to open our beliefs to criticism from people who may not be invested exactly as we are in the outcome. The financially inflected language here is telling, since we would certainly treat with some skepticism medical researchers who had “particular investments” in the drug company whose medicine they are testing. But skepticism is not, I want to emphasize, tantamount to “scrapping” the relevant observations. Objectivity does not actually mean that we only acknowledge observations that have been completely divested (if you will) of a personal stake. It normally just means that we should declare this interest and accept that our contribution will be taken with a correlative amount of salt.

“In most academic disciplines,” McClary continues, “the premium put on objectivity has strangled not only prose style–the exclusive emphasis on documentation and a deliberately drab vocabulary–but also methods: the questions we may ask and the ways we go about trying to engage with those questions” (206-7). This, again, is some strong language and I must say I don’t recognize this picture of academia at all. (See also “Academic Discourse, Folk Psychology and Intelligent Cat Pictures”.)

I’ve never read a paper that confines itself exclusively to documentation, nor is there any shortage of papers and books that manage to present their ideas using lively and evocative language. Even texts that appear deliberately restrained in their prose are not always “strangled” by this effort. Indeed, we sometimes appreciate the admirable parsimony of a writer’s vocabulary as a breath of fresh air in a discourse that is too often overwrought in its terminology. Now, it is true that objectivity demands a certain (perhaps narrow) range of methods and that it can only be achieved in the pursuit of answers to a particular class of questions. But here, too, it’s hard to see researchers as “strangled”. Rather, it seems to me that their adherence to these approaches makes particular observations possible that otherwise wouldn’t be.

For the past 50 years, in any case, there has been a cultivation not only of much interdisciplinarity but of a methodological pluralism, which should have afforded almost any researcher an opportunity to ask and answer almost any question in pretty much any way they choose. When I look at the wide range of scholarship published since, say, May of 1968, I just can’t recognize the “dominance” of an objective ethos, nor of any particular hemisphere of the brain. Rather, I see a struggle for dominance by multiple scholarly discourses in which objectivity is an increasingly embattled notion.

I would much prefer that no one asserted dominance and that we instead let objectivity be one among several values to pursue. I enjoy an intensely subjective paragraph as much as a clear, objective one. Today, I’m afraid, if anything risks ending up on the scrapheap it is allegedly “drab” presentations of what were once called “facts”. Indeed, I have a feeling that Bammer and Boetcher Joeres are as concerned as I am about the turn that our public discourse has taken in the years since the publication of their book. I think we need to think about reasserting, and perhaps reclaiming, the virtue of writing that is anchored in an objective sense of reality. Perhaps it is time to give our analytic predisposition a little space (on the left?) in which to work?

[Part 2]

Being Open

“The point is to experience being there, in the sense that I, the human being, am the there, the openness of being for me, insofar as I undertake to preserve this openness, and in preserving it, to unfold it.” (Martin Heidegger)

Let me wax philosophical for a moment. To be really ‘present’, to be really ‘there’, is to be open to what is going on around you. Human existence, perhaps, is uniquely defined by this openness, this capacity to be present in the now, to “be there”. This is something Heidegger argued very forcefully for, and he included a social element; existence, he said, is always bound to the existence of others, to “them”. So being open is also a matter of “being there” for others.

Indeed, Heidegger distinguished the “logical conception of science”, according to which it is “an interconnection of true propositions”, from an “existential conception”, in which it is a mode of being with others and engaging with things of practical value. It’s not just a matter of being open to the facts, we might say, but a way of being open to what other people think of those facts and what we can do with them. I think this is enormously important to keep in mind, and much of the success of post-WWII “science studies” has come from pushing this awareness on people whose natural inclination is to stick to “the facts” alone.

Ironically, however, our awareness of the social conditions of “knowledge production” has at times made us less open to the idea that another person’s view of the world might be more valid than our own. Many of us are inclined to rely on the views of our closest peers, like-minded people who appreciate the value of what we are doing in our research. We are, though we are loath to admit it, a bit too eager to believe what is said in our own research community, and we close ourselves off from input from people who might come at our problem from a completely different perspective. Though they come at it differently, they may well arrive at the same place we are. Here.

In a recent essay in the Chronicle, Alice Dreger has made a strong case for cultivating greater openness to the ideas of people who disagree with us, even to ideas that outright offend us. In a key paragraph, she shares a formative experience from her grad-school days.

Let us require our students to read difficult work and learn to respond to uncomfortable chalk by chalking back. Teach them histories of censorship and blacklisting on the right and the left. Require them to reflect upon their (and our) uncertainty. Teach reliable methodologies, not infallible ideologies. Let us always be implicitly asking what one graduate professor explicitly asked me when I was being an intellectually recalcitrant pig: If you haven’t changed your mind lately, how do you know it’s working?

A reliable methodology is one that opens you meaningfully to the world of facts. An infallible ideology, by contrast, closes us off even from the input of other knowledgeable people about those facts. It’s all well and good to be certain that racism or sexism is wrong. The problem arises when you are so certain that your interlocutor is an incorrigible racist or sexist that you close yourself off from their criticism of your views. They may be wrong. But so may you. Indeed, you may both be wrong, and this encounter with another mind might ultimately be the only thing that makes you see your error. That would be good. For you.

I hope pigs won’t take offence at Dreger’s slur. As I said at the beginning, the open nature of our existence may be what makes us uniquely human. And our recalcitrance in the business of changing our minds is, indeed, often a little piggish, i.e., less than human. I wonder if it is too much of a philosophical inside joke to say that pigs live in a pen while human beings, as Heidegger suggested, live in the clearing. Out in the open.

Update: See also “The Place of Form” and “Academic Purpose”

Sustainable Discourse

Scholars are adept at forming their beliefs on the basis of what other people know. Think of a historian’s views about the rise of “scientific management” in the early twentieth century, for example. She will no doubt have done some research of her own, perhaps in the archives of the Bethlehem Steel Corporation, but she will have learned a great deal more about the subject by reading books and papers from her fellow historians. Indeed, she won’t have learned just about scientific management from these peers; she will have learned about the entire history of the world, going all the way back to her undergraduate studies. All this knowledge forms a frame around her specialization and a foundation beneath it.

When you think about it, this is a marvelous cultural achievement. We don’t just believe that Frederick Winslow Taylor had a profound influence on the organization and management of the modern corporation; we know this. Some of us know this in great detail and others only know the broad outlines. But these are not just “opinions” we hold. It is knowledge we have acquired. And we’ve been able to acquire this knowledge much more easily than the hardworking historians who have uncovered all the documentation and brought it together in their work. All we had to do was read what they had written. Then we knew.

But is that really all there is to it? Is this not almost a magical theory of literary meaning? All I have to do is pass my eyes over the pages of a book, it seems, and suddenly my mind is in a state of knowing! Well, no. As every student knows, it’s not that easy. You read the words and try to understand them. You struggle and you learn.

A very important part of this struggle comes in the confrontation of our own reading with that of others. After we have read a book or essay we discuss it with our peers–be they fellow professors or fellow students. Sometimes, we discuss it with “authorities” or “superiors”, i.e., experts outside our own field or teachers with a better understanding than us students. In those conversations, we find out how well we understand the book we were reading, how effective our struggle with those pages was.

What I want to emphasize is that we did, in fact, form beliefs while reading. We thought we knew something about scientific management after reading a chapter about it. But then, when we discuss with other people who have also read that chapter, we come to see the matter from a different point of view. Sometimes we recognize that we had misunderstood what the book was saying. Sometimes we realize that, however well researched and argued the book may be, the author seems simply to have gotten the facts wrong. Our reading, that is, may turn out not to “hold up” under the pressure of another reader’s take on it.

This is something to be mindful of as you go about your scholarly work. It’s one thing to make up your mind about something; it is another to speak your mind to others. You want to become good at making a claim, i.e., saying (claiming) that something is true. You then want to observe what happens to that claim in a conversation with qualified peers–people who make similar claims about similar things for similar reasons. Does the claim survive the criticism of your peers? Is the claim sustainable in discourse?

The Presumption of Criticism

Scholars often make claims based on research done by other scholars. It is standard practice to rely on the work of others to support or frame your own work. This practice is justified by a set of presumptions that it is our obligation, as scholars, to make true. Doing so does not guarantee that everything you read in a peer-reviewed article is true, but it does justify the (measured) confidence with which we draw on such claims when conducting our research.

In a word, we presume that the claims made in the literature are subject to ongoing critical scrutiny by qualified peers. Suppose you read in a journal article from 2014 that “between 16% and 40% of expatriate managers return prematurely from their assignment” abroad. What impact should that fact have on your own research? Well, you could be happy to see that the subject you are interested in is, it seems, part of a big problem in the real world. Your ethnographic work on cross-cultural business appears much more relevant in that light. In your own introduction, then, you make this claim, duly citing the source that you found the figure in. You submit the paper for publication, your reviewers recommend publication, and the paper is published. Your claims, including the 16-40% expatriate failure rate, are now opened to the aforementioned “critical scrutiny” of your peers. What happens next?

Well, the reason that you provided a source is that people want to be able to check your facts. Not all readers will do this, but some might. Suppose someone does. And suppose they find the claim embedded in a sentence like the following: “Previous research, reported on by Black and Mendenhall (1989), reveals that between 16% and 40% of expatriate managers return prematurely from their assignment.” Please understand how shocking that is. Your paper made it look like the rate was reported in 2014. We find here that this rate is almost thirty years old! But it gets worse than that. Checking Black and Mendenhall (1989), they will see that the figure is asserted not on the basis of empirical evidence but on the basis of still other studies, going back as far as 1971. Looking at those studies, finally, does not solve the mystery either. It is simply not possible to track down anyone who provides evidence of the 16-40% range. This is what’s not supposed to happen in scholarship. You should not have cited the rate at all, because you, too, should have tried to trace it to its source, and you would have failed. You should then have written to the authors of the 2014 paper and pointed out their mistake. The journal should have issued a correction.

It’s only when we believe that such an error-correcting mechanism exists that we can trust the literature on a particular subject. Seeing something we think we can use in a journal article from four or five years ago, we go to the library and try to see if there’s been any published criticism of it. If not, we check the underlying sources (or evaluate the methods) of the paper in question. We decide that we trust this result and that our readers would trust it too. Then we include it in our own paper. Simply citing the first appearance of a convenient fact is not good enough.

I use the example of expatriate failure rates advisedly. Over twenty years ago, Anne-Wil Harzing discovered that her peers had not been as critical as they should have been when citing high reported rates of expatriate failure. As she put it in a follow-up paper in 2002, the paper she wrote as a PhD student about this problem “was borne out of sheer amazement and indignation that serious academics seemed to get away with something students at all levels were warned not to do.” (Indeed, my example wasn’t pulled out of thin air either, though I have left out the names to protect the guilty. Click here for a more detailed critique.)

We can’t make too much of the courage it takes to challenge your entire discipline in this way as a PhD student. Indeed, I’m not sure it’s even advisable, though Harzing’s hard work, on this and other topics, has clearly paid off for her in the long run. What she did was “presumptuous” in a good way. She assumed that standards of scholarly rigor applied in her field even if many scholars seemed to be entirely innocent of them. She acted as though good research was a norm. That’s how we should all work.

Indeed, that’s how most people presume academia works. Mistakes are made but they don’t remain for long. They are caught by critically minded peers and eventually corrected. You can play your part. I highly recommend reading Harzing’s 2002 paper, which is organized around the rules you should be following and examples of how they are broken. Learn them the easy way now. The hard way is not pleasant to think about.

Knowing with Others

What impact should what someone else knows have on your life? That sounds like a pretty big question, but let’s think about this in epistemological terms, as a problem of the theory of knowledge. I want to show that this ultimately tells us something important about specifically academic knowledge and, even more specifically, about academic writing.

First, what does it mean to say that someone knows something? Philosophers often begin with the idea that to know is to have a “justified, true belief” about something. We might want to dispute that, but if we play along for a moment we can consider our original problem as one of deciding what consequences someone else’s true beliefs should have for us. The fact that these beliefs are true tells us that something specific is the case. So, at first pass, we should live our lives in accordance with other people’s knowledge on pain of being “unrealistic”. True beliefs, after all, are accurate representations of reality.
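For those who like to see the standard analysis in symbols, it can be put schematically like this (the notation is just my shorthand, not anything philosophers officially insist on):

$$K_S(p) \;\equiv\; p \,\wedge\, B_S(p) \,\wedge\, J_S(p)$$

That is, a subject $S$ knows that $p$ just in case $p$ is true, $S$ believes that $p$, and $S$ is justified in holding that belief. (Gettier famously argued that these three conditions may not be sufficient, which is one way of “disputing” the analysis, but they will do for our purposes here.)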

Notice that this does not imply that we should believe what other people merely believe. It’s only if they have knowledge that we need to get in line with them.

But a great deal of knowledge doesn’t have immediate practical implications. Astronomers, for example, know that Andromeda will collide with our own Milky Way in a few billion years. Not only is that a long time from now, I’m told it’s not even going to be particularly inconvenient for life in either galaxy when it happens because it will happen very slowly. But astronomers do, in fact, know that this is going to happen. And the consequence for me, I’m happy to admit, is that I believe it. And I think that’s really the first and most important impact that other people’s knowledge should have on our lives. If someone else knows something, then we should believe it.

I mean this “should” in an essentially logical sense. If I say, “Astronomers know that Andromeda will collide with the Milky Way but I don’t believe it,” I am contradicting myself. I can say that astronomers “think” or “claim” or “speculate” or “argue” or, of course, “believe” this, and then declare my own skepticism about it, without contradicting myself. But I can’t claim both that they know it and that I don’t believe it. Why, after all, would I withhold belief in something I grant to be true?
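A bit of notation may make the logic plain (again, this is just my shorthand). What drives the contradiction is that “know” is factive: to attribute knowledge of $p$ to someone is to commit yourself to the truth of $p$,

$$K_a(p) \rightarrow p,$$

so when I assert that astronomers know that $p$, I have thereby asserted $p$; if I add that I don’t believe $p$, I am saying something of the form “$p$, but I don’t believe that $p$”, which is Moore’s famous absurdity. Verbs like “think”, “claim”, and “speculate” are not factive, which is why they leave room for my skepticism.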

But can I say something like, “Astronomers know that Andromeda will collide with the Milky Way but I just don’t understand it”? There’s often something mind-boggling about astronomy and physics and mathematics. We want to grant that astronomers and physicists and mathematicians “know what they’re talking about”, but this doesn’t always make things clearer for us, and I’m inclined to take a hard line on this. In such cases we should not say that they know it. We should say that they “say” it and that until we ourselves understand what they’re saying we’re not going to believe it. After all, if I believe something I don’t understand I might not, in fact, be adopting the belief of the people who know it. Maybe we mean two different things by “collide”, and in their sense the belief is true while in mine it is nonsense.

I’ll develop these ideas in subsequent posts this week. But I want to declare my intentions clearly at the outset. I believe that academia should be a place where we are able to believe things that other people know, and where this way of forming beliefs, i.e., on the basis of other people’s knowledge, allows us to claim this knowledge of others as our own. It’s not a place where we believe everything we’re told. It’s a place where people present what they know in a way that opens it for criticism from other knowledgeable people. And the specific ways in which we do this, especially the way we use our writing to foster criticism, means that when we make a claim, and cite our source, we can, at that moment, say we “know” what we’re talking about.

It’s not perfect knowledge. Sometimes we merely believe something that will turn out later to be false. But, within the critical environment of the university, it should be okay to call it knowledge. We are not fools to believe in this way. Or, rather, we expect our peers not to make fools of us. This trust is an important part of what it means to be a scholar, an “academic”. It is sometimes violated, of course; but it is, in fact, the norm.