Data are “facts given or granted”. They can be distinguished from the facts that your analysis has yet to establish. But we mustn’t forget that the data were by no means simply given to us. Once we have them, we can take them for granted in our analysis, but they first had to be wrung from experience through our procedures. We explain those procedures in our methods section so that our reader may come to trust us.
Many years ago, as a young philosopher learning about the social sciences, I was speaking to a fellow doctoral student about his research. I don’t remember what his research question was, except that it had something to do with “knowledge management” and that he was approaching it as an empirical problem. What I remember very clearly is how he talked about the survey questionnaire he had designed; he called it a “measuring instrument”. He had taken something that, for me, was an abstract and subtle phenomenon (knowledge) and turned it into something that could be measured. The answers to the questions on his survey (whether yes/no or on a scale) would give him data points for analysis. These data points would become facts he could take for granted, and on their basis he would establish his conclusions.
“Some Huxley or Haldane,” said Ezra Pound, “has remarked that in inventing the telescope Galileo had to commit a definite technical victory over materials.” He had to design and construct a “measuring instrument” to make his observations. But as has been pointed out by philosophers and historians of science for some time, his data were not immediately trusted by his fellow astronomers. He had to explain how the instrument worked before they would believe that what they were seeing through it were really, say, the phases of Venus as it circled the sun, not the Earth. He had to get them to understand how his equipment worked before they would grant him his facts. He had to commit a rhetorical victory too, we might say.
In your discipline, the methods you are using are probably well-established. You are using semi-structured interviews, or participant observation, or survey questionnaires, or archival documents, or you’re pulling data from trusted databases. (Need I emphasize that it’s called a data-base? A source of facts that can be taken for granted.) Your readers understand what you are talking about when you describe what you did. They will have a good sense of what sort of material you were working with in your analysis after you tell them that you conducted, recorded, and transcribed 27 two-hour semi-structured interviews. They’ve done this sort of work themselves so they know what you were looking at when you were “coding” them, just as an astronomer knows what it’s like to look through a telescope.
Of course, your peers also know what can go wrong. They know that even very good scientists can make mistakes, both in collecting the data and in analyzing them. So you have to demonstrate an awareness of the “sources of error”. You have to assure the reader that you did everything you could not to fall into traps that are familiar to them. These days, you may even have to persuade them that you did not abuse your “degrees of freedom”. The whole point of having data is that they can be taken for granted, so you don’t want there to be any doubt about how you came into possession of them. You want the reader to be able to trust you. The reader wants to be able to trust you.
It is precisely because you must be able to take your data for “given” that you can’t take your reader’s trust for granted. It’s hard work to make credible measurements and observations of a complicated reality. You must tell your reader that you did that work and that you understand its importance. A good method is no mean feat. Your methods section should be written with a sense of accomplishment.