Qualitative research can be difficult to report and to evaluate, primarily because no definitive or universal standards have been developed to guide researchers and reviewers. Indeed, one of the arguable benefits of qualitative research is the creativity that researchers can demonstrate because they do not need to comply with a rigid set of standards and conventions (Pratt, 2009). Nevertheless, scholars have attempted to characterize some of the hallmarks of excellent qualitative reports.
Researchers, obviously, need to justify the importance of their research and specify their motivation to conduct the study. Many researchers, according to Pratt (2009), merely highlight how past studies have failed to explore a particular issue, controversy, or association. These researchers, however, do not specify the implications of overcoming this shortfall.
Researchers, for example, might want to ascertain how employees manage abusive supervisors. To justify the importance of this work, the researcher could allude to quantitative studies showing that abusive supervisors do interact more supportively with a subset of subordinates. Hence, an understanding of the various responses of employees, and the implications of these responses, could enable employees to manage their abusive supervisors and ultimately improve their wellbeing.
Researchers often do not present key information about how they collected the data (Pratt, 2009). For example, the protocol that was followed to interview participants should be summarized and perhaps included in an appendix. Similarly, the relationship between the researchers and participants needs to be characterized. Were participants aware of the role or objectives of the researcher? Had the researcher formed a prior relationship with the participants? Did the researcher participate in the activities they examined?
Different forms of participation can be distinguished. Sometimes, the researcher is merely an observer of some event or setting. On other occasions, the researcher might participate in some of the activities, but observe other events from the perspective of a spectator. Some researchers begin with observation but then attempt to learn the rules that are needed to become actively involved. Finally, some researchers are genuine participants, from the outset.
Similarly, many methods can be applied to uncover data. Researchers should specify whether they applied various techniques designed to elicit data, including ethnography, semi-structured interviews, or more structured procedures--such as free lists, pile sorts, paired comparisons, frame substitutions, triad tests, or questionnaires.
Obviously, researchers need to specify how they analyzed the data. The complication is that suitable analyses and processes depend on the context.
Pratt (2009) presents some interesting insights on this issue. When researchers apply codes to data they did not collect--archival data, for example--they should seek the services of multiple raters. Then, some measure of inter-rater reliability might be beneficial. In contrast, when researchers undertake an extended ethnography, describing a particular context in detail, such inter-rater reliability is not applicable. Only someone who understands the context intimately can analyze the data.
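One common choice of inter-rater reliability measure is Cohen's kappa, which corrects the raw agreement between two raters for the agreement expected by chance alone. The sketch below is purely illustrative--the codes, data, and function are hypothetical and not drawn from Pratt (2009):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[code] * freq_b[code] for code in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters independently coding the same ten interview segments.
a = ["praise", "abuse", "praise", "coping", "abuse",
     "praise", "coping", "abuse", "praise", "coping"]
b = ["praise", "abuse", "coping", "coping", "abuse",
     "praise", "coping", "abuse", "praise", "praise"]
print(round(cohens_kappa(a, b), 2))  # raw agreement is 0.8; kappa is lower
```

Because two raters who guess randomly will still agree on some segments, kappa is always at or below the raw percentage agreement; values near 1 indicate agreement well beyond chance.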
The sequence of analytical procedures should also be discussed. The researcher, for example, might have coded some of the interviews, developed a narrative, and then collected more data to confirm this account. All of these phases should be explicit.
Similarly, researchers should describe the procedures they applied to identify themes or to construct and apply codes. In addition, they should delineate how they converted these themes into a model or theory.
Researchers should also justify the analytical tools they utilized. They should attempt to show that these tools uncover a broad array of characteristics associated with the topic of interest. They should not necessarily confine themselves to one technique, but consider a variety of paradigms, such as thematic analysis, grounded theory, conversation analysis, hermeneutics, semantic network analysis, schema analysis, ethnographic decision models, componential analysis, and analytic induction.
Researchers should specify the measures they introduced to ensure the research was rigorous and accurate. They could identify the individuals they contacted to offer feedback during the construction of the report, as well as the participants they contacted, after the original interviews, to check the accuracy of their descriptions.
In addition, researchers should report obstacles that could have biased their findings. They could, for example, report their personal reactions and emotions in response to particular events.
Some researchers interpret their findings without reporting enough raw data (Golden-Biddle & Locke, 2007). That is, the results are primarily abstract descriptions rather than tangible examples of the remarks, observations, or behavior of participants (Lofland & Lofland, 1995).
As a consequence, the interpretations seem bland rather than rich, because they are not underpinned by specific details and examples. Furthermore, the reader cannot readily evaluate the interpretations. That is, the sequence of insights that translates the data into the interpretations is concealed. This sequence of logic, instead, should be as explicit as possible.
Sometimes, these problems--bland or unjustified interpretations--persist when the data are presented, but isolated from the discussion (Pratt, 2009). To illustrate, the data might be reported in tables but not in the body text. The data and interpretations, then, are separated from one another, both spatially and conceptually.
Instead, when researchers describe themes or concepts, they might often include relevant quotes, in the text. Researchers should include the most telling remarks, sometimes called power quotes, as well as additional data to reinforce the regularity of some theme, sometimes called proof quotes (Pratt, 2008). Within the same section, the researcher should explore the meaning or implications of these data.
Similarly, when researchers describe individual cases, they might allude to the idiosyncratic mannerisms, remarks, choices, or behaviors of these individuals. When they describe groups, researchers might report the activities, attitudes, and beliefs that members of this group share as well as differences across individual cases. Finally, when they describe cultures, they might describe rituals and characteristics that members tend to share.
Some researchers present mounds of data with limited interpretation, commentary, assimilation, or discussion. Some journals, however, prefer the researcher to explicate theories, principles, and implications while they present the data.
Pratt (2009) presents some illustrations. He argues that many researchers merely present clusters of quotes. Each cluster relates to a specific theme, which may be delineated. These themes, however, represent stale classifications rather than dynamic mechanisms.
Instead, the researcher should relate these themes to theoretical insights. The researcher could discuss how the various themes are related to one another; one theme might represent a precursor to another theme, for example. Similarly, the researcher could discuss how these themes are related to an existing theoretical framework. Furthermore, the researcher can discuss sequences of events or mechanisms that underpin, precede, or follow a theme, and so forth.
Researchers should obviously develop a narrative that integrates the themes and insights into a unified framework or account. As some practitioners maintain (e.g., Pratt, 2009; Spradley, 1979), to fulfill this goal, researchers can conceptualize each theme as a character in a story. They should ask: Which character or theme is the main protagonist? What are the objectives of this character or theme? What are the obstacles to this objective? These questions ensure the narrative revolves around a key theme and then extends these insights in a coherent fashion.
Some of the data might not align with the narrative. These data should be reported as anomalies rather than overlooked.
Pratt (2009) indicates that researchers sometimes refer inappropriately to quantitative or deductive principles in qualitative studies (see also Golden-Biddle & Locke, 2007). A qualitative researcher, for example, might refer to controlling variance as a means to preclude confounds. This description is more applicable to quantitative, deductive designs. As a consequence, the description might evoke connotations that are not suitable to qualitative research.
Furthermore, researchers sometimes quantify their qualitative data. Often, quantification of data can be misleading or uninformative, especially when the sample size is small. Statements like "60% of the respondents alluded to the theme of praise" can be unsuitable if the sample size is fewer than 10, for example.
Specifically, such attempts are misleading, partly because the numbers are often unstable when the sample size is small. That is, had another sample been utilized, the results could be entirely different. Second, these statistics might evoke a quantitative frame in readers. To illustrate, readers might perceive everyone who expresses this theme as equivalent, and hence the variation, complexities, and richness might be overlooked.
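A quick way to see this instability is to resample a small data set repeatedly and watch the headline percentage swing. The sketch below uses invented binary responses (not data from any study cited here) and a simple bootstrap:

```python
import random

random.seed(1)

# Eight hypothetical respondents; 1 = mentioned the theme of praise.
responses = [1, 1, 1, 1, 1, 0, 0, 0]  # reported as "62.5% alluded to praise"

# Resample with replacement to see how much the headline percentage
# could have shifted had a different sample of eight been drawn.
percentages = []
for _ in range(1000):
    resample = [random.choice(responses) for _ in responses]
    percentages.append(100 * sum(resample) / len(resample))

print(min(percentages), max(percentages))
```

With only eight respondents, a single person changing their answer moves the reported figure by 12.5 percentage points, and the resampled percentages typically span more than half of the 0-100 range--the "62.5%" statistic conveys far more precision than the data support.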
In addition to describing or analyzing the results inappropriately, qualitative researchers might apply methods to collect data that are more applicable to quantitative studies. They might, for example, use random sampling--whereas interviewing or observing the most informative, relevant individuals, called purposive sampling, is usually more applicable.
Bernard, H. R. (1996). Qualitative data, quantitative analysis. Cultural Anthropology Methods Journal, 8, 9-11.
Golden-Biddle, K., & Locke, K. (2007). Composing qualitative research (2nd ed.). Thousand Oaks, CA: Sage.
Lofland, J., & Lofland, L. (1995). Analyzing social settings: A guide to qualitative observation and analysis (3rd ed.). Boston: Wadsworth.
Pratt, M. G. (2008). Fitting oval pegs into round holes: Tensions in evaluating and publishing qualitative research in top-tier North American journals. Organizational Research Methods, 11, 481-509.
Pratt, M. G. (2009). For the lack of a boilerplate: Tips on writing up (and reviewing) qualitative research. Academy of Management Journal, 52, 856-862.
Spradley, J. (1979). The ethnographic interview. New York: Holt, Rinehart & Winston.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research (2nd ed.). Thousand Oaks, CA: Sage.
Suddaby, R. (2006). What grounded theory is not. Academy of Management Journal, 49, 633-642.
Tesch, R. (1990). Qualitative research: Analysis types and software tools. New York: Falmer Press.
Tierney, W. G. (1995). (Re)Presentation and voice. Qualitative Inquiry, 1, 379-390.
Last Update: 7/5/2016