Quantitative and qualitative research are contrasting methodologies based upon different epistemological positions: qualitative research has its roots in interpretivism, which assumes that there is no “true reality” existing independently of observation, but that all reality is socially constructed (subjectively interpreted) and therefore fluid. This position contrasts with the positivist paradigm, in which quantitative research is usually grounded, which holds that there is a single, observable, and measurable reality that can be quantified and objectively interpreted.
Broadly speaking, qualitative research tends to answer the “why?” and “how?” questions surrounding public health topics, in contrast to quantitative methodologies, which tend to focus on epidemiological estimates of prevalence and the strength of associations between variables. Qualitative methods are often employed to generate hypotheses that may later be developed into interventions and tested in randomised controlled trials. Moreover, qualitative research can provide understanding about a research topic that may be inaccessible using quantitative methods: for example, understanding why patients fail to adhere to prescribed treatments; why people undertake certain healthy (and unhealthy) behaviours; what concerns people have about their health and ill health; and how people conceptualise their illnesses. Qualitative research can provide insights that specialists and researchers may not have considered beforehand. In this way, qualitative methods can enhance quantitative research methods such as questionnaires and improve service design and delivery.
Given these very different approaches to knowledge acquisition, it is not surprising that qualitative research utilises very different methods of data collection and analysis compared to quantitative research. However, it is also increasingly common for the two methodologies to be combined in public health research - generally referred to as “mixed methods” - as the diverse findings produced by each methodology are often complementary and can be used to build a robust public health evidence base.
Sampling in qualitative research
Qualitative studies typically have small sample sizes compared with quantitative research, and they are not concerned with statistical power. Nevertheless, the final sample in a qualitative study is carefully selected according to the research aim, and several techniques are available for doing so.
Probability sampling is used in quantitative research, and depends upon members of the population being chosen entirely at random; the aim is to achieve a sample that is sufficiently large that we can be confident that it is representative of the wider population to which the findings will apply. The findings of quantitative research are said to be generalisable when this sampling strategy is employed. By contrast, qualitative research deliberately uses non-probability samples for selecting the study population. In this approach participants are selected purposely because of specific characteristics, which are of relevance to the research question; this is called purposive sampling. As explained by Patton, “the power of [purposive] sampling lies in selecting information-rich cases to study in depth. Information-rich cases are those from which one can learn a great deal about issues of central importance…”(1).
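The contrast between the two strategies can be sketched in code. This is a purely illustrative sketch: the sampling frame, the selection criterion (smokers aged over 60), and the sample sizes are all invented for the example and are not drawn from any real study.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame: (participant_id, age, is_smoker) tuples.
population = [(i, random.randint(18, 80), random.random() < 0.2)
              for i in range(1000)]

# Probability sampling (quantitative): every member of the population
# has an equal chance of selection, so a sufficiently large random
# sample should be representative of the whole population.
probability_sample = random.sample(population, k=200)

# Purposive sampling (qualitative): participants are selected because
# they are "information-rich" for the research question; the criterion
# here (smokers aged over 60) is invented purely for the example.
purposive_sample = [p for p in population if p[2] and p[1] > 60][:12]
```

The purposive sample is small and deliberately non-random: it sacrifices representativeness in order to concentrate on cases from which the most can be learned.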
Purposive sampling should lead to the identification of the main perspectives around the study topic, but should also capture diverse perspectives. For example, specific participant characteristics may be considered important for the research, such as a particular age range, gender, ethnicity, or belief system. In qualitative research the inclusion criteria may be adjusted as the research proceeds, for example if the researcher feels it would be beneficial to include “outliers”, “extreme cases”, or “divergent cases” to add depth to the emerging theory. This is possible because qualitative analysis often occurs simultaneously with data collection, allowing the researcher to ensure that data are collected from the most appropriate population, which may not be clear at the outset.
As discussed above, qualitative samples are usually small, and Ritchie et al. (2) explain why:
'First if the data are properly analysed, there will come a point where very little new evidence is obtained from each additional fieldwork unit. This is because a phenomenon need only to appear once to be part of the analytical map. There is therefore a point of diminishing return where increasing the sample size no longer contributes new evidence. Second, statements about incidence or prevalence are not the concern of qualitative research. There is therefore, no requirement to ensure that the sample is of sufficient scale to provide estimates, or to determine statistically significant discriminatory variables. Third, the type of information that qualitative studies yield is rich in detail. There will therefore be many hundreds of 'bites' of information from each unit of data collection. In order to do justice to these, sample sizes need to be kept to a reasonably small scale'.
Theoretical saturation is the term used to describe the point at which no new contribution to the emerging findings is obtained from further analysis of interviews/focus groups/observations. Qualitative researchers should aim to reach theoretical saturation before concluding data collection to ensure that the pertinent concepts have been retrieved.
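The stopping rule described above can be loosely sketched as a toy procedure: data collection ends once consecutive interviews contribute no new codes to the analytical map. The function name, the coding scheme, and the two-interview threshold are all invented for illustration; in practice, judging saturation is an analytic decision, not a mechanical rule.

```python
# Toy sketch of theoretical saturation (illustrative only): stop
# collecting data once consecutive interviews yield no codes that
# are not already part of the analytical map.
def collect_until_saturated(interviews, saturation_window=2):
    codes_seen = set()       # the emerging "analytical map"
    runs_without_new = 0     # consecutive interviews adding nothing
    used = 0
    for codes in interviews:
        used += 1
        new_codes = set(codes) - codes_seen
        codes_seen |= new_codes
        runs_without_new = 0 if new_codes else runs_without_new + 1
        if runs_without_new >= saturation_window:
            break            # theoretical saturation reached
    return used, codes_seen

# Each inner list stands for the codes identified in one interview.
n, codes = collect_until_saturated([["a", "b"], ["b", "c"], ["c"], ["a"], ["b"]])
```

Here collection stops after the fourth interview, since the third and fourth add nothing new to the three codes already identified.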
Validity and reliability in qualitative research
In quantitative research, validity is defined as whether a particular measure does indeed measure what it is claimed to measure. However, within qualitative research, the issue of validity can be seen as much broader in scope: validity is generally seen as assessing '…the quality and strength of the arguments that researchers put forward to substantiate claims about the reliability of their conclusions' (3).
Nevertheless, qualitative research is occasionally criticised for its lack of generalisability, and its interpretive analysis for its lack of reproducibility. However, both criticisms reflect misunderstandings of qualitative methodology. As will be discussed later in this chapter, qualitative research is often circular, iterative, and interpretive, and therefore exact reproduction of findings by other researchers is neither expected nor desired. Rather than expecting independent researchers to reach identical conclusions from qualitative data, reliability in qualitative research is judged by how well the methods of analysis are documented and can be understood. The following techniques are often used as benchmarks for assessing the quality of qualitative research.
Triangulation
Triangulation was traditionally considered a means of validating a research conclusion through the use of two or more other techniques. However, triangulation may also be used to enhance interpretation by providing additional insights into the experience, for example by combining findings from observations with participant interviews to reach a more complete understanding of the issue or topic under investigation.
Clear documentation of the research process
Clear documentation of the methods involved in data collection and analysis, as detailed above, increases the trustworthiness of the research.
Supporting theory with quotes from the transcripts
Evidence to support interpretations, using primary data in the form of participant quotes, helps ensure that readers can trust that the investigator’s interpretations remain grounded in the data. Consent to use anonymous quotes should be obtained from all participants beforehand.
Deviant case analysis
This describes the process of testing how well the findings of the research (often known as the “theory” generated from the primary data) fit the data, thereby establishing the “limits” of the theory. It is accepted that the research findings will be relevant only to certain contexts, and by identifying and discussing “deviant” or “negative” cases – i.e. those that do not conform to the theory – it is possible to delimit the theory.
Constant comparison
This is the approach in which the theory generated from the primary analysis is “falsified” against each participant case within the dataset, one by one (4). This process is also described as a core component of “Analytic Induction” (5). If an individual case does not “fit” the theory, the theory is modified or adapted to incorporate the case.
Member checking
This is when the analytical themes and interpretive findings are formally tested with the sample of participants from whom the data arose, in an attempt to enhance validity. Whilst member checking can provide an opportunity to “test” early findings with participants, it also assumes a “true” reality that is fixed over time and that can and should be corroborated by individual participants. This stance conflicts with the interpretivist position that there is no objective “truth”; rather, the learning and theoretical insight generated from qualitative interviews should arise from the investigator’s immersion in the entire dataset. It can therefore be argued that comparing the investigator’s interpretation with the participants’ understanding of their own data is discordant with the interpretivist approach. Nevertheless, member checking is sometimes used as a means of enhancing the quality and reliability of qualitative research.
Reflexivity
All high-quality qualitative research should include a discussion of the researcher’s reflexivity: a subjective examination of the extent to which the investigator’s own relationship with the topic, the participants, or the settings may influence the research findings. Furthermore, any theory the researcher uses to approach the study should be explicitly considered and documented, as it will influence the analytical process and the ultimate findings of the research.
© I Crinson & M Leontowitsch 2006, G Morgan 2016