# Errors in epidemiological measurements

## Introduction

**Learning objectives:** You will learn about common errors in epidemiological measurements.

All measurements are prone to error. Understanding common errors and the means to reduce them improves the precision of estimates.

*Read the resource text below.*

## Resource text

**Random error (chance)**

Chance is random error that can make an exposure and an outcome appear to be associated. A principal assumption in epidemiology is that we can draw an inference about the experience of an entire population from the evaluation of a sample of that population. However, a problem with drawing such an inference is that the play of chance may affect the results of an epidemiological study because of random variation from sample to sample [1].

Random error may produce an estimate that differs from the true underlying value, and it may result in either an underestimate or an overestimate of that value.

**Sampling Error**

Because of chance, different samples drawn from the same population will produce different results, and this variability must be taken into account when using a sample to make inferences about the population [2]. This sample-to-sample difference is referred to as sampling error, and its variability is measured by the standard error.
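The relationship between sample-to-sample variability and the standard error can be illustrated with a short simulation. This is a sketch with made-up numbers; the population mean and standard deviation are arbitrary:

```python
import random
import statistics

random.seed(42)

# A hypothetical "population" of 10,000 weights (kg); in practice the true
# mean is unknown and the analyst only ever sees samples.
population = [random.gauss(25.0, 4.0) for _ in range(10_000)]

def sample_mean(n):
    """Mean of a simple random sample of size n drawn from the population."""
    return statistics.mean(random.sample(population, n))

# Different samples give different estimates: that spread is the sampling error.
means = [sample_mean(50) for _ in range(1_000)]
empirical_se = statistics.stdev(means)

# The standard error predicted by theory: population SD / sqrt(n).
theoretical_se = statistics.stdev(population) / 50 ** 0.5

print(f"spread of sample means:     {empirical_se:.3f}")
print(f"theoretical standard error: {theoretical_se:.3f}")
```

The spread of the simulated sample means closely matches the theoretical standard error, and increasing `n` shrinks both.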

**Sampling error may result in:**

- A Type I error: rejecting the null hypothesis when it is true
- A Type II error: accepting the null hypothesis when it is false

For example, suppose we compare the mean weights of primary school students in a government school and a private school, hypothesising that students in government schools have poorer nutrition and therefore lower weight. The null hypothesis states that there is no difference in the weights of students in the two types of school. Because of sampling error, there is a statistical probability of observing a difference when in truth there is none; this is a Type I error, and by convention its probability is fixed at 5% or below (the p value is the probability of obtaining a result at least as extreme as the one observed if the null hypothesis were true). Conversely, a Type II error occurs when we fail to detect a difference that really exists, for example because the sample size is inadequate.
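The convention that a Type I error occurs about 5% of the time can be demonstrated by simulation. The sketch below (hypothetical weights, a simple two-sample z-test) repeatedly draws both "school" samples from the same population, so the null hypothesis is true by construction and any significant difference is due to chance alone:

```python
import random
import statistics

random.seed(0)

def z_test_rejects(n=100, critical_z=1.96):
    """Compare mean weights of two samples drawn from the SAME population,
    so the null hypothesis (no difference) is true by construction."""
    a = [random.gauss(25.0, 4.0) for _ in range(n)]
    b = [random.gauss(25.0, 4.0) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > critical_z  # "significant" difference found by chance alone

trials = 2_000
false_positives = sum(z_test_rejects() for _ in range(trials))
print(f"Type I error rate: {false_positives / trials:.3f}")  # close to 0.05
```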

**Reducing sampling error**

Sampling error cannot be eliminated, but with an appropriate study design it can be reduced to an acceptable level. One of the major determinants of the degree to which chance affects the findings of a study is sample size [2]. In general, sampling error decreases as sample size increases. Use of an appropriate sample size will therefore reduce the degree to which chance variability may account for the results observed in a study.

The role of chance can be assessed by performing appropriate statistical tests and by calculating confidence intervals. Note that the value of p depends on both the magnitude of the association and the study size. Confidence intervals are more informative than p values because they provide a range of values that is likely to include the true population effect. They also indicate whether a non-significant result is compatible with a true effect that was not detected because the sample size was too small.
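As a sketch of how a confidence interval conveys this range of values, the following computes an approximate 95% interval for a mean using the normal approximation (the blood pressure readings are hypothetical; a small sample like this would strictly call for a t critical value rather than 1.96):

```python
import statistics

def mean_ci_95(values):
    """Approximate 95% confidence interval for a mean (normal approximation,
    illustrative only)."""
    m = statistics.mean(values)
    se = statistics.stdev(values) / len(values) ** 0.5
    return m - 1.96 * se, m + 1.96 * se

# Hypothetical systolic blood pressure readings (mmHg) from one sample.
readings = [118, 124, 131, 120, 127, 135, 122, 129, 126, 133,
            119, 125, 130, 128, 121, 134, 123, 132, 126, 129]
low, high = mean_ci_95(readings)
print(f"mean = {statistics.mean(readings):.1f}, 95% CI = ({low:.1f}, {high:.1f})")
```

A wider interval (smaller sample, larger variability) signals a less precise estimate; quadrupling the sample size would roughly halve the width.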

**Measurement error (reliability and validity)**

All epidemiological investigations involve the measurement of exposures, outcomes and other characteristics of interest (e.g. potential confounding factors).

Types of measures may include:

- Responses to self-administered questionnaires
- Responses to interview questions
- Laboratory results
- Physical measurements
- Information recorded in medical records
- Diagnosis codes from a database


All these measures may be subject to some degree of measurement error and may therefore introduce bias into the study. The research instruments used to measure exposure, disease status and other variables of interest should be both valid and reliable.

**Validity**

The degree to which an instrument is capable of accurately measuring what it purports to measure is referred to as its validity. Examples include how well a questionnaire measures exposure or outcome in a prospective cohort study, and the accuracy of a diagnostic test.

**Assessing validity**

Assessing validity requires that an error-free reference test, or gold standard, is available against which the measure can be compared.
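As an illustrative sketch, validity against a gold standard is often summarised by sensitivity (the proportion of true positives detected) and specificity (the proportion of true negatives detected). The screening data below are hypothetical:

```python
def validity(test_results, gold_standard):
    """Sensitivity and specificity of a measure against an error-free
    reference ("gold standard"); both sequences hold True/False values."""
    pairs = list(zip(test_results, gold_standard))
    tp = sum(t and g for t, g in pairs)        # true positives
    tn = sum(not t and not g for t, g in pairs)  # true negatives
    fp = sum(t and not g for t, g in pairs)    # false positives
    fn = sum(not t and g for t, g in pairs)    # false negatives
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical data: 10 subjects, questionnaire response vs. gold standard.
gold = [True, True, True, True, False, False, False, False, False, False]
test = [True, True, True, False, False, False, False, False, True, False]
sens, spec = validity(test, gold)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```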

**Reliability (repeatability)**

Reliability refers to the consistency of the performance of an instrument over time and among different observers.

**Assessing reliability**

1. Intra-observer reliability: repeated measurements made by the same observer on the same subject are compared.

2. Inter-observer reliability: measurements made on the same subject by two or more observers are compared.
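Inter-observer reliability for categorical measurements is commonly summarised with Cohen's kappa, which measures agreement beyond what chance alone would produce. A minimal sketch with hypothetical ratings:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two observers beyond chance.
    Ratings are parallel lists of category labels for the same subjects."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    # Agreement expected by chance from each observer's marginal frequencies.
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two hypothetical observers classifying the same 10 X-rays.
obs1 = ["normal", "normal", "abnormal", "normal", "abnormal",
        "normal", "normal", "abnormal", "normal", "normal"]
obs2 = ["normal", "normal", "abnormal", "abnormal", "abnormal",
        "normal", "normal", "normal", "normal", "normal"]
print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")
```

Here the observers agree on 8 of 10 films, but because substantial agreement is expected by chance, kappa is noticeably lower than the raw 80% agreement.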

**Misclassification (information bias)**

Misclassification refers to the classification of an individual, a value or an attribute into a category other than that to which it should be assigned [1]. The misclassification of exposure or disease status can be considered as either differential or non-differential.

Non-differential (random) misclassification occurs when the misclassification of disease status or exposure is equal across all study groups being compared. That is, the probability of exposure being misclassified is independent of disease status, and the probability of disease status being misclassified is independent of exposure status.

Non-differential misclassification increases the similarity between the exposed and non-exposed groups, and may result in an underestimate (dilution) of the true strength of an association between exposure and disease.
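This dilution toward the null can be shown numerically. The sketch below (all counts, sensitivities and specificities are hypothetical) applies the same imperfect exposure measure to cases and controls alike, then recomputes the odds ratio from the expected cell counts:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table: a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    return (a * d) / (b * c)

def misclassify_exposure(exposed, unexposed, sensitivity, specificity):
    """Expected exposed/unexposed counts after exposure is measured
    imperfectly with the given sensitivity and specificity."""
    obs_exposed = exposed * sensitivity + unexposed * (1 - specificity)
    obs_unexposed = exposed * (1 - sensitivity) + unexposed * specificity
    return obs_exposed, obs_unexposed

# Hypothetical true 2x2 table: OR = (200 * 300) / (100 * 150) = 4.0
true_or = odds_ratio(200, 100, 150, 300)

# The same imperfect measure (80% sensitive, 90% specific) applied equally
# to cases and controls: non-differential misclassification.
a, b = misclassify_exposure(200, 100, 0.8, 0.9)  # cases
c, d = misclassify_exposure(150, 300, 0.8, 0.9)  # controls
obs_or = odds_ratio(a, b, c, d)
print(f"true OR = {true_or:.2f}, observed OR = {obs_or:.2f}")
```

The observed odds ratio lies between 1 and the true value: the association is diluted but not reversed.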

Differential (non-random) misclassification occurs when the proportion of subjects misclassified differs between the study groups. That is, the probability of exposure being misclassified is dependent on disease status, or the probability of disease status being misclassified is dependent on exposure status. This type of error is considered the more serious problem, because differential misclassification can bias the observed estimate of effect in the direction of either an overestimate or an underestimate of the true association [2].
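A numerical sketch of this effect using recall bias: if cases recall exposure more completely than controls (the table and sensitivities below are hypothetical), misclassification depends on disease status and the observed odds ratio is biased away from the truth, in this instance an overestimate:

```python
def expected_counts(exposed, unexposed, sensitivity, specificity):
    """Expected exposed/unexposed counts after imperfect exposure measurement."""
    return (exposed * sensitivity + unexposed * (1 - specificity),
            exposed * (1 - sensitivity) + unexposed * specificity)

# Hypothetical true 2x2 table: OR = (200 * 300) / (100 * 150) = 4.0
# Cases recall exposure well (95% sensitivity); controls do not (60%):
# the misclassification depends on disease status, i.e. it is differential.
a, b = expected_counts(200, 100, 0.95, 0.90)  # cases
c, d = expected_counts(150, 300, 0.60, 0.90)  # controls
observed_or = (a * d) / (b * c)
print(f"true OR = 4.00, observed OR = {observed_or:.2f}")
```

With other choices of sensitivity and specificity the bias can just as easily run the other way, which is why the direction of differential misclassification cannot be predicted in general.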

Differential misclassification may be introduced in a study as a result of:

- Recall bias
- Observer/interviewer bias

**References**

1. Hennekens CH, Buring JE. Epidemiology in Medicine, Lippincott Williams & Wilkins, 1987.

2. Kirkwood B. Essentials of Medical Statistics. Blackwell Science, 2003.