Principles of Making Inferences from a Sample to a Population
Populations and samples
In statistics the term "population" has a slightly different meaning from the one given to it in ordinary speech. It need not refer only to people or to animate creatures – the population of Britain, for instance, or the dog population of London. Statisticians also speak of a population of objects, or events, or procedures, or observations, including such things as the quantity of lead in urine, visits to the doctor, or surgical operations. A population is thus an aggregate of creatures, things, cases and so on.
Although a statistician should clearly define the relevant population, he or she may not be able to enumerate it exactly. For instance, in ordinary usage the population of England denotes the number of people within England's boundaries, perhaps as enumerated at a census. But a physician might embark on a study to try to answer the question "What is the average systolic blood pressure of Englishmen aged 40-59?" Who are the "Englishmen" referred to here? Not all Englishmen live in England, and the social and genetic background of those that do may vary. A surgeon may study the effects of two alternative operations for gastric ulcer. But how old are the patients? What sex are they? How severe is their disease? Where do they live? And so on. The reader needs precise information on such matters to draw valid inferences from the sample that was studied to the population being considered. Statistics such as averages and standard deviations, when taken from populations, are referred to as population parameters. They are often denoted by Greek letters; the population mean is denoted by μ (mu) and the standard deviation by σ (lower case sigma).
A population commonly contains too many individuals to study conveniently, so an investigation is often restricted to one or more samples drawn from it. A well chosen sample will contain most of the information about a particular population parameter but the relation between the sample and the population must be such as to allow true inferences to be made about a population from that sample.
Consequently, the first important attribute of a sample is that every individual in the population from which it is drawn must have a known non-zero chance of being included in it; a natural suggestion is that these chances should be equal. We would also like the choices to be made independently; in other words, the choice of one subject should not affect the chance of other subjects being chosen. To ensure this we make the choice by means of a process in which chance alone operates, such as spinning a coin or, more usually, using a table of random numbers; extensive tables have been published.1 A sample so chosen is called a random sample. The word "random" does not describe the sample as such but the way in which it is selected.
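In practice a computer's random number generator now usually stands in for a table of random numbers. The sketch below, using hypothetical patient identifiers, shows a simple random sample drawn so that every individual has the same known, non-zero chance of inclusion:

```python
import random

# Hypothetical sampling frame: 5,000 patient identifiers (illustrative only)
population = [f"patient_{i:04d}" for i in range(5000)]

random.seed(1)  # fixed seed so the illustration is reproducible

# Simple random sample of 50, drawn without replacement:
# each individual has an equal chance of being included
sample = random.sample(population, k=50)

print(len(sample))       # 50 subjects selected
print(len(set(sample)))  # 50 - no individual appears twice
```

Here `random.sample` guarantees sampling without replacement, so no subject can be chosen more than once.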
The use of random numbers is generally preferable to taking every alternate patient or every fifth specimen, or acting on some other such regular plan. The regularity of the plan can occasionally coincide by chance with some unforeseen regularity in the presentation of the material for study - for example, hospital appointments being made for patients from certain practices on certain days of the week, or specimens being prepared in batches in accordance with some schedule.
As susceptibility to disease generally varies in relation to age, sex, occupation, family history, exposure to risk, inoculation state, country lived in or visited, and many other genetic or environmental factors, it is advisable to examine samples when drawn to see whether they are, on average, comparable in these respects. The random process of selection is intended to make them so, but sometimes it can by chance lead to disparities. To guard against this possibility the sampling may be stratified. This means that a framework is laid down initially, and the patients or objects of the study in a random sample are then allotted to the compartments of the framework. For instance, the framework might have a primary division into males and females and then a secondary division of each of those categories into five age groups, the result being a framework with ten compartments. This is known as stratified random sampling.

It is then important to bear in mind that the distributions of the categories in two samples made up on such a framework may be truly comparable, but they will not reflect the distribution of these categories in the population from which the sample is drawn unless the compartments in the framework have been designed with that in mind. For instance, equal numbers might be admitted to the male and female categories, but males and females are not equally numerous in the general population, and their relative proportions vary with age.

For taking a sample from a long list, a compromise between strict theory and practicalities is known as a systematic random sample. In this case we choose subjects a fixed interval apart on the list, say every tenth subject, but we choose the starting point within the first interval at random.
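The two designs above can be sketched in code. This is an illustrative simulation, not a real study: the population records, stratum sizes and interval are all invented for the example. The stratified sampler draws a fixed number from each sex-by-age-group compartment; the systematic sampler takes every k-th subject on the list, with the starting point chosen at random within the first interval:

```python
import random

random.seed(2)

# Hypothetical population: (id, sex, age_group) records - illustrative only
population = [
    (i, random.choice(["M", "F"]),
     random.choice(["40-44", "45-49", "50-54", "55-59"]))
    for i in range(2000)
]

def stratified_sample(pop, per_stratum):
    """Draw per_stratum subjects at random from each sex/age compartment."""
    strata = {}
    for record in pop:
        strata.setdefault((record[1], record[2]), []).append(record)
    chosen = []
    for members in strata.values():
        chosen.extend(random.sample(members, per_stratum))
    return chosen

def systematic_sample(pop, k):
    """Take every k-th subject, starting at a random point in the first interval."""
    start = random.randrange(k)
    return pop[start::k]

strat = stratified_sample(population, per_stratum=5)  # 2 sexes x 4 age groups x 5 = 40
syst = systematic_sample(population, k=10)            # every tenth subject on the list
```

Note that the stratified design here deliberately takes equal numbers per compartment, so, as the text warns, the sample will not reflect the population's own sex and age distribution unless the compartment sizes are chosen to do so.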
Unbiasedness and precision
The terms unbiased and precision have acquired special meanings in statistics. When we say that a measurement is unbiased we mean that the average of a large set of such measurements will be close to the true value. When we say it is precise we mean that repeated measurements will be close to one another, although not necessarily close to the true value. Ideally a measurement is both unbiased and precise. Some authors equate unbiasedness with accuracy, but this is not universal; others use the term accuracy to mean a measurement that is both unbiased and precise. Strike2 gives a good discussion of the problem.
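The distinction can be seen in a small simulation. The true value and the two measuring "instruments" below are invented for illustration: one is biased but precise (readings cluster tightly around the wrong value), the other unbiased but imprecise (readings centre on the true value but scatter widely):

```python
import random
import statistics

random.seed(3)
TRUE_VALUE = 100.0  # hypothetical true quantity being measured

# Biased but precise: small spread, but centred 5 units too high
biased_precise = [random.gauss(TRUE_VALUE + 5.0, 0.5) for _ in range(1000)]

# Unbiased but imprecise: centred on the true value, but widely scattered
unbiased_imprecise = [random.gauss(TRUE_VALUE, 10.0) for _ in range(1000)]

print(round(statistics.mean(biased_precise), 1))      # about 105 - systematically off
print(round(statistics.stdev(biased_precise), 2))     # small spread
print(round(statistics.mean(unbiased_imprecise), 1))  # about 100 - right on average
print(round(statistics.stdev(unbiased_imprecise), 1)) # large spread
```

Neither instrument is satisfactory on its own: the first is wrong on average, the second is right on average but any single reading may be far from the truth.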
An estimate of a parameter, such as the mean, calculated from a random sample is unbiased, and it becomes more precise as the sample size increases.
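This can also be demonstrated by simulation. In the sketch below the population and its parameters are invented for illustration; we repeatedly draw random samples of two different sizes and compare how much the sample means vary from draw to draw:

```python
import random
import statistics

random.seed(4)

# Hypothetical population of 100,000 values (illustrative only)
population = [random.gauss(120.0, 15.0) for _ in range(100_000)]

def spread_of_sample_means(n, repeats=500):
    """Standard deviation of the sample mean over repeated random samples of size n."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(repeats)]
    return statistics.stdev(means)

small = spread_of_sample_means(10)
large = spread_of_sample_means(100)
print(round(small, 2), round(large, 2))  # the larger samples give the tighter spread
```

The means of the larger samples cluster much more tightly around the population mean, which is what "more precise" means here.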
1. Machin D, Campbell MJ, Tan SB, Tan SZ. Statistical Tables for the Design of Clinical Studies. 3rd ed. Oxford: Wiley-Blackwell, 2009.
2. Strike PW. Measurement and control. Statistical Methods in Laboratory Medicine. Oxford: Butterworth-Heinemann, 1991:255.
See also “Methods for the quantification of uncertainty” and "Sampling Distributions"
© MJ Campbell 2016, S Shantikumar 2016