The hierarchy of research evidence - from well-conducted meta-analyses down to small case series - and publication bias

Epidemiology: The Hierarchy of Research Evidence and Publication Bias

Evidence based medicine has been described as ‘the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.’1 This involves evaluating the quality of the best available clinical research, by critically assessing techniques reported by researchers in their publications, and integrating this with clinical expertise. Although it has provoked controversy, the hierarchy of evidence lies at the heart of the appraisal process.

Ranking of trial designs

The hierarchy indicates the relative weight that can be attributed to a particular study design. Generally, the higher up a methodology is ranked, the more robust it is assumed to be. At one end lies the meta-analysis, synthesizing the results of a number of similar trials to produce a result of higher statistical power. At the other end of the spectrum lie small descriptive studies, such as case reports, thought to provide the weakest level of evidence.
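The power gain from pooling can be illustrated with an inverse-variance (fixed-effect) weighted average, one standard meta-analytic approach. The sketch below uses hypothetical trial figures, and `pool_fixed_effect` is an illustrative helper rather than any library function.

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of study estimates.

    Each study is weighted by 1/SE^2, so larger, more precise
    studies contribute more to the pooled result."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials reporting log odds ratios (negative = benefit)
pooled, pooled_se = pool_fixed_effect([-0.40, -0.25, -0.55],
                                      [0.20, 0.15, 0.30])
# The pooled standard error is smaller than any single trial's,
# which is the gain in statistical power that pooling provides.
```

Here the pooled standard error (about 0.11) is smaller than the smallest individual standard error (0.15), which is the sense in which combining similar trials yields higher statistical power than any one of them alone.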

Several possible methods for ranking study designs have been proposed, but one of the most widely accepted is listed below.2 Information about the individual study designs can be found elsewhere in these notes.

  1. Systematic reviews and meta-analyses
  2. Randomised controlled trials (RCT) with definitive results (confidence intervals that do not overlap the threshold for a clinically significant effect)
  3. Randomised controlled trials with non-definitive results (a point estimate that suggests a clinically significant effect but with confidence intervals overlapping the threshold for this effect)
  4. Cohort studies
  5. Case-control studies
  6. Cross sectional surveys
  7. Case reports
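The distinction between ranks 2 and 3 above can be expressed as a simple check of a confidence interval against a clinical threshold. The function and trial figures below are hypothetical illustrations, assuming that larger values mean a greater beneficial effect.

```python
def classify_rct_result(point_estimate, ci_lower, ci_upper, threshold):
    """Classify an RCT result against a threshold for a clinically
    significant effect, in the spirit of ranks 2 and 3 above.

    Assumes larger values mean a bigger beneficial effect."""
    if ci_lower > threshold:
        return "definitive"        # the whole CI clears the threshold
    if point_estimate > threshold:
        return "non-definitive"    # the estimate clears it, but the CI overlaps
    return "no clinically significant effect suggested"

# Hypothetical trial: mean improvement of 8 points against a
# 5-point threshold for clinical significance.
print(classify_rct_result(8, 3, 13, 5))   # 95% CI 3-13 crosses 5: non-definitive
print(classify_rct_result(8, 6, 10, 5))   # 95% CI 6-10 stays above 5: definitive
```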

Concerns and caveats

The hierarchy is widely accepted in the medical literature, but concerns have been raised about whether the highest-ranked evidence is always the most relevant to practice. Particular concerns are highlighted below.

  • Techniques lower down the ranking are not always superfluous. For example, the link between smoking and lung cancer was established by cohort studies carried out in the 1950s. Although randomised studies are considered more robust, it would in many cases be unethical to perform an RCT of risk factor exposure; instead, researchers must rely on cohorts exposed to the agent by chance or personal choice.
  • The hierarchy is not fixed. There is debate over the relative positions of different methodologies. For example, the RCT has traditionally been regarded as the most objective method of removing bias and producing comparable groups, but the technique is often slow and expensive, and produces results that are difficult to apply to everyday practice.3
  • The hierarchy is also not absolute. A well-conducted observational study may provide more compelling evidence about a treatment than a poorly conducted RCT.
  • The hierarchy focuses largely on quantitative methodologies. However, it is again important to choose the most appropriate study design to answer the question. For example, it is often not possible to establish why individuals choose to pursue a course of action without using a qualitative technique such as interviewing.

The hierarchy in clinical practice

Guyatt and colleagues acknowledge the complexities of applying trial results to individual patients. However, they suggest that the hierarchy provides clinicians with a clear course of action. Physicians should look for the highest level of evidence available, even if this is extremely weak.

Publication bias

Systematic reviews are considered to provide the highest level of evidence, and authors seek to include all high quality studies that address the question under review. However, many researchers have shown that studies with significant positive results are easier to locate than those with non-significant or negative results, because the latter often fail to get published.

This is known as publication bias. It has further been shown that studies with positive results are not only more likely to be published, but also more likely to be published rapidly and in English-language journals, and to be cited more frequently by other authors.4 One reason this might occur is that scientists are less inclined to submit negative results for publication, but it may also reflect an attitude amongst journal editors that positive results make better articles.

Publication bias can be minimised by thorough literature searching, but some element is almost inevitable, and attempts should be made to quantify how big a problem it is. One method is to construct a funnel plot. This is described in more detail in the statistics section of this website.
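As a rough sketch of the idea behind a funnel plot: each study is plotted as its effect estimate against its precision (1/standard error), so that large trials sit at the top and small trials spread along the base; a missing corner of the funnel suggests small studies with unfavourable results went unpublished. The helper names and data below are hypothetical, and the symmetry check is only an illustration (formal approaches such as Egger's regression test exist).

```python
import statistics

def funnel_points(estimates, std_errors):
    """(effect, precision) pairs for a funnel plot, with precision = 1/SE."""
    return [(est, 1.0 / se) for est, se in zip(estimates, std_errors)]

def small_study_skew(estimates, std_errors):
    """Crude symmetry check: among the less precise half of the studies,
    count how many fall above versus below the overall median effect.
    A lopsided count is the visual asymmetry a funnel plot reveals."""
    centre = statistics.median(estimates)
    cutoff = statistics.median(std_errors)
    small = [e for e, se in zip(estimates, std_errors) if se >= cutoff]
    above = sum(1 for e in small if e > centre)
    below = sum(1 for e in small if e < centre)
    return above, below

# Hypothetical log odds ratios (negative = benefit): the three least
# precise studies all report large effects, as if small studies with
# weak results were never published.
ests = [-0.42, -0.40, -0.38, -0.65, -0.70, -0.80]
ses = [0.10, 0.12, 0.11, 0.30, 0.35, 0.40]
print(small_study_skew(ests, ses))   # all small studies fall on one side
```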


  1. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996; 312:71-2.
  2. Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ. Users' guides to the medical literature. IX. A method for grading health care recommendations. JAMA 1995; 274:1800-4.
  3. - Accessed 22/12/08
  4. - Accessed 01/02/09

Further reading

  • Greenhalgh T. How to Read a Paper: The Basics of Evidence Based Medicine. London: BMJ, 2001
  • Guyatt G, Rennie D et al. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. McGraw-Hill Medical, 2008.

© Helen Barratt 2009