
Evaluation of health promotion, public health or public policy interventions

Principles and Practice of Health Promotion: Health Promotion Evaluation and Effectiveness

This section covers:

  • Evaluation of health promotion, public health or public policy interventions
  • Risk behaviour in health and the effect of interventions in influencing health-related behaviour in professionals, patients and the public
  • Evaluation of preventative actions, including the evidence base for early interventions on children and families, and support for social and economic development
  • Understanding of pre-determinants of health including the effect of social cohesion on health outcomes

5.1 Introduction and definitions

This is a potentially large and complicated section, and readers are advised also to refer to the materials supporting other sections of the syllabus relating to research methodology in public health, and to review the references and further reading in more depth. A number of key issues and approaches are described to give a flavour of the epistemological debate over health promotion evaluation in recent decades, and to point to practical evaluation solutions and to emerging newer theories and methods for health promotion research.

Essentially, what underlies this debate are the key characteristics of health promotion itself: that it is by definition a process; that it acts on individuals, communities, organisations and society; that to be effective it uses multiple methods and is responsive to the needs of the target audience; that it is concerned with change, which will usually occur over long periods of time; that this renders its outcomes problematic to define, to measure and to attribute to interventions; in short, that it is a complex field operating on complex social phenomena with health improvement as its long-term goal. So although its primary location is within the health field, attempts to use evaluation methods derived for and applied within the field of healthcare have often fallen short of the ideal. This section will attempt to describe some of the history of these debates and to illustrate the current position, which is more inclusive of different research perspectives and more promising for future understanding of effective health promotion. A brief review of recent evidence of effective interventions, and of sources of systematic reviews for further reference, is also provided.

First some definitions largely adapted from Hawe et al (1990):

  • Community interventions - a distinction is drawn between 'community-wide interventions' and 'interventions-in-community' (Green & Kreuter, 1991; Potvin & Richard, 2001). The former generally attempt to make changes that affect individuals through population-wide interventions; where multiple approaches are used these are complex community interventions. Interventions-in-community tend to operate on sub-groups within specific settings and may employ simpler methods. (Note that in developing the public health evidence base NICE distinguishes between 'public health programmes', which are larger, complex, multiple-method approaches, and 'public health interventions', which are simple and possibly single-method.)

  • Effectiveness - the ability of an intervention to achieve its intended effect in normal conditions ie the 'real world'.

  • Efficacy - the ability of an intervention to achieve its intended effects under optimal conditions of delivery and compliance by its recipients ie an 'ideal world'.

  • Efficiency - the effectiveness of an intervention in relation to costs (an illustrative calculation follows this list of definitions).

  • Evaluable - able to be fairly or appropriately judged or evaluated; a programme is evaluable when its activities, goals and objectives are articulated in such a way as to provide meaningful and measurable information.

  • Evaluation - the process by which we judge the worth or value of something (Suchman, 1967)

  • Evidence - (a definition strangely absent from other texts) the Cassell Concise English Dictionary states 'anything that makes clear or obvious; that which makes truth evident, or renders evident to the mind that it is truth'; the OED states 'information or signs indicating whether a belief or proposition is true or valid', from the Latin evidentia, from evidens, 'obvious to the mind or eye'.

  • Evidence-based healthcare - the conscientious use of current best evidence in making decisions about the care of individual patients or the delivery of health services. Current best evidence is up-to-date information from relevant, valid research about the effects of different forms of health care, the potential for harm from exposure to particular agents, the accuracy of diagnostic tests, and the predictive power of prognostic factors (NIPH, 1996).

  • Evidence based clinical practice - an approach to decision making in which the clinician uses the best evidence available, in consultation with the patient, to decide upon the option which suits that patient best (Muir Gray, 1997).

  • Evidence-Based Medicine - is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research (Sackett et al, 1996)

  • Evidence-based health promotion - involves the systematic integration of research evidence into the planning and implementation of health promotion activities (Wiggers & Sanson-Fisher, 1998)

  • Formative evaluation - evaluation for the purpose of improving the programme as it is being implemented

  • Impact evaluation - concerned with the immediate short-term effects and reach of the programme, generally measures achievement of programme objectives

  • Outcome evaluation - measures long-term effects, whether a programme has achieved its goals

  • Process evaluation - measures to what extent a programme has been implemented as planned, by measuring reach, participant satisfaction, implementation of activities, performance of intervention components and quality assurance.

  • Summative evaluation - the same as outcome evaluation

  • Transfer evaluation - assesses the replicability of a project's mechanisms/processes and outcomes, can they be transferred to another setting or population and achieve the same effects? (Wimbush & Watson, 2000) 
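To illustrate the distinction drawn above between effectiveness and efficiency, the following is a minimal sketch using entirely hypothetical figures for a smoking-cessation programme; the numbers and the programme itself are invented for illustration only.

```python
# Illustrative only: hypothetical figures for a smoking-cessation programme,
# showing 'efficiency' as effectiveness in relation to cost (incremental cost
# per additional quitter, compared with usual care).

programme_cost = 50_000.0      # total cost of the intervention (hypothetical)
comparator_cost = 10_000.0     # cost of usual care (hypothetical)

programme_quitters = 120       # quitters at follow-up in the intervention group
comparator_quitters = 70       # quitters at follow-up in the comparison group

incremental_cost = programme_cost - comparator_cost            # £40,000
incremental_effect = programme_quitters - comparator_quitters  # 50 extra quitters

# Incremental cost-effectiveness ratio: extra cost per additional quitter.
icer = incremental_cost / incremental_effect
print(f"Incremental cost per additional quitter: £{icer:,.0f}")  # £800
```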

Apart from ensuring clarity in understanding, these definitions are presented as a reminder that the whole area of evaluation and evidence-based practice is founded on issues to do with judgement, values, and the perception of truth; that it is defined by notions of currency of best available information, and the integration of research information with other knowledge and experience; that it is dependent on knowing what it is you are setting out to do, and being able to measure that you have got there! The following brief history of the struggles in the development of evidence-based health promotion should be seen in this light.

References

Green LW & Kreuter MW (1991) Health promotion planning: an educational and environmental approach. Mountain View, CA: Mayfield

 Hawe P, Degeling D & Hall J (1990) Evaluating health promotion: A health worker's guide. NSW, McLennan & Petty Pty

Muir Gray JA. (1997) Evidence-based healthcare: how to make health policy and management decisions. London: Churchill Livingstone.

NIPH, (1996). First Annual Nordic Workshop on how to critically appraise and use evidence in decisions about healthcare, National Institute of Public Health, Oslo, Norway.

Potvin  L & Richard L (2001) Evaluating community health programmes. In Rootman I et al (Eds) Evaluation in health promotion: principles and perspectives.  Copenhagen, WHO Europe.

Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. (1996) Evidence based medicine: what it is and what it isn't. BMJ;312:71-2.

Suchman EA (1967) Evaluative research.  New York, Russell Sage Foundation

Wiggers J & Sanson-Fisher R (1998) Evidence-based health promotion. In: Scott R & Weston R (Eds.) Evaluating health promotion. Cheltenham: Stanley Thornes

Wimbush E & Watson J (2000) An evaluation framework for health promotion: theory, quality and effectiveness. Evaluation 6(3): 301-21

5.2 Health promotion evaluation and the great RCT debate

While research on health promotion interventions has had a long history, the systematic search for evidence of effective health promotion probably began with the publication of two significant sets of reviews of effectiveness on a wide range of topics. The first was a European-wide initiative conducted by the International Union for Health Promotion and Education (eg Veen et al, 1994); the second was the series of effectiveness reviews commissioned by the Health Education Authority from the NHS Centre for Reviews and Dissemination (eg Ebrahim & Davey-Smith, 1996). Ironically, while both collated and presented a valuable body of information about effective health promotion in an accessible way, both ended up being criticised: the former for not being rigorous enough about the quality of the research data used, and the latter for conforming too closely to the evidence-based medicine 'hierarchy' of evidence by generally drawing only on RCT evidence. A number of authors and international groups called for the inclusion of other forms of research in healthcare studies (Black, 1996) and specifically in public health and health promotion (Tones, 1996; GGD, 1997; Speller et al, 1997).

Speller et al explained that health promotion in the UK was 'at risk from the application of inappropriate methods of assessing evidence, overemphasis on outcomes of individual behaviour change, and pressure on resources'; this last point due to the limited good research base, the consequent inability to claim effectiveness and the misguided views of policy-makers at the time that 'no evidence of effectiveness equals evidence of no effect'.  Further they stated that 'The effect of health promotion should be assessed across the breadth of its activities, not just by changes in individual health behaviour. Systematic review methods need to be revised to include a broader range of studies and research methods, including qualitative research. (And) review criteria should consider the quality of the health promotion intervention as well as the research design.' (1997, p361)

In order to fill the research gap in health promotion, and to answer the increasingly pertinent question of whether or not health promotion is a good investment, the WHO commissioned a working group in 1995 to provide guidance on appropriate methods for health promotion evaluation and to increase its quality (WHO, 1998). Their recommendations are summarised in Box 5.1 below. In England the Health Education Authority reviewed its experience of commissioning reviews of effectiveness and created a broader national platform for developing the evidence base for public health, subsequently developed by the Health Development Agency (Meyrick, 1997). Here the issues of the primacy of the RCT, the need for equivalent rigour when including qualitative data in reviews, and the need to consider the quality of the intervention as well as of the research design, were reiterated. Further details of the debate on the nature of evidence, evaluation and effectiveness in health promotion can be found in Davies & MacDonald (1998) and Scott & Weston (1998).

Box 5.1 Health promotion evaluation - recommendations to policy-makers

Principles for the evaluation of health promotion initiatives

  • Participation - at each stage of evaluation those with an interest should be involved. These can include policy-makers, community members and organisations, health and other professionals, etc.

  • Multiple methods - evaluations should draw on a variety of disciplines and employ a broad range of information gathering procedures

  • Capacity building - evaluations should enhance the capacity of individuals, communities, organisations etc

  • Appropriateness - evaluations should be designed to accommodate the complex nature of health promotion interventions and their long term impact

Conclusions and recommendations for the evaluation of health promotion initiatives

  1. Those who have a direct interest in a health promotion initiative should have the opportunity to participate in all stages of its planning and evaluation

  2. Adequate resources should be devoted to the evaluation of health promotion initiatives (the working group recommended a minimum of 10% of an initiative's total financial resources).

  3. Health promotion initiatives should be evaluated in terms of their processes as well as their outcomes.

  4. The use of randomised controlled trials to evaluate health promotion initiatives is, in most cases, inappropriate, misleading and unnecessarily expensive.

  5. Expertise in the evaluation of health promotion initiatives needs to be developed and sustained

World Health Organization (1998) Health Promotion Evaluation -recommendations to policy-makers. Report of the WHO European working group on Health Promotion Evaluation. Copenhagen: WHO.
 

References

Black N (1996) Why we need more observational studies to evaluate the effectiveness of health care. BMJ 312: 1215-8

Davies JK & MacDonald G (Eds) (1998) Quality, evidence and effectiveness in health promotion: striving for certainties. London: Routledge

Ebrahim S & Davey-Smith G (1996) Health promotion in older people for the prevention of CHD and stroke. London: Health Education Authority

GGD (1997) Report of the expert meeting: beyond RCT - towards evidence-based public health. 13th Feb 1997. Rotterdam: GGD

Meyrick J (Ed) (1997) Reviews of effectiveness. Their contribution to evidence based practice and purchasing in health promotion. London: Health Education Authority

Scott R & Weston R (Eds) (1998) Evaluating health promotion. Cheltenham: Stanley Thorne

Speller V, Learmonth A & Harrison D (1997) The search for evidence of effective health promotion. BMJ 315: 361-3

Tones K (1996) Beyond the RCT: a case for 'judicial review' Health Education Review 12(2) 1-4

Veen CA et al (1994) An instrument for analysing effectiveness studies on health promotion and health education. Utrecht: Dutch Centre for Health Promotion and Health Education, and IUHPE/EURO

World Health Organization (1998) Health Promotion Evaluation: Recommendations to policy-makers. Report of the WHO European Working Group on Health Promotion Evaluation. Copenhagen: WHO

5.3 Guidance on evaluating health promotion

Whether for a small-scale practitioner-led evaluation or a major research project, there are key steps in the thinking and planning that need to be attended to. Morgan (2006) states that 'there is no single, correct way to evaluate - instead the method that is most appropriate will depend on the aims and objectives of the intervention, the types of information or data available, and the time and resources available.' He also notes that much time has been wasted in debating the relative merits of the different positivist and phenomenological perspectives on evaluation, and that mixed methods that answer the questions posed by the users of the results are the right approach to take. This may mean looking at processes, intermediate outcomes or final outcomes. The questions to ask in the planning stage of an evaluation are outlined in Box 5.2 below.
 

Box 5.2  Quality steps in the process of planning an evaluation

  1. What are the aims and objectives of my project? (Are these evaluable? What about the wider impact?)

  2. What are my research questions? (These will determine the indicators and methods used)

  3. How is the project expected to work? (The theory base)

  4. What do I want my evaluation to do? (Process/formative evaluation or outcome/summative evaluation?)

  5. Who are the main groups and individuals involved in this project?

  6. Who is my evaluation for? (The 'audience' may be the project funders or community members, and this may affect the choice of evaluation measures.)

Adapted from Morgan A (2006) Evaluation of health promotion. In: Davies M & Macdowall W (Eds.) Health Promotion Theory: Understanding Public Health Series. Open University Press/McGrawHill
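As an illustration of how the planning questions in Box 5.2 might be recorded at the outset of an evaluation, the following is a minimal sketch; the field names and the workplace walking project used as an example are hypothetical and are not drawn from Morgan's text.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    """Captures answers to the Box 5.2 planning questions (field names illustrative)."""
    aims_and_objectives: list[str]
    research_questions: list[str]      # these determine the indicators and methods used
    theory_base: str                   # how the project is expected to work
    evaluation_purpose: str            # process/formative or outcome/summative
    stakeholders: list[str]            # main groups and individuals involved
    audience: str                      # who the evaluation is for
    indicators: dict[str, str] = field(default_factory=dict)

# Hypothetical example: a workplace physical activity project.
plan = EvaluationPlan(
    aims_and_objectives=["Increase lunchtime walking among staff within 12 months"],
    research_questions=["What proportion of staff join the walking groups?",
                        "Does self-reported activity change after 6 months?"],
    theory_base="Social support model: peer-led groups sustain behaviour change",
    evaluation_purpose="Process and intermediate outcome evaluation",
    stakeholders=["staff", "occupational health team", "local public health team"],
    audience="Project funders and workplace management",
    indicators={"reach": "percentage of staff attending at least one session",
                "behaviour": "self-reported weekly minutes of walking"},
)
print(plan.evaluation_purpose)
```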

In terms of outcomes, Nutbeam (1998) outlined a distinctive framework for defining success in health promotion that distinguishes between the more traditional health outcomes and health promotion outcomes. The model (Fig 5.1) shows the relationships between health promotion actions (education, facilitation and advocacy); health promotion outcomes (health literacy, social influence and action, and healthy public policy and organisational practice); intermediate health outcomes; and health and social outcomes. Health promotion outcomes would equate to some intermediate outcomes in other models, but Nutbeam emphasises that they are the endpoints of health promotion actions and as such are legitimate goals in themselves. Intermediate health outcomes such as healthy lifestyles, effective health services and healthy environments represent the determinants of health and social outcomes.

Figure 5.1 An outcome model for health promotion (Nutbeam, 1998)

 

Wimbush & Watson (2000) propose an evaluation approach which recognizes the contributions of theory and quality as well as effectiveness in programme development. The framework in Box 5.3 builds on the work of the Health Education Board for Scotland, and identifies the different stages and forms of evaluation which contribute to the development of effective interventions.

Box 5.3 An evaluation framework for health promotion

Stage: Planning
Evaluation focus: Learning from other evaluations of effectiveness; option appraisal
Evaluation questions: What are likely to be the best ways of addressing a need or problem with a group or setting?

Stage: Design and pilot
Evaluation focus: Feasibility of proposed approach; 'theory of change'
Evaluation questions: Is the proposed programme feasible and acceptable? What outcomes can be achieved in what time period? How and why will it work? How can it be adapted to maximise effectiveness?

Stage: Implementation - early start-up
Evaluation focus: Delivery and quality assurance; monitoring and review systems; baselines
Evaluation questions: Are we on track? Are there problems to address? What action is needed to improve performance?

Stage: Implementation - establishment
Evaluation focus: Implementation process; reach; programme impacts
Evaluation questions: How is the project working? Is it being implemented as intended? Is the target population being reached? Are programme objectives/impacts being achieved?

Stage: Implementation - fully operational
Evaluation focus: Intermediate outcomes/effectiveness
Evaluation questions: To what extent were intermediate outcomes achieved? How were they achieved? In which groups/settings are the greatest benefits shown?

Stage: Dissemination
Evaluation focus: Replicability of outcomes; generalisability of outcomes
Evaluation questions: Can the programme be transferred to another setting or population and achieve the same outcomes?
Adapted from Wimbush E & Watson J (2000) An evaluation framework for health promotion: theory, quality and effectiveness. Evaluation 6(3) 301-21
 

References

  • Morgan A (2006) Evaluation of health promotion. In : Davies M & Macdowall W (Eds) Health Promotion Theory: Understanding Public Health series. OUP/McGrawHill

  • Nutbeam D (1998) Evaluating health promotion - progress, problems and solutions. Health Promotion International 13(1) 27-44

  • Wimbush E & Watson J (2000) An evaluation framework for health promotion: theory, quality and effectiveness. Evaluation 6(3) 301-21

5.4 'Theories of Change' and 'Realistic Evaluation'

A theory of change refers to the causal processes through which change comes about as a result of a programme's strategies and actions (Weiss, 1972). It relates to how practitioners believe individual, inter-group and social/systemic change happens and how, specifically, their actions will produce positive results. Hence theory-based approaches to evaluation move on from the clarification of a programme's aims, objectives and outcomes to articulating the assumptions underlying a programme's design, in order to understand more about how and why the programme is supposed to operate to achieve its outcomes. The Theory of Change approach, developed for the evaluation of comprehensive community initiatives in the US (Connell et al, 1995), suggests that all programmes have explicit or implicit 'theories of change' about how and why they will work (Weiss, 1995). Once these theories have been made explicit they can influence the design of the evaluation, to ensure that it assesses whether the theory holds when the programme is implemented. This approach reconciles process and outcome measurement, ensures that practitioners and evaluators draw on established theory and their own observations about how change will happen, and has been used, for example, in the Health Action Zone evaluation in England (Judge et al, 1999).
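By way of illustration only, a programme's implicit theory of change can be written out as an explicit chain of steps, each pairing an assumption with the evidence an evaluation would need to collect to test it. The sketch below uses a hypothetical peer-mentoring programme; the steps, assumptions and measures are invented and are not taken from Weiss or Connell et al.

```python
# Hypothetical example: making a programme's theory of change explicit as a chain
# of links, each pairing an assumption with the evidence an evaluation would collect.
theory_of_change = [
    {"step": "Recruit and train peer mentors",
     "assumption": "Credible peers can be recruited and retained",
     "evidence_to_collect": "Number trained, retention at 6 months"},
    {"step": "Mentors run weekly support sessions",
     "assumption": "Sessions are acceptable and well attended",
     "evidence_to_collect": "Attendance records, participant feedback"},
    {"step": "Participants gain skills and confidence",
     "assumption": "Skills and confidence mediate behaviour change",
     "evidence_to_collect": "Pre/post self-efficacy measures"},
    {"step": "Participants adopt healthier behaviour",
     "assumption": "Change is sustained beyond the programme",
     "evidence_to_collect": "Behavioural outcomes at 12 months"},
]

# Each link becomes an evaluation question in its own right.
for link in theory_of_change:
    print(f"{link['step']}: test '{link['assumption']}' via {link['evidence_to_collect']}")
```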

Realistic Evaluation (Pawson & Tilley, 1997) is an influential approach to evaluating social programmes and, as Tones & Green (2004) state, it 'is especially relevant to the complicated interventions and collaborations characteristic of health promotion'. They consider the approach to be 'post-positive' in that it recognizes realities that can be investigated robustly and used to shape policy. On the other hand, it holds that the strict positivist approach of experimental design, particularly RCTs, is insufficient to capture the context of programmes, their constant changeability, and the potential intrusion of 'new contexts and new causal powers'. Realistic evaluation considers that:

Outcomes = Mechanisms + Context

The understanding of how mechanisms are fired in certain contexts to produce certain outcomes generates theories about the effectiveness of the programme design 'through a detailed analysis of the programme in order to identify what it is about the measure which might produce change, which individuals and sub-groups and locations might benefit most readily and what social and cultural resources are necessary to sustain the changes.' (Wimbush & Watson, 2000).
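The following is a minimal sketch, using invented observations, of how an evaluator might tabulate context-mechanism-outcome (CMO) configurations across sites to see in which contexts a given mechanism appears to 'fire'; the sites, mechanisms and figures are hypothetical.

```python
from collections import defaultdict

# Hypothetical context-mechanism-outcome (CMO) observations from four sites of
# the same programme; realistic evaluation asks which mechanisms 'fire' in
# which contexts to produce which outcomes.
observations = [
    {"context": "high-deprivation estate", "mechanism": "trust in local peer workers", "change": 0.30},
    {"context": "high-deprivation estate", "mechanism": "financial incentive",         "change": 0.05},
    {"context": "affluent suburb",         "mechanism": "trust in local peer workers", "change": 0.10},
    {"context": "affluent suburb",         "mechanism": "financial incentive",         "change": 0.20},
]

# Group the observed change by (context, mechanism) configuration.
by_configuration = defaultdict(list)
for obs in observations:
    by_configuration[(obs["context"], obs["mechanism"])].append(obs["change"])

# A simple CMO table: which configurations are associated with the largest change?
for (context, mechanism), changes in sorted(by_configuration.items()):
    mean_change = sum(changes) / len(changes)
    print(f"{mechanism} | {context}: mean change {mean_change:+.2f}")
```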

Both these approaches to evaluating health promotion will utilise both qualitative and quantitative data as appropriate, and critically 'open up the black box' of the intervention in order to understand what is working and why, and to improve the implementation of the intervention in order to increase effect.

References

  • Judge K, Bauld L, Adams C et al (1999) Health Action Zones: Learning to make a difference. University of Kent at Canterbury, PSSRU.

  • Connell JP, Kubisch AC, Schorr LB & Weiss CH (Eds) (1995) New approaches to evaluating community initiatives: Concepts, methods and contexts. Washington DC: The Aspen Institute.

  • Pawson R & Tilley N (1997) Realistic Evaluation. London: Sage Publications

  • Tones K & Green J (2004) Health Promotion: Planning and Strategies. London: Sage Publications

  • Weiss CH (1972) Evaluation research. Englewood Cliffs: Prentice Hall

  • Weiss CH (1995) Nothing as practical as a good theory: exploring theory-based evaluation for comprehensive community initiatives. In: Connell JP, Kubisch AC, Schorr LB & Weiss CH (Eds) (1995) New approaches to evaluating community initiatives: Concepts, methods and contexts. Washington DC: The Aspen Institute

  • Wimbush E & Watson J (2000) An evaluation framework for health promotion: theory, quality and effectiveness. Evaluation 6(3): 301-321

5.5 Recent reflections on RCTs and systematic reviews

The above discussion on the epistemology and methodology of health promotion research and evaluation stems from the mismatch between the 'standard' dominant approach and health promotion practice. However, recent work has highlighted:

  1. the importance of understanding the process of intervention through assessing process outcomes,

  2. the related importance of assessing the quality of the intervention in order to determine its effectiveness; and

  3. the recognition of the complexity of the systems under study.

It has increasingly been recognised that these issues also apply to public health evaluation in a wider sense, including the evaluation of healthcare interventions. Previous proponents of the RCT, and of strict systematic review processes based on RCTs and meta-analysis, have more recently revised their stance on these issues. As a brief illustration, three recent papers are discussed as examples of this widening debate.

Oakley et al (2006) argue the case, somewhat belatedly, for the inclusion of process evaluation in RCTs of complex interventions, such as peer-led sex education in school-based health promotion. They conclude that:

  • 'A detailed process evaluation should be integral to the design of any randomised controlled trials

  • Process evaluations should specify prospectively a set of process research questions and identify the processes to be studied, the methods to be used, and procedures for integrating process and outcome data

  • Expanding models of evaluation to embed process evaluations more securely in the design of randomised controlled trials is important to improve the science of testing approaches to health improvement

  • It is also crucial for persuading those who are sceptical about using randomised controlled trials to evaluate complex interventions not to discard them in favour of non-randomised or non-experimental studies.'

Herbert & Bo (2005) argue that the quality of interventions can affect the results of clinical trials and that reviews of complex interventions need to take this into account. In their definition complex health interventions include for example surgery and physiotherapy. They summarise their position as:

  • 'Systematic reviews of complex interventions should consider the quality of interventions in individual trials

  • Quality can be assessed when other research provides clear indications of how interventions should be administered

  • Assessment of effects of intervention quality can be built into the analyses of the intervention

  • Such analyses should be specified in the review protocol and should focus on interactions between the quality and the effects of the intervention'
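One way such an analysis could be approached, purely as an illustration, is a simple meta-regression of trial effect sizes on a rated intervention-quality score. The sketch below uses invented trial data and inverse-variance weighting; it is not a method prescribed by Herbert & Bo.

```python
import numpy as np

# Invented example: five trials of the same complex intervention, each with an
# effect size, its standard error, and a rated intervention-quality score (0-10).
effect_sizes = np.array([0.10, 0.15, 0.30, 0.35, 0.45])
standard_errors = np.array([0.08, 0.10, 0.07, 0.09, 0.08])
quality_scores = np.array([2.0, 4.0, 6.0, 8.0, 9.0])

# Weighted least-squares meta-regression: the slope estimates how much the
# pooled effect changes per unit increase in rated intervention quality.
slope, intercept = np.polyfit(quality_scores, effect_sizes, deg=1, w=1.0 / standard_errors)

print(f"Estimated change in effect size per quality point: {slope:.3f}")
print(f"Predicted effect at quality score 5: {intercept + slope * 5:.3f}")
```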

Hawe et al (2004) propose a radical way of standardising complex community interventions for RCTs which, in comparison to simple interventions, pays less attention to the replicability of individual components of an intervention by form (eg a patient information kit, in-service training sessions) and more to their function (eg all sites devise information tailored to local circumstances, resources are provided to support all sites to run training appropriate to local circumstances, etc). While recognising the complexity of the systems under investigation, Hawe et al state that complex systems rhetoric should not become an excuse to mean 'anything goes'. They conclude that:

  • 'Standardisation has been taken to mean that all the components of an intervention are the same in different sites

  • This definition treats a potentially complex intervention as a simple one

  • In complex interventions, the function and process of the intervention should be standardised and not the components themselves

  • This allows the form to be tailored to local conditions and could improve effectiveness

  • Intervention integrity would be defined as evidence of fit with the theory or principles of the hypothesised change process'
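A minimal sketch of the distinction between standardising form and standardising function is given below; the functions, sites and locally chosen forms are hypothetical and are intended only to illustrate Hawe et al's argument.

```python
# Hypothetical illustration of Hawe et al's point: standardise the function each
# site must deliver, not the form it takes locally.
standardised_functions = [
    "Provide information tailored to local circumstances",
    "Train frontline staff in a way appropriate to the local workforce",
    "Create a mechanism for community feedback on the programme",
]

# Each site records the locally chosen form for every standardised function.
site_forms = {
    "Site A (rural)": {standardised_functions[0]: "Leaflets via village post offices",
                       standardised_functions[1]: "Half-day workshop for practice nurses",
                       standardised_functions[2]: "Quarterly parish meeting slot"},
    "Site B (urban)": {standardised_functions[0]: "Multilingual text-message campaign",
                       standardised_functions[1]: "E-learning module for community pharmacists",
                       standardised_functions[2]: "Online feedback form promoted in clinics"},
}

# Intervention integrity check: every site covers every function, whatever the form.
for site, forms in site_forms.items():
    missing = [f for f in standardised_functions if f not in forms]
    status = "all functions delivered" if not missing else "missing: " + "; ".join(missing)
    print(f"{site}: {status}")
```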

References

  • Hawe P, Shiell A, Riley T (2004) Complex interventions: how 'out of control' can a randomised trial be? British Medical Journal 328: 1561-3

  • Herbert RD & Bo K (2005) Analysis of quality of interventions in systematic reviews. British Medical Journal 331: 507-509

  • Oakley A, Strange V, Bonell C et al (2006) Process evaluation in randomised controlled trials of complex interventions. British Medical Journal, 332: 413-416

5.6 Examples of evidence based health promotion actions to improve health

There is a large amount of systematic review literature available to inform and guide evidence based practice in health promotion. However, as we have seen, the direct transference of the methods used for assessing research evidence in clinical medicine can be problematic when applied to health promotion. Kelly (2006) summarises issues to consider when building the evidence base for health promotion:

  • Evidence of the effectiveness of interventions to reduce health inequalities is poor, and less than 0.5% of published papers by British researchers in public health actually address intervention research (Millward et al, 2003); there is also a paucity of cost-effectiveness data (Wanless, 2004)

  • Of the evidence that exists, there is more about 'downstream' interventions (eg individual behaviour change) than 'upstream' interventions (eg policy or environmental change)

  • Descriptions of social variations in populations are underdeveloped and often rely on occupational data, which, while useful for certain purposes (eg disaggregating population mortality data), generally does not take account of cultural and social factors impacting on health (Graham & Kelly, 2004)

  • The RCT dominates the effectiveness literature, which has led to other forms of evidence being considered inferior. As Kelly states, 'This is not helpful, because while RCTs are good on internal validity, they tend to be much less informative about issues of process and implementation which are vital to know about if the intervention is to be transferred…in health promotion where the issues involved are often highly complex and the settings difficult to control … key information will not be available from trial data.' (2006, p190).

  • The problems of synthesising evidence from different research traditions (Dixon-Woods et al, 2004), and the difficulty of grading the evidence, which applies to both the quality of systematic reviews and of primary research studies.

Notwithstanding these difficulties there has been considerable investment in developing robust review methodology for both secondary research, and for tertiary research, ie reviews of reviews for public health. The following tables provide some brief examples of effective health promotion actions in the areas of:

  • Tobacco use - reducing initiation, increasing cessation, and reducing exposure to environmental tobacco smoke (Table 5.1)

  • Food-support programmes for low-income and socially disadvantaged childbearing women in developed countries (Table 5.2)

  • Drug use prevention amongst young people (Table 5.3)

  • Increasing physical activity - Informational approaches, behavioural and social approaches and environmental and policy approaches (Table 5.3)

  • Preventing skin cancer by reducing exposure to UV radiation (Table 5.4)

  • Motor vehicle occupant injury - Increasing child safety seat use, increasing safety belt use, reducing alcohol impaired driving (Table 5.4)

  • Interventions to prevent accidental injury to young people aged 15-24 (Table 5.5)

  • Housing and Public Health - rehousing and neighbourhood regeneration, refurbishment and renovation, accidental injury prevention, and prevention of allergic respiratory disease (Table 5.6)

  • Promotion of breastfeeding initiation and duration (Table 5.7)

  • Empowerment (Table 5.8)

This is by no means a comprehensive list. Comments in the tables about recommended interventions list only those where there is good evidence, and do not include the extensive caveats in the various reports about the research base. Similarly, they do not include interventions where there is insufficient evidence to make a judgement about effectiveness. Readers are encouraged to read the full reports to understand more about the underlying issues. The topics were selected to provide a range of recent examples across health issues, and include upstream and downstream interventions. They also vary between systematic reviews and reviews of reviews. The final narrative review on empowerment (Wallerstein, 2006) provides an interesting example of an inclusive and rigorous approach to reviewing the literature around a key health promotion concept and principle of practice. Three main sources of reviews have been chosen for their relevance to public health and health promotion evidence:

  • Task Force on Community Preventive Services (2005) The Guide to Community Preventive Services - what works to promote health? New York, OUP www.thecommunityguide.org

  • National Institute for Health and Clinical Excellence. Note that a number of these are based on Health Development Agency (HDA) Evidence Briefings. NICE is currently updating these and preparing NICE guidance on public health topics. To access all NICE public health documents go to www.nice.org.uk/guidance/topic/publichealth. To access HDA publications from the NICE website go to www.nice.org.uk (home > about NICE > who we are > about the HDA > HDA publications).

  • WHO Regional Office for Europe's Health Evidence Network (HEN). HEN gives rapid access to reliable health information and advice to policy-makers in evidence-based reports and summaries and access to other sources. www.euro.who.int/HEN

Risk behaviour in health and the effect of interventions in influencing health-related behaviour in professionals, patients and the public

Table 5.1

Topic: Tobacco use

Recommended interventions:

Reducing tobacco use initiation
  • Increasing the unit price for tobacco products
  • Mass media education campaigns when combined with other interventions
  • Restricting minors' access to tobacco products
  • Community mobilisation when combined with additional interventions
  • Active enforcement of retail laws, and retailer education with reinforcement

Increasing tobacco use cessation
  • Increasing the unit price for tobacco products
  • Mass media education campaigns when combined with other interventions
  • Healthcare provider reminder systems, alone and when combined with provider/client education
  • Reducing client out-of-pocket costs for effective cessation therapies
  • Multicomponent interventions that include telephone support

Reducing exposure to environmental tobacco smoke
  • Smoking bans and restrictions

Source: Task Force on Community Preventive Services (2005) The Guide to Community Preventive Services - what works to promote health? New York, OUP. www.thecommunityguide.org

Table 5.2

Topic: Food-support programmes for low-income and socially disadvantaged childbearing women in developed countries

Recommended interventions:

Food-support programmes aim to improve key maternal and perinatal outcomes. The lack of any significant impact on low birth weight (LBW), pre-term birth and other perinatal outcomes, along with the favourable impact on maternal weight gain and nutrient intakes, provides a basis for re-thinking the aims and objectives of current food-support programmes. Setting out-of-reach goals for food-support programmes, such as reduction in rates of LBW and pre-term birth, is probably not useful until there is strong evidence of what works to improve those outcomes.

With respect to the primary outcome of interest, LBW, the results of this review do not provide evidence that food-support programmes have any impact. However, there are favourable impacts on other outcomes. There is indicative evidence of an increase in mean birth weight of babies born to heavy smokers, and of the beneficial impact of food support on maternal weight gain and dietary intake in a woman's first pregnancy.

Childbearing women in the UK have diets deficient in key nutrients, and those on low incomes face difficulties in feeding themselves and their children. In this respect, teenage mothers are perhaps the most vulnerable group. Programmes providing women with food supplements are likely to help them and their children to eat healthier diets. This in itself is a desirable outcome for any programme.

Source: NICE (2006) Food-support programmes for low-income and socially disadvantaged childbearing women in developed countries. Systematic review summary. London, National Institute for Health and Clinical Excellence, July 2006; D'Souza L, Renfrew M, McCormick F (2006) Food-support programmes for low-income and socially disadvantaged childbearing women in developed countries. London, National Institute for Health and Clinical Excellence. www.nice.org.uk

Table 5.3

Topic: Drug use prevention amongst young people

Recommended interventions:

Although some caution is expressed over the strength of evidence and its US sources, this updated review of reviews suggests the following approaches are most effective:

  • Programme delivery should incorporate interactive methods and include peer-led interventions.

  • Design and content of programmes should be based on the social influence model, include booster sessions, be delivered to those aged between 11 and 14 years, and involve family members.

Source: McGrath Y, Sumnall H, McVeigh J, Bellis M (2006) Drug use prevention among young people: a review of reviews. Evidence briefing update. London, National Institute for Health and Clinical Excellence, January 2006. www.nice.org.uk

Topic: Increasing physical activity

Recommended interventions:

Informational approaches
  • Community-wide campaigns
  • Point-of-decision prompts

Behavioural and social approaches
  • School-based physical education
  • Individually-adapted health behaviour change programmes
  • Social support interventions in community settings

Environmental and policy approaches
  • Creation of new or enhanced access to places for physical activity, combined with informational outreach activities
  • Point-of-decision prompts

Source: Task Force on Community Preventive Services (2005) The Guide to Community Preventive Services - what works to promote health? New York, OUP. www.thecommunityguide.org

Table 5.4

Topic: Preventing skin cancer by reducing exposure to UV radiation

Recommended interventions:
  • Educational and policy interventions in primary schools
  • Educational and policy interventions in recreational and tourism settings

Source: Task Force on Community Preventive Services (2005) The Guide to Community Preventive Services - what works to promote health? New York, OUP. www.thecommunityguide.org

Topic: Motor vehicle occupant injury

Recommended interventions:

Increasing child safety seat use
  • Child safety seat laws
  • Distribution and education programmes
  • Community-wide information and enhanced enforcement campaigns
  • Incentive and education programmes

Increasing safety belt use
  • Safety belt laws, and enhanced enforcement

Reducing alcohol-impaired driving
  • 0.08% blood alcohol concentration laws
  • Minimum legal drinking age laws
  • Sobriety checkpoints
  • Lower blood alcohol limits for younger or inexperienced drivers
  • Intervention training programmes for servers (under certain conditions)
  • Mass media campaigns (under certain conditions)

Source: Task Force on Community Preventive Services (2005) The Guide to Community Preventive Services - what works to promote health? New York, OUP. www.thecommunityguide.org

Table 5.5

Topic: Interventions to prevent accidental injury to young people aged 15-24

Recommended interventions:

Legislation and enforcement have been effective in preventing accidental injury to young people in this age range.

Interventions that use environmental measures and protective equipment have also been shown to be effective.

Stand-alone educational interventions have not been shown to be effective, but when combined with other approaches such as legislation and engineering, may be successful. However, with multi-factorial intervention programmes it is difficult to attribute the degree of success to any single element.

Road interventions such as raising the legal drinking age from 18 to 21, random breath testing, seat belt legislation, compulsory protective helmets for motor-cyclists and bicyclists, lowering the drink-driving limit (blood-alcohol concentration) and graduated driver licensing schemes have been shown to be successful.

Within the sports and leisure setting, legislative measures, such as the mandatory use of mouthguards and face protectors, and modifications to the rules of games, have been shown to be effective in reducing injuries.

Source: Errington G, Athey K, Towner E et al. (2006) Interventions to prevent accidental injury to young people aged 15-24. London, NICE. www.nice.org.uk; Interventions to prevent accidental injury to young people aged 15-24. Evidence briefing summary. NICE, July 2006

Table 5.6

Topic: Housing and public health

Recommended interventions:

Rehousing and neighbourhood regeneration
  • Medical priority rehousing - anxiety and depression scores are reduced in people who are rehoused on the basis of medical need.
  • Rehousing plus relocation from slum or socially isolated areas - rehousing people from slum areas can improve self-reported physical and mental health outcomes in the longer term (18 months), and can adversely affect self-reported health outcomes in the short term (9 months).
  • Housing subsidy programmes for low-income families - eg US rental voucher programmes can improve household safety by providing families with the choice to move to neighbourhoods with reduced exposure to violence.

Refurbishment and renovation
  • Improvement in housing energy efficiency - housing interventions involving improvements to energy efficiency measures, such as installation of new windows, can positively affect health outcomes.

Accidental injury prevention
  • Home visits to people in lower socio-economic areas plus provision of advice on home hazards, combined with health education and media campaigns, are effective in encouraging parents to make physical changes to the home environment to ensure their homes are safer.
  • Provision of free or discounted home safety equipment and/or educational campaigns may lead to behavioural and environmental change.
  • Home hazard modification interventions that seek to remove and repair safety hazards are effective in reducing falls in older people. This effect was strongest for people with a history of falling prior to intervention and men aged ≥75 years.
  • Community-based provision of free smoke alarms (with or without installation) may reduce fire-related injuries.

Prevention of allergic respiratory disease
  • Use of physical measures (intensive home cleaning, vinyl mattress covers, daily wet cleaning of floors, boiling of top bedding covers and removal of soft furnishings) and/or chemical measures (air filters loaded with Enviracaire and acaricide spray and cleaning products) may lead to a reduction in allergen load for those with house dust mite-provoked respiratory disease, when combined with maintenance drug treatments.

Source: NICE (2005) Housing and public health: a review of reviews of interventions for improving health. London, National Institute for Health and Clinical Excellence. www.nice.org.uk; Housing and public health: a review of reviews of interventions for improving health. Evidence briefing summary. NICE, December 2005

Table 5.7

Topic: Promotion of breastfeeding initiation and duration

Recommended interventions - evidence-based actions include:
  • Implementation of the Baby Friendly Initiative in all maternity and community services
  • Routine delivery of education and support programmes by professionals and peers
  • Changes to policy and practice to include effective positioning, unrestricted feeding and supportive care
  • Changes to policy and practice to abandon restrictions on feeding and contact, and the provision of supplemental feeds
  • Complementary telephone peer support
  • Education and support from one professional
  • One-to-one support for the first year
  • Media programmes

Source: Dyson L et al (2006) Promotion of breastfeeding initiation and duration. Evidence into practice briefing. London, NICE, July 2006. www.nice.org.uk; Renfrew MJ, Dyson L, Wallace L et al. (2005) The effectiveness of health interventions to promote the duration of breastfeeding: systematic review. London: National Institute for Health and Clinical Excellence; Renfrew MJ, Dyson L, McFadden A et al. (2005) Evidence into practice briefing on the initiation and duration of breastfeeding: technical report. London: National Institute for Health and Clinical Excellence.
Table 5.8

Topic: Empowerment

Findings:

Multi-level research designs show that empowering initiatives can lead to health outcomes and that empowerment is a viable public health strategy. Positive health outcomes include: improved mental health and school performance in young people; reduction of HIV infection rates; improved child and family health; increase in self-management of disease; adoption of healthier behaviours; increased uptake of health services; and increase of care-givers' coping skills and efficacy.

Effective empowerment strategies include:
  • Increasing citizens' skills, control over resources and access to information
  • Using small-group efforts which enhance critical consciousness to build supportive environments and a deeper sense of community

The most effective empowerment strategies are those that build on and reinforce authentic participation, ensuring autonomy in decision-making, sense of community and local bonding, and psychological empowerment of the community members themselves.

Source: Wallerstein N (2006) What is the evidence on effectiveness of empowerment to improve health? Copenhagen, WHO Regional Office for Europe's Health Evidence Network (HEN), February 2006. www.euro.who.int

References

  • Dixon-Woods M et al (2004) Integrative approaches to quantitative and qualitative evidence. London, Health Development Agency

  • D'Souza L, Renfrew M, McCormick F (2006) Food-support programmes for low-income and socially disadvantaged childbearing women in developed countries.  London, National Institute for Health and Clinical Excellence

  • Dyson L et al (2006) Promotion of breastfeeding initiation and duration. Evidence into practice briefing. July 2006, London, National Institute for Health and Clinical Excellence

  • Errington G, Athey K, Towner E et al. (2006) Interventions to prevent accidental injury to young people aged 15-24 London, National Institute for Health and Clinical Excellence

  • Graham H & Kelly MP (2004) Health inequalities: Concepts, frameworks and policy. London, Health Development Agency

  • Kelly MP (2006) Evidence-based health promotion. In: Davies M & Macdowall W (Eds) Health Promotion Theory: Understanding Public Health Series. Open University Press/McGrawHill

  • McGrath Y, Sumnall H, McVeigh J, Bellis M (2006) Drug use prevention among young people: a review of reviews Evidence briefing update. London, National Institute for Health and Clinical Excellence

  • Millward LM, Kelly MP & Nutbeam D (2003) Public health intervention research: the evidence. London, Health Development Agency

  • NICE (2005) Housing and public health: a review of reviews of interventions for improving health. London, National Institute for Health and Clinical Excellence

  • NICE (2005) Housing and public health: a review of reviews of interventions for improving health. Evidence briefing summary, December 2005

  • NICE (2006) Interventions to prevent accidental injury to young people aged 15-24, Evidence briefing summary. London, National Institute for Health and Clinical Excellence

  • NICE (2006) Food-support programmes for low-income and socially disadvantaged childbearing women in developed countries. Systematic review summary London, National Institute for Health and Clinical Excellence July, 2006

  • Renfrew MJ, Dyson L, Wallace L et al. (2005) The effectiveness of health interventions to promote the duration of breastfeeding: systematic review. London: National Institute for Health and Clinical Excellence.

  • Renfrew MJ, Dyson L, McFadden A et al. (2005) Evidence into practice briefing on the initiation and duration of breastfeeding: technical report. London: National Institute for Health and Clinical Excellence.

  • Wanless D (2004) Securing good health for the whole population: Final report, February, 2004. London, Department of Health

  • Task Force on Community Preventive Services (2005) The Guide to Community Preventive Services - what works to promote health? New York, OUP

  • Wallerstein N (2006) What is the evidence on effectiveness of empowerment to improve health? Copenhagen, WHO Regional Office for Europe's Health Evidence Network (HEN). February 2006

© V Speller 2007