See also the section on study design for assessing the effectiveness, efficiency and acceptability of services.
A) Principles of evaluation
There are several problems which can arise when undertaking evaluation in practice, including:
- How the evaluation will sit within national and local policy context (it can be difficult to isolate an intervention or programme from the context within which it operates)
- Politics - some stakeholders may have biases about which outcomes should be reported
- Practicalities - finance, time, resources, and the level of detail required
- Research versus practice - ensuring that the evaluation is undertaken in a rigorous and robust way can sometimes be compromised by the practicalities of running a service
- Scope - the evaluation should go beyond being just an audit: ideally it should relate the separate parts of the evaluation framework to each other (e.g. inputs to outcomes), rather than only checking whether processes are occurring as they should
- Assessment - it can be difficult to measure what you want to measure in practice
- The evaluation may be wider than existing performance management processes requiring the collection of additional data.
There are several frameworks for conducting an evaluation. While there are no hard and fast rules about which framework best suits which situation, some lend themselves better to certain settings. Three different frameworks are outlined below:
Frameworks for evaluation
- Donabedian (e.g. evaluating a programme): structure/inputs > process > outputs > outcomes
- Black (e.g. priority setting): effectiveness (efficacy), efficiency, equity, humanity
- Maxwell (e.g. a screening programme): effectiveness (efficacy), efficiency, equity, access, acceptability, appropriateness
Choosing and designing an evaluation are discussed in more detail in section ‘Study design for assessing effectiveness’ of this website.
B) Quality assessment and quality assurance
Donabedian, Black and Maxwell all make slightly different judgements on the important factors to include in an evaluation, highlighting the different perspectives on quality that different stakeholders have. Consumers, practitioners and commissioners will all have different perspectives on quality. For instance, centralisation of services may be more efficient from a commissioner perspective, but may represent reduced accessibility from a patient perspective.
Quality assessment is part of the quality management process, and quality management processes are intrinsic to a quality management system. A quality management system may consist of policies and protocols to ensure that a service or intervention is optimally delivered, and will incorporate indicators to demonstrate whether this is being achieved in the early, mid and end stages of an intervention or programme. Such indicators need to be reported and fed back into the loop so that quality improvements can be made continually.
Some examples of quality assessment:
- Healthcare organisations’ performance against national targets and indicators
- GPs - performance as measured by the Quality and Outcomes Framework (QOF)
Quality assurance is the process of guaranteeing quality. If quality indicators are being measured and quality standards reached, through the quality management process, then quality assurance can be given.
Some examples of quality assurance:
- National Cancer Screening programmes quality assurance guidelines 
- NICE guidance and Health Technology Assessments
Quality assessment is important for safety and processes of care - it is important to know when errors occur, why they occur, and what to do to prevent them happening again. Clinical governance is the quality management system for ensuring clinical quality within an organisation: for example, ensuring that processes, policies and protocols are in place (such as checks that the correct kidney is removed) and that they are put into action to achieve good clinical outcomes and maximise patient safety.
Setting quality standards
Benchmarking clinical standards is not always straightforward. While clinical guidelines set out appropriate steps in a patient pathway, usually based on systematic reviews of evidence, assessing clinical outputs in terms of quality is harder. What should performance be benchmarked against - average performance, or a threshold? Should there be a minimum safe standard that most will achieve, or an aspirational standard of excellence? In practice, standards should fulfil 2 criteria to be useful:
- It must be clear how individuals can achieve the standard
- The standard must be achievable
References:
- www.cancerscreening.nhs.uk/cgi-bin/search/fmsearch.cgi
- Pencheon et al. Oxford Handbook of Public Health Practice, 2nd edition, chapter 6, p437.
© Rosalind Blackwood 2009, Claire Currie 2016