Use of information for health service planning and evaluation


Systems framework for health services planning

It is useful to consider a health service as a dynamic input-output system, with people moving through a series of encounters. There are various models (theoretical frameworks) for this[i]:

Input, Process, Output (e.g., as in manufacturing cars)
Structure, Process, Outcome (Donabedian quality framework)
Need, Demand, Use, Outcome (Logan et al., Dynamics of Medical Care[ii])

Planning
Commissioning or planning health care begins with an assessment of need[1].

This requires information on:

  • Size and structure of underlying population
  • Areas of met and unmet need
  • Incidence and / or prevalence of disease(s) of interest
  • Effectiveness of interventions available
  • Relative cost-effectiveness of those interventions
  • Current services, including costs, capacity, quality, effectiveness, efficiency
  • Prioritisation processes, including political priorities such as waiting times, service targets, etc.
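
The incidence and prevalence figures in the list above are simple ratios; a minimal sketch, using entirely hypothetical figures:

```python
# Hypothetical figures, for illustration only: the basic incidence and
# prevalence calculations behind the needs-assessment list above.

population = 250_000       # assumed mid-year local population
existing_cases = 5_000     # people living with the condition at a point in time
new_cases_in_year = 750    # newly diagnosed during the year

# Point prevalence: proportion of the population with the condition now
point_prevalence = existing_cases / population

# Incidence rate: new cases per 1,000 population per year
incidence_rate = new_cases_in_year / population * 1000

print(f"prevalence {point_prevalence:.1%}, incidence {incidence_rate:.1f} per 1,000/yr")
```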

Typically, this would be in the context of an assessment of the population's overall health, which requires identifying and reviewing data sources such as:

  • Local data, such as a local specialised survey
  • Routine local and / or national statistics
  • Ad hoc data
  • Relevant published surveys
  • Qualitative data as well as quantitative sources
  • Trends in incidence and outcome

When data are not available directly, such as when future demand needs to be predicted or when estimating the impact of service re-design, the available information may be used to formulate the assumptions of a model, and to calibrate it.  See Uses of Mathematical Modelling.
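
As a minimal illustration of such a model, future demand can be projected by applying current age-specific utilisation rates to a projected population. All rates and population figures below are hypothetical:

```python
# Illustrative sketch (all figures hypothetical): projecting future demand
# for a service by applying current age-specific utilisation rates to a
# projected population -- the simplest form of demand model.

# Current age-specific utilisation rates per 1,000 population per year (assumed)
rates_per_1000 = {"0-44": 5.0, "45-64": 20.0, "65+": 60.0}

# Hypothetical projected local population in five years' time
projected_population = {"0-44": 120_000, "45-64": 60_000, "65+": 30_000}

def projected_demand(rates, population):
    """Expected annual episodes = sum over age bands of rate x population."""
    return sum(rates[band] / 1000 * population[band] for band in rates)

print(round(projected_demand(rates_per_1000, projected_population)))  # 3600
```

In practice the rates themselves would also be calibrated against the available local data, and the assumptions tested in sensitivity analyses.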
 

Issues to consider when using information for local health service planning include:

  • The applicability of national data to the local population and conditions. If only national data are available, when applied locally the data may need to be adjusted for the composition of the population, such as age, ethnicity and deprivation.
  • Many data sources cannot give any indication of who has the health condition of interest but is not accessing services (unmet need).
  • National or regional data may mask differences/inequalities at smaller geographical levels such as wards.
  • Data are subject to random fluctuations, and small-area data may be based on very small numbers. Indicators of precision, such as confidence intervals, should always be used in these cases.
  • Confidentiality issues with small numbers may mean the information is suppressed to prevent disclosure when the data source is used in combination with other sources.
  • How the data have been obtained.
  • For what purpose the data were originally collected.  If this differs from the new purpose, the data may contain biases that hamper the new analysis.  For example, a clinician records patient information to help treat that individual, not for population-based analyses, which can result in inconsistent coding.  There are also legal restrictions on using personal data for purposes other than those for which they were collected.
  • How timely the data are.
  • How complete the data are.
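
Two of the points above, adjusting national data for the local population's composition and attaching indicators of precision to small counts, can be sketched together. The figures are entirely hypothetical; the confidence interval uses Byar's approximation to the exact Poisson interval:

```python
import math

# Illustrative sketch (all figures hypothetical): applying national
# age-specific rates to a local population (indirect standardisation) and
# attaching a 95% confidence interval to the resulting standardised ratio.

national_rates = {"0-64": 1.0, "65+": 10.0}        # cases per 1,000 per year
local_population = {"0-64": 40_000, "65+": 5_000}
observed_local_cases = 105                          # hypothetical local count

def expected_cases(rates, population):
    """Expected local cases if national rates applied to the local population."""
    return sum(rates[band] / 1000 * population[band] for band in rates)

def smr_with_ci(observed, expected, z=1.96):
    """Observed/expected ratio, with Byar's approximation to the exact
    Poisson confidence interval for the observed count."""
    lower = observed * (1 - 1 / (9 * observed)
                        - z / (3 * math.sqrt(observed))) ** 3
    upper = (observed + 1) * (1 - 1 / (9 * (observed + 1))
                              + z / (3 * math.sqrt(observed + 1))) ** 3
    return observed / expected, lower / expected, upper / expected

e = expected_cases(national_rates, local_population)   # about 90 expected cases
smr, lower, upper = smr_with_ci(observed_local_cases, e)
print(f"SMR = {smr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

Here the interval comfortably spans 1, a reminder that an apparently raised local ratio based on small numbers may be consistent with chance.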
     

Evaluation
The process of evaluation and monitoring should be continuous, and lessons learned should inform future developments.  A crucial part of evaluating health care is assessing its quality.  (In the UK, 10% of inpatient episodes lead to unintended harm; around half are preventable[3].)

'Quality of care is the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge' (Lohr KN, ed. Medicare: A Strategy for Quality Assurance, Volume II: Sources and Methods. Institute of Medicine Committee to Design a Strategy for Quality Review and Assurance in Medicare. Washington (DC): National Academies Press, 1990). https://www.ncbi.nlm.nih.gov/books/NBK235476/ [Accessed 21/08/2018]

The quality of services can be evaluated using the Donabedian[4] framework of 'Structure - Process - Outcome'; all three elements are required to give an overall picture of the quality of a service.
 

Structure: the provision of facilities and staff available.  This could include:

  • numbers of hospital beds
  • ratio of doctors, nurses and other staff to patients and to each other
  • comparisons across different geographical regions
  • ease of access to, and opening hours of, a clinic or screening service.
     

Process: what is done for and to a patient or a population, and how well.  This could include:

  • methods by which patients are identified as suitable to be recorded on a disease register
  • how thoroughly the diagnostic criteria had been determined
  • validity and reliability of diagnostic tests
  • time taken from diagnosis to treatment
  • frequency of patient follow up
  • how people are recruited and involved in a process, e.g. a screening or immunisation programme or substance misuse treatment.
     

Outcomes:

  • Did the patient get better and by how much did the patient’s health-related Quality of Life change?
  • Were there complications?
  • Did the patient feel satisfied with the service?
  • Has there been a recurrence?
  • Was there a reduction in incidence in a population?
  • Has coverage improved?
  • Has, for example, life expectancy for the patient's age group increased? (https://en.wikipedia.org/wiki/Life_expectancy [accessed 23/08/2018])
     

Maxwell's dimensions of quality[5] take account of the population as well as patient care. https://www.bmj.com/content/bmj/288/6428/1470.full.pdf [accessed 21/08/2018]

The dimensions include:

  • Access to services (for example, taking a population-based approach, do some sub-groups find services easier to access than other sub-groups?  This could be about the physical location of services as well as different attitudes on seeking care)
  • Relevance to need (for the whole community)
  • Effectiveness (on an individual patient basis)
  • Equity (could the service in any way be made more fair? Is it reaching different population groups?)
  • Acceptability (is this a procedure which many find too uncomfortable/embarrassing/ painful to undertake and so avoid treatment?)
  • Efficiency and economy (is the service making the best use of available funds?)

Example of where the structure, process and outcome framework could be applied: the death of a patient due to maladministration of an anti-cancer drug.[2]

In this case a drug which should only ever be administered into a vein was inserted into the spine.  The patient died and the investigation found 40 system failures.
 

Structure:
The syringes for the two different methods of injecting were labelled in a confusing manner.  No senior doctor was present.
 

Process:
Two junior doctors were allowed to administer these drugs without adequate training.  No formal induction programme was in place for junior doctors starting at the hospital.
 

Outcome:
Death of a teenager.

Evaluations can lead to service re-design which then needs to be monitored regularly, such as:

  • Clearer protocols
  • Formal training
  • Induction programmes
  • Redesign of equipment
  • Re-balancing of junior and senior staff.
     

Information sources for evaluating health services

Evaluation involves comparison of the actual outcome of an intervention with the intended outcome. While it often involves an economic component, this is by no means the whole of evaluation.

Depending on what needs to be evaluated, the information sources listed in the planning section above will be useful to varying degrees.  For example, if patients' post-operative mortality rates are being measured and compared with other areas, Hospital Episode Statistics (HES) will be required, with linkage to ONS mortality data since HES only captures mortality within hospital stays.
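
A minimal sketch of why that linkage matters, using invented records and deliberately simplified fields: hospital data alone capture only in-hospital deaths, while the linked measure also counts deaths after discharge:

```python
from datetime import date

# Minimal sketch (invented records, simplified fields): computing 30-day
# post-operative mortality by linking hospital episodes to death
# registrations. Hospital data alone miss deaths occurring after discharge.

episodes = [  # one record per operation: (patient_id, operation_date, died_in_hospital)
    ("p1", date(2018, 3, 1), False),
    ("p2", date(2018, 3, 5), True),    # death captured by the hospital record
    ("p3", date(2018, 3, 10), False),  # died at home on day 20: missed by hospital data
]
deaths = {"p2": date(2018, 3, 8), "p3": date(2018, 3, 30)}  # linked death registrations

def mortality_30_day(episodes, deaths):
    """Proportion of operations followed by death within 30 days, using linked deaths."""
    died = sum(
        1 for pid, op_date, _ in episodes
        if pid in deaths and (deaths[pid] - op_date).days <= 30
    )
    return died / len(episodes)

in_hospital_only = sum(d for *_, d in episodes) / len(episodes)
print(in_hospital_only, mortality_30_day(episodes, deaths))  # linkage finds the extra death
```

In real HES-ONS linkage the matching is done on identifiers such as NHS number rather than a simple key, but the principle is the same.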

Patient satisfaction may be determined by means of a survey which captures qualitative information.

If access to a service is under-represented for one ethnic group compared with what would be expected for that population, then ethnicity needs to be captured consistently as part of an appropriate process (e.g. when it is asked for, how it is asked for, which ethnic group categories are used).  However, there are legal restrictions on collecting these data: hospitals may only collect them for admitted patients, not for outpatients or patients awaiting services.

Nationally, any health care system must have in place robust mechanisms for assuring the quality and safety of services provided to patients that enable comparisons across health care institutions and geographical regions.  This requires consistent methods of capturing and coding data.  Within the NHS, a framework for ensuring health care quality includes:

  • Clear national standards (NICE clinical guidelines, National Service Frameworks supported by the National Clinical Audit Programme).
  • Local Clinical governance mechanisms for implementing quality assurance programmes as well as patient safety and quality improvements.
  • Inspection and audit programmes (Healthcare Commission, which has now closed. Its responsibilities were taken over by the Care Quality Commission in 2009. https://www.gov.uk/government/organisations/healthcare-commission [accessed 21/08/2018])
  • National audits such as the confidential enquiries into Perioperative Deaths, Maternal deaths, Suicide and Homicide by People with Mental Illness.
     

Monitoring
In addition to clinical audits and inspections, on-going monitoring of public health issues can be undertaken using a variety of websites and databases.

NHS providers routinely send data to commissioners through the NHS Digital Secondary Uses Service (SUS) https://digital.nhs.uk/services/secondary-uses-service-sus [accessed 21/08/2018].

Independent providers of services such as terminations of pregnancy should supply datasets detailing their activities to support their invoices, and these data are used to monitor progress against plans. These data include clinical, personal, and financial data, but patients are anonymised before transmission.

Incidence of communicable disease was monitored continuously by the Health Protection Agency which has become part of Public Health England (PHE). PHE now monitors infectious diseases https://www.gov.uk/topic/health-protection/infectious-diseases  [accessed 21/08/2018]. It is informed through the notifiable diseases process https://www.gov.uk/guidance/notifiable-diseases-and-causative-organisms-how-to-report [accessed 21/08/2018].

The National Treatment Agency for Substance Misuse, set up in 2001, collated and analysed data on a monthly basis that local drug action teams used to monitor their services against targets.

It ceased to exist in 2013 and its key functions transferred to Public Health England. https://en.wikipedia.org/wiki/National_Treatment_Agency_for_Substance_Misuse [accessed 24/08/2018]
 

References

[1] Guest C et al. Oxford Handbook of Public Health. 3rd Edition. Oxford University Press, 2013.
[2] Donaldson LJ, Donaldson RJ. Essential Public Health. 2nd Edition (Revised). Petroc Press, 2003.
[3] Vincent C, Neale G, Woloshynowych M Adverse events in British hospitals: preliminary retrospective record review. BMJ 2001; 322:517-19
[4] Donabedian A. (1988) The quality of care: how can it be assessed? Journal of the American Medical Association 260: 1743-8.
[5] Maxwell R. 'Quality Assessment in Health', British Medical Journal, Vol. 288, 1984.

 

 

                                     © M Goodyear & N Malhotra 2007, D Lawrence 2018