
Quality in Primary Care 2013;21:123–30 © 2013 Radcliffe Publishing

Quality improvement science

Frameworks for improvement: clinical audit, the plan–do–study–act cycle and significant event audit
Steve Gillam MD FFPH FRCP FRCGP
Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, UK
A Niroshan Siriwardena MMedSci PhD FRCGP
Professor of Primary and Prehospital Health Care, Community and Health Research Unit (CaHRU),
University of Lincoln, UK

ABSTRACT

This is the first in a series of articles about quality improvement tools and techniques. We explore common frameworks for improvement, including the model for improvement and its application to clinical audit, plan–do–study–act (PDSA) cycles and significant event analysis (SEA), examining the similarities and differences between these and providing examples of each.

Keywords: clinical audit, general practice, primary care, plan–do–study–act cycles, quality improvement, significant event analysis

Different perspectives on quality

Quality, which has been defined as 'the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge',1 may be seen from different stakeholder perspectives. The 'desired' outcomes may be subtly different for managers, patients and clinicians. Patients clearly want treatment that works, and place a high priority on how that treatment is delivered. Clinicians focus on effectiveness, and want to provide treatment that works best for each of their patients. Managers are rightly concerned with efficiency, and seek to maximise the population health gain through best use of increasingly limited budgets. The range of different outcomes desired demonstrates the multidimensional nature of quality. The first stage in any attempt to measure quality is therefore to think about what dimensions are important for you.

Evaluating quality

Evaluation has been defined as 'a process that attempts to determine, as systematically and objectively as possible, the relevance, effectiveness and impact of activities in the light of their objectives, e.g. evaluation of structure, process and outcome, clinical trials, quality of care'. Where do we start when thinking about evaluation of a service in the National Health Service (NHS)? Avedis Donabedian distinguished four elements:2
• structure (buildings, staff, equipment)
• process (all that is done to patients)
• outputs (immediate results of medical intervention)
• outcomes (gains in health status).

Thus, for example, evaluation of the new screening algorithms for the early detection of cancer in primary care3,4 will need to consider:
• the cost of implementing the programme (additional consultations, investigations and referrals)
• the numbers of patients screened, coverage rates for defined age ranges and gender, number and proportion of patients screened who are referred, time to referral from first consultation or number of consultations before referral, numbers of true and false positives and negatives (process)
• number of new cancers identified, treatments performed (outputs)
• cancer incidence, prevalence and mortality rates, together with patient experience (outcomes).
This distinction is helpful because for many interventions it may be difficult to obtain robust data on health outcomes unless large numbers are scrutinised over long periods. For example, when evaluating the quality of hypertension management within a general practice, you may be reliant on intermediate outcome or process measures (the proportion of the appropriate population screened, treated and adequately controlled) as a proxy for health status outcomes. The assumption here is that evidence from larger-scale studies showing that control of hypertension reduces subsequent death rates from heart disease will be reflected in your own practice population's health experience. There are three main types of quality measure in health care: consumer ratings, clinical performance data, and effects on individual and population health.

The model for improvement

The Institute for Healthcare Improvement's (www.ihi.org) model for improvement provides the basis for the commonly used quality improvement techniques of clinical audit and plan–do–study–act (PDSA) cycles.5 It is summarised in three simple questions:
• What are we trying to achieve?
• How will we know if we have improved?
• What changes can we make to improve?

How these questions are applied in practical frameworks for improvement is described in more detail below.

Clinical audit

The clinical audit cycle (see Figure 1) involves measuring performance against one or more predefined criteria and standards, with repeated assessment of performance against a standard until that standard is achieved or until a new standard is set. The greatest challenge is to make necessary adjustments and re-evaluate performance—in other words, to complete the cycle.

Figure 1 The clinical audit cycle.

Clinical audit is therefore a systematic process involving the stages outlined below.

Identify the problem or issue

Selecting an audit topic should answer the question 'What needs to be improved and why?'. This is likely to reflect national or local standards and guidelines where there is definitive evidence about effective clinical practice. The topic should focus on areas where problems have been encountered in practice.

Define criteria and standards

Audit criteria are explicit statements that define what elements of care are being measured (e.g. 'Patients with asthma should have a care plan'). The standard defines the level of care to be achieved for each criterion (e.g. 'Care plans have been agreed for over 80% of patients with asthma'). Standards are usually agreed by consensus but may also be based on published evidence (e.g. childhood vaccination rates that confer population herd immunity) or on the results of a previous (local, national or published) audit.

Monitor performance

To ensure that only essential information is collected, details of what is to be measured must be established from the outset. Sample sizes for data collection are often a compromise between the statistical validity of the results and the resources available for data collection (and analysis).

Compare performance with criteria and standards

This stage identifies divergences between actual results and standards set. Were the standards met and, if not, why not?

Implement change

Once the results of the audit have been discussed, an agreement must be reached about recommendations for change. Using an action plan to record these recommendations is good practice. This should include who has agreed to do what and by when. Each point needs to be well defined, with an individual named as responsible for it, and an agreed timescale for its completion.

Complete the cycle to sustain improvements

After an agreed period, the audit should be repeated. The same strategies for identifying the sample, methods and data analysis should be used to ensure comparability with the original audit. The re-audit should demonstrate that any changes have been implemented and improvements have been made. Further changes may then be required, leading to additional re-audits. An example audit is shown in Box 1.

The PDSA cycle

The PDSA cycle takes audit one stage further (Figure 2) by focusing on the development, testing and implementation of quality improvement. The PDSA cycle involves repeated rapid small-scale tests of change, carried out in sequence (changes tested one after another) or in parallel (different people or groups testing different changes), to see whether and to what extent the changes work, before implementing one or more of these changes on a larger scale. The following stages are involved.
• First, develop a plan and define the objective (plan).
• Second, carry out the plan and collect data (do), then analyse the data and summarise what was learned (study).
• Third, plan the next cycle with necessary modifications (act).

Figure 2 The plan–do–study–act cycle.

Box 1 Audit record


Title of the audit
Audit of management of obese patients.
Reason for the choice of topic
All team members have noted the increasing prevalence of overweight and obesity across the practice
population.
Dates of the first data collection and the re-audit
1 March 2012 and 1 September 2012.
Criteria to be audited and the standards set
Criterion: The health records of adults with a BMI > 30 kg/m2 should contain a multicomponent weight-
management plan.
Standard: 100%.
According to NICE guidelines, adult patients with a BMI > 30 kg/m2 should have a documented
multicomponent weight-management plan setting out strategies for addressing changes in diet and activity
levels, developed with the relevant health care professional. The plan should be explicit about the targets for
each of the components for the individual patient and the specific strategies for that patient. A copy of the
plan should be retained in the health record and monitored by the relevant health care professional.
Results of the first data collection
Of 72 patients with documented BMI > 30 kg/m2, only 8 (11%) had copies of weight-management plans in
their records.
Summary of the discussion and changes agreed
The results were reviewed at the next clinical governance meeting, where it was felt that hard copies for the
paper record were less important than documentation of the process in the electronic record.
Results of the second data collection
Of 48 patients with BMIs > 30 kg/m2, 16 (33%) had documented weight-management plans in their
electronic record.
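The criterion and standard in Box 1 map directly onto a simple calculation of achievement against the standard. The following is a minimal sketch in Python, assuming the relevant records have already been extracted from the clinical system into a simple list; the field names are illustrative, not a real record schema, and the example figures simply mirror the first data collection in Box 1.

```python
# Minimal sketch: comparing audit results against a criterion and standard.
# Assumes records have been extracted from the clinical system; field names
# ("bmi", "has_weight_management_plan") are illustrative, not a real schema.

def audit_compliance(records, standard=1.0):
    """Apply the Box 1 criterion (adults with BMI > 30 kg/m2 should have a
    documented weight-management plan) and compare with the standard."""
    eligible = [r for r in records if r["bmi"] > 30]
    compliant = [r for r in eligible if r["has_weight_management_plan"]]
    proportion = len(compliant) / len(eligible) if eligible else 0.0
    return len(eligible), len(compliant), proportion, proportion >= standard

# First data collection in Box 1: 8 of 72 eligible patients had a plan (11%).
example = [{"bmi": 32, "has_weight_management_plan": i < 8} for i in range(72)]
n, met, prop, ok = audit_compliance(example, standard=1.0)
print(f"{met}/{n} = {prop:.0%}; standard met: {ok}")
```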

Plan

Develop a plan for the change(s) to be tested or implemented. Make predictions about what will happen and why. Develop a plan to test the change. (Who? What? When? Where? What data need to be collected?)

Do

Carry out the test by implementing the change.

Study

Look at data before and after the change. Usually this involves using run or control charts together with qualitative feedback. Compare the data with your predictions. Reflect on what was learned and summarise this.

Act

Plan the next test, determining what modifications should be made. Prepare a plan for the next test. Decide to fully implement one or more successful changes. An example is shown in Box 2.
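Each cycle pairs a prediction made at the plan stage with the data collected when the change is tested, and the decision taken at the act stage feeds the next cycle. One possible way of recording this is sketched below; the class, field names and example values are illustrative assumptions for the sketch, not a prescribed format.

```python
# Illustrative record of repeated small-scale tests of change (PDSA cycles).
# Field names and example values are assumptions, not a prescribed format.
from dataclasses import dataclass

@dataclass
class PDSACycle:
    change: str          # plan: what change is being tested, by whom and when
    prediction: str      # plan: what we expect to happen and why
    observed: float      # do/study: measured result (e.g. proportion monitored)
    decision: str = ""   # act: adopt, adapt or abandon before the next cycle

    def study(self, baseline: float) -> str:
        """Compare observed data with the baseline and record the decision."""
        self.decision = "adopt" if self.observed > baseline else "adapt or abandon"
        return self.decision

baseline = 0.70  # e.g. 70% of patients fully monitored before any change
cycles = [
    PDSACycle("Prescription reminder protocol", "monitoring will rise", 0.80),
    PDSACycle("Recall letter and follow-up appointment", "further rise", 1.00),
]
for c in cycles:
    print(c.change, "->", c.study(baseline))
    baseline = max(baseline, c.observed)  # the next cycle builds on what worked
```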

Box 2 Example of a quality improvement project


Title: Improving monitoring of azathioprine.
Date completed: 1 June 2012.
Description: This was a quality improvement project focusing on improving monitoring of commonly used
disease-modifying antirheumatic (immunosuppressant) drugs (DMARDs, i.e. methotrexate and azathioprine)
in the practice.
Reason for the choice of topic and statement of the problem: DMARDs are commonly prescribed under shared
care arrangements with specialists. The general practitioner has a responsibility for ensuring that the drugs
are appropriately monitored for evidence of myelosuppression and liver dysfunction.
Priorities for improvement and the measurements adopted: The aim of this quality improvement project was to
improve monitoring of the two most commonly used DMARDs in the practice, methotrexate and
azathioprine. The criteria agreed for monitoring were:
. methotrexate: full blood count and liver function tests performed within the previous 3 months
. azathioprine: full blood count performed within the previous 3 months; renal function within the past
6 months.
Baseline data collection and analysis: The first data collection, presented in the run and control charts from week 1 to week 6, showed inadequate blood monitoring: rates of complete blood monitoring for the 10 patients on these drugs (4 prescribed methotrexate and 6 prescribed azathioprine) were around 70% (see Figures 3 and 4).
Quality improvement: The team met to plan how to measure monitoring and how to improve this. The topic
was discussed by clinical and administrative staff. During the baseline measurements for 6 weeks,
improvements were planned. The first improvement introduced was a protocol for a search and prescription
reminder for patients on these drugs. All patients on DMARDs were put on a 3-month prescription recall,
and an automatic prescription reminder to attend for blood monitoring at every 3-month recall was set up.
Following an initial improvement to 80% compliance with monitoring, it was decided to send a written recall
letter for blood tests and a follow-up appointment with the doctor.
The results of the second data collection: The subsequent data collection showed monitoring rates consistently
at 100%.
Intervention and the maintenance of successful changes: We provided a system for more consistent monitoring
of DMARDs.
Quality improvement achieved and reflections on the process: This project enabled members of the practice to
improve their knowledge in this area. This has led to higher-quality, safer care for patients.
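The monitoring criteria agreed in Box 2 are essentially recency checks against the date of each blood test. A minimal sketch of how such a check might be automated is given below, assuming last-test dates are available for each patient; the drug names and intervals follow Box 2, while the data structure, field names and the 90/180-day approximations of "3 months" and "6 months" are illustrative assumptions.

```python
# Sketch of the Box 2 monitoring criteria as recency checks on test dates.
# Assumes last-test dates are available per patient; the structure, field names
# and day-count approximations of the intervals are illustrative.
from datetime import date, timedelta

CRITERIA = {
    "methotrexate": {"full_blood_count": 90, "liver_function": 90},   # days
    "azathioprine": {"full_blood_count": 90, "renal_function": 180},
}

def monitoring_complete(drug, last_tests, today=None):
    """True if every required test was performed within its allowed interval."""
    today = today or date.today()
    return all(
        test in last_tests and (today - last_tests[test]) <= timedelta(days=max_age)
        for test, max_age in CRITERIA[drug].items()
    )

patient = {"full_blood_count": date(2012, 4, 20), "renal_function": date(2012, 1, 15)}
print(monitoring_complete("azathioprine", patient, today=date(2012, 6, 1)))  # True
```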

Significant event analysis (SEA)

Significant event analysis is a very different approach to quality improvement that involves the structured investigation of individual episodes which have been identified by a member or members of the health care team as 'significant' (see Box 3). SEA improves the quality and safety of patient care by encouraging reflective learning and, where necessary, the implementation of change to minimise recurrence of the events in question.6 It can improve risk management, enhance patient safety and facilitate the reporting of patient safety incidents by health care practitioners.

SEA has been described as the process by which 'individual cases, in which there has been a significant occurrence (not necessarily involving an undesirable outcome for the patient), are analysed in a systematic and detailed way to ascertain what can be learnt about the overall quality of care and to indicate changes that might lead to future improvements'.7 The aim of SEA is to:
• gather and map information to determine what happened
• identify problems with health care delivery
• identify contributory factors and root causes
• agree what needs to change and implement solutions.

Box 3 Common types of significant events
• Prescribing error
• Failure to action an abnormal result
• Failure to diagnose
• Failure to refer
• Failure to deal with an emergency call
• Breach in confidentiality
• Breakdown in communication

Common causes of significant events

There are many types of significant event. Most are multifactorial in origin, and for this reason SEA often explores issues such as:
• information: e.g. potentially important data overlooked on comorbidities (e.g. previous bronchospasm when considering beta blockers), previous drug side effects or allergies, potential interactions
• patient factors: e.g. the doctor failed to check that the patient understood the reasons for treatment, the dosing, timing, stop and start dates, and knew the possible side effects
• professional factors: poor communication skills, lack of medical knowledge or skills, mistakes due to pressure of time, unnecessary interruptions, stress, etc.
• systems failure: e.g. lack of education, training or supervision, poor identification of roles and responsibilities, lack of detailed guidelines or protocols, lack of audit or regular reviews.

Six steps in SEA

1 Identify and record significant events for analysis and highlight these at a suitable meeting. Enable staff to routinely record significant events using a log book or pro forma.
2 Collect factual information, including written and electronic records, and the thoughts and opinions of those involved in the event. This may include patients or relatives or health care professionals based outside the practice.
3 Meet to discuss and analyse the event(s) with all relevant members of the team. The meeting should be conducted in an open, fair, honest and non-threatening atmosphere. Notes of the meeting should be taken and circulated. Meetings should be held routinely, perhaps as part of monthly team meetings, when all events of interest can be discussed and analysed, allowing all relevant staff to offer their thoughts and suggestions. The person you choose to facilitate a significant event meeting or to take responsibility for an event analysis will again depend on team dynamics and staff confidence.
4 Undertake a structured analysis of the event. The focus should be on establishing exactly what happened and why. The main emphasis is on learning from the event and changing behaviours, practices or systems, where appropriate. The purpose of the analysis is to minimise the chances of an event recurring. (On rare occasions it may not be possible to implement change. For example, the likelihood of the event happening again may be very small, or change may be out of your control. If so, clearly document why you have not taken action.)
5 Monitor the progress of actions that are agreed and implemented by the team. For example, if the head receptionist agrees to design and introduce a new protocol for taking telephone messages, progress on this new development should be reported back at a future meeting.
6 Write up the SEA once changes have been agreed. This provides documentary evidence that the event has been dealt with. It is good practice to attach any additional evidence (e.g. a copy of a letter or an amended protocol) to the report. The report should be written up by the individual who led on the event analysis, and should include the following:
• date of event
• date of meeting
• lead investigator
• what happened
• why it happened
• what has been learned
• what has been changed.

It is good practice to keep the report anonymous so that individuals and other organisations cannot be identified.

Purists may wish to seek educational feedback on the SEA once it has been written up. Research has repeatedly shown that around one third of event analyses are unsatisfactory, mainly because the team has failed to understand why the event happened or to take necessary action to prevent recurrence. Sharing the SEA with others, such as a group of GPs or practice managers, provides an opportunity for them to comment on your event analysis and also learn from what you have done (see Box 4).

Closely related to SEA, root cause analysis is a method of problem solving that seeks to identify the underlying causes after an event has occurred.7
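The report contents listed above translate directly into a structured pro forma. A minimal sketch follows; the class and field names are illustrative, the example values loosely paraphrase the record in Box 4, and, in keeping with the guidance on anonymity, no patient or staff identifiers are included beyond the lead investigator's initials.

```python
# Sketch of a significant event analysis (SEA) report pro forma.
# Field names follow the report contents listed above; the class and the
# example values (paraphrased from Box 4) are illustrative only.
from dataclasses import dataclass, asdict

@dataclass
class SEAReport:
    date_of_event: str
    date_of_meeting: str
    lead_investigator: str
    what_happened: str
    why_it_happened: str
    what_has_been_learned: str
    what_has_been_changed: str

report = SEAReport(
    date_of_event="2014-02-15",
    date_of_meeting="2014-03-05",          # assumed meeting date for the sketch
    lead_investigator="AB",
    what_happened="Raised fasting glucose not acted on when first recorded",
    why_it_happened="Template did not distinguish fasting from random glucose",
    what_has_been_learned="Ambiguous templates make missed results more likely",
    what_has_been_changed="Template amended to separate fasting and random glucose",
)
print(asdict(report))
```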

Box 4 Significant event analysis record


Date of report: 12 March 2014.
Reporter: AB.
Patient identifier: 1234.
Date of event: 15 February 2014.
Summary of event: Whilst entering data on her template, Nurse X noticed a previous glucose of 8.7 recorded
on 3.2.01. She initially assumed that this was normal because the template did not distinguish between fasting
and random glucose tests. She checked the result and found that it was in fact a fasting glucose, which may
have indicated diabetes. Nurse X explained the problem to the patient, apologised and checked whether she
had any symptoms or complications. The patient was adhering to her diet and was asymptomatic. Nurse X
arranged for a repeat fasting glucose, cholesterol, thyroid function, electrolytes and HbA1c, and to review the
patient with the results of these investigations. The fasting glucose came back as 8.8 mmol/l (normal
< 6 mmol/l), confirming diabetes.
Discussion points: The template was unclear and made this error more likely.
Agreed action points: Adjust template to distinguish between fasting and random glucose.
Responsible person: AB.

Table 1 Comparison of clinical audit, PDSA and SEA

                                  Clinical audit                PDSA                            SEA
Example triggers                  Significant event,            Significant event,              Significant event (critical
                                  previous audit,               previous audit,                 incident, complaint, success)
                                  clinical guideline            clinical guideline
Review of evidence for change     Yes                           Yes                             Sometimes
Criteria                          Yes                           Yes                             No
Standards                         Yes                           No                              No
Type of measurement               Before and after              Continuous (statistical         No: detailed review of a
                                                                process control)                single event
Change implementation strategy    Change(s) implemented         Often multiple changes          Recommendation for change
                                  together after first audit    conducted in sequence           in policy, protocol,
                                                                or parallel                     structure or behaviour
Cyclical                          Yes                           Yes                             No
Ideal outcome                     To meet or exceed             To achieve improvement          Analysis leading to
                                  standard                      from baseline                   change in process

Clinical audit, PDSA and SEA compared

All three techniques involve gaining a deeper understanding and reflecting on what we are trying to achieve and what changes can be made to improve (see Table 1). SEA is now routinely used in UK general practice as part of the requirement for the revalidation of doctors. Clinical audit is also commonly used, although unfortunately many 'audits' do not complete the cycle. PDSA cycles are less well understood by many practitioners, and most have little practical experience of PDSA. Clinical audit and PDSA use a measurement process before and after implementing one or more changes to assess whether improvement has actually occurred. However, this is usually a single measure before and after the change in clinical audit, whereas PDSA involves continuous repeated measurement using statistical process control with run or control charts (see Figures 3 and 4).
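Where run or control charts are used, the control limits for a proportion can be derived from the pooled baseline rate. The sketch below shows one way a p-chart calculation might be set up, assuming weekly counts of fully monitored patients out of those due; the weekly figures are illustrative and are not the data behind Figures 3 and 4.

```python
# Sketch of p-chart (control chart for a proportion) limits from weekly counts.
# The weekly figures are illustrative; they are not the Figure 3/4 data.
from math import sqrt

monitored = [7, 7, 8, 7, 7, 8, 8, 9, 10, 10]   # patients fully monitored each week
due = [10] * len(monitored)                     # patients due for monitoring each week

baseline_weeks = 6                              # weeks 1-6 form the baseline period
p_bar = sum(monitored[:baseline_weeks]) / sum(due[:baseline_weeks])

for week, (x, n) in enumerate(zip(monitored, due), start=1):
    p = x / n
    sigma = sqrt(p_bar * (1 - p_bar) / n)
    ucl = min(1.0, p_bar + 3 * sigma)           # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)           # lower control limit
    flag = "possible special cause" if not (lcl <= p <= ucl) else ""
    print(f"week {week}: p={p:.2f}  limits=({lcl:.2f}, {ucl:.2f}) {flag}")
```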
Figure 3 Run chart showing the effect of quality improvement in the monitoring of azathioprine. Annotations: W1, discussion at practice meeting; W6, protocol for drug search and prescription reminder; W8, recall letter for blood test and follow-up appointment with doctor.

Figure 4 Control chart showing the effect of quality improvement in the monitoring of azathioprine. W1, week 1; W6, week 6; W8, week 8.

SEA should ideally lead to changes in policy or practice but does not involve measuring the effects of this. The main difference between clinical audit and PDSA is that audit involves implementation of change after the first measurement followed by a further measurement, whereas PDSA involves continuous measurement during implementation of multiple changes conducted in sequence (i.e. one after the other) or in parallel (i.e. different individuals or groups implementing different changes at the same time).

REFERENCES

1 Gray JAM. Evidence-Based Healthcare: how to make health policy and management decisions. Churchill Livingstone: New York, 1997.
2 Donabedian A. Evaluating the quality of medical care. Milbank Memorial Fund Quarterly 1966;44(Suppl):206.
3 Hippisley-Cox J, Coupland C. Symptoms and risk factors to identify women with suspected cancer in primary care: derivation and validation of an algorithm. British Journal of General Practice 2013;63:11–21.
4 Hippisley-Cox J, Coupland C. Symptoms and risk factors to identify men with suspected cancer in primary care: derivation and validation of an algorithm. British Journal of General Practice 2013;63:1–10.
5 Institute for Healthcare Improvement. Science of Improvement: testing changes.
6 Pringle M, Royal College of General Practitioners. Significant Event Auditing: a study of the feasibility and potential of case-based auditing in primary medical care. Royal College of General Practitioners: London, 1995.
7 National Patient Safety Agency. Root Cause Analysis (RCA) Tools: getting started. Department of Health: London.

PEER REVIEW

Commissioned; not externally peer reviewed.

CONFLICTS OF INTEREST

None declared.

ADDRESS FOR CORRESPONDENCE

Steve Gillam, Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, Robinson Way, Cambridge CB2 2SR, UK. Email: sjg67@medschl.cam.ac.uk

Received 7 February 2013
Accepted 21 February 2013
