Frameworks for improvement: clinical audit, the plan–do–study–act cycle and significant event audit

S Gillam and AN Siriwardena
ABSTRACT
This is the first in a series of articles about quality improvement tools and techniques. We explore common frameworks for improvement, including the model for improvement and its application to clinical audit, plan–do–study–act (PDSA) cycles and significant event analysis (SEA), examining the similarities and differences between these and providing examples of each.

Keywords: clinical audit, general practice, primary care, plan-do-study-act cycles, quality improvement, significant event analysis
Quality, which has been defined as 'the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge',1 may be seen from different stakeholder perspectives. The 'desired' outcomes may be subtly different for managers, patients and clinicians. Patients clearly want treatment that works, and place a high priority on how that treatment is delivered. Clinicians focus on effectiveness, and want to provide treatment that works best for each of their patients. Managers are rightly concerned with efficiency, and seek to maximise the population health gain through best use of increasingly limited budgets. The range of different outcomes desired demonstrates the multidimensional nature of quality. The first stage in any attempt to measure quality is therefore to think about what dimensions are important for you.

Evaluation has been defined as 'a process that attempts to determine, as systematically and objectively as possible, the relevance, effectiveness and impact of activities in the light of their objectives, e.g. evaluation of structure, process and outcome, clinical trials, quality of care'. Where do we start when thinking about evaluation of a service in the National Health Service (NHS)? Avedis Donabedian distinguished four elements:2

- structure (buildings, staff, equipment)
- process (all that is done to patients)
- outputs (immediate results of medical intervention)
- outcomes (gains in health status).

Thus, for example, evaluation of the new screening algorithms for the early detection of cancer in primary care3,4 will need to consider:

- the cost of implementing the programme (additional consultations, investigations and referrals)
- the numbers of patients screened, coverage rates for defined age ranges and gender, number and proportion of patients screened who are referred, time
The PDSA cycle

The PDSA cycle takes audit one stage further (Figure 2) by focusing on the development, testing and implementation of quality improvement. It involves repeated rapid small-scale tests of change, carried out in sequence (changes tested one after another) or in parallel (different people or groups testing different changes), to see whether and to what extent the changes work, before implementing one or more of these changes on a larger scale. The following stages are involved, as the sketch after this list illustrates.

- First, develop a plan and define the objective (plan).
- Second, carry out the plan and collect data (do), then analyse the data and summarise what was learned (study).
- Third, plan the next cycle with necessary modifications (act).
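As a rough illustration (ours, not the authors'), the Python sketch below mirrors these stages for a series of small tests of change carried out in sequence; the change descriptions and all proportions are invented.

```python
# A minimal sketch, with invented names and figures, of sequential PDSA
# cycles: each cycle trials one small change, studies its effect on a
# measured proportion, and acts by adopting or discarding the change.

def pdsa_cycles(baseline: float, trials: dict[str, float]) -> float:
    """Keys are planned changes (plan); values are the proportions measured
    after each trial (do), supplied here as invented data."""
    current = baseline
    for change, measured in trials.items():
        improvement = measured - current          # study: compare with before
        if improvement > 0:                       # act: keep what worked ...
            print(f"adopt '{change}': {current:.0%} -> {measured:.0%}")
            current = measured
        else:                                     # ... modify or discard the rest
            print(f"discard '{change}': no improvement ({measured:.0%})")
    return current

# Usage: three small tests of change carried out in sequence.
final = pdsa_cycles(0.70, {
    "nurse-led recall letters": 0.78,
    "phone reminders": 0.76,
    "electronic alerts on records": 0.92,
})
print(f"performance after PDSA cycles: {final:.0%}")
```

In practice the 'do' stage would be a fresh audit of records after each change has been trialled, rather than a figure supplied up front.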
Clinical audit, PDSA and SEA compared

All three techniques involve gaining a deeper understanding of, and reflecting on, what we are trying to achieve and what changes can be made to improve (see Table 1). SEA is now routinely used in UK general practice as part of the requirement for the revalidation of doctors. Clinical audit is also commonly used, although unfortunately many 'audits' do not complete the cycle. PDSA cycles are less well understood by many practitioners, and most have little practical experience of PDSA. Clinical audit and PDSA use a measurement process before and after implementing one or more changes to assess whether improvement has actually occurred. However, this is usually a single measure before and after the change in clinical audit, whereas PDSA involves continuous repeated measurement using statistical process control with run or control charts (see Figures 3 and 4).
Figure 3 Run chart showing the effect of quality improvement in the monitoring of azathioprine.
Figure 4 Control chart showing the effect of quality improvement in the monitoring of azathioprine. W1, week 1; W6, week 6; W8, week 8.
SEA should ideally lead to changes in policy or practice, but does not involve measuring the effects of this. The main difference between clinical audit and PDSA is that audit involves implementation of change after the first measurement, followed by a further measurement, whereas PDSA involves continuous measurement during implementation of multiple changes conducted in sequence (i.e. one after the other) or in parallel (i.e. different individuals or groups implementing different changes at the same time).
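To make the distinction between a run chart and a control chart concrete, the sketch below (our illustration; the weekly figures are invented, not taken from the study) computes the run-chart centre line and p-chart control limits for a weekly monitoring proportion.

```python
from statistics import median

# Invented data: weekly counts of patients whose azathioprine monitoring
# is up to date, out of 25 patients due each week (16 weeks, as in the
# figures). None of these numbers comes from the article.
monitored = [18, 19, 17, 18, 20, 24, 23, 25, 24, 25, 24, 25, 25, 24, 25, 25]
eligible = [25] * 16
props = [m / n for m, n in zip(monitored, eligible)]

# Run chart: each weekly proportion is plotted against the series median.
centre_run = median(props)

# p-chart (control chart for proportions): the centre line is the overall
# mean proportion, with control limits three standard errors either side.
p_bar = sum(monitored) / sum(eligible)
for week, (p, n) in enumerate(zip(props, eligible), start=1):
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5
    ucl = min(1.0, p_bar + 3 * sigma)  # upper control limit, capped at 100%
    lcl = max(0.0, p_bar - 3 * sigma)  # lower control limit, floored at 0%
    flag = "special cause" if not lcl <= p <= ucl else ""
    print(f"week {week:2d}: {p:6.0%}  limits [{lcl:.0%}, {ucl:.0%}]  {flag}")
```

A point falling outside the control limits, or a sustained run on one side of the median, signals special-cause variation worth investigating.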
REFERENCES

1 Gray JAM. Evidence-Based Healthcare: how to make health policy and management decisions. Churchill Livingstone: New York, 1997.
2 Donabedian A. Evaluating the quality of medical care. Milbank Memorial Fund Quarterly 1966;44(Suppl):206.
3 Hippisley-Cox J, Coupland C. Symptoms and risk factors to identify women with suspected cancer in primary care: derivation and validation of an algorithm. British Journal of General Practice 2013;63:11–21.
4 Hippisley-Cox J, Coupland C. Symptoms and risk factors to identify men with suspected cancer in primary care: derivation and validation of an algorithm. British Journal of General Practice 2013;63:1–10.
5 Institute for Healthcare Improvement. Science of Improvement: testing changes.
6 Pringle M, Royal College of General Practitioners. Significant Event Auditing: a study of the feasibility and potential of case-based auditing in primary medical care. Royal College of General Practitioners: London, 1995.
7 National Patient Safety Agency. Root Cause Analysis (RCA) Tools: getting started. Department of Health: London.

PEER REVIEW

Commissioned; not externally peer reviewed.

CONFLICTS OF INTEREST

None declared.