Understanding Audit
No. 5
October 2003
1. Background
Clinical governance provides a framework for accountability and quality improvement. While
research is concerned with discovering the right thing to do, audit is concerned with ensuring that
the right thing is done.1 A First Class Service2 outlined structures within the National Health
Service (NHS) for setting standards: the National Institute for Clinical Excellence (NICE) and the
National Service Frameworks (NSFs); and for monitoring performance: the Commission for Health
Improvement (CHI) and the Performance Assessment Framework (PAF). Analogous mechanisms
have been established in Scotland.2 Modifications have been under way in response to the Kennedy
report,3 which recommended bringing together national audit and assessment activity within a
single independent organisation, the Commission for Healthcare Audit and Inspection (CHAI), in
April 2004.4 This organisation will provide external audit and quality assurance mechanisms for
the NHS.
National clinical audit funding will cover central costs (e.g. database design, tools for data
collection, transmission and feedback), data analysis, ‘benchmarking’, project management and
administration. This funding will not cover local data collection or resource implications of
change. However, as good medical practice, individual doctors are required to undertake clinical
audit.5 NHS trusts are required to ensure that hospital doctors take part in national clinical audits,
confidential enquiries and audit programmes endorsed by the CHI. Obstetricians and
gynaecologists therefore need to understand the principles of clinical audit. In this document, the
issues surrounding audit methodology, organisation and implementation of change are discussed.
2. Definition
Clinical audit is a quality improvement process that seeks to improve patient care and outcomes
through systematic review of care against explicit criteria and the implementation of change.
Aspects of the structure, processes and outcomes of care are selected and systematically evaluated
against explicit criteria. Where indicated, changes are implemented at an individual, team or
service level and further monitoring is used to confirm improvement in healthcare delivery.6
3. Evidence
A review of the evidence by NICE concluded that audit is an effective method for improving the
quality of care. The same review also described the audit methods associated with successful audit
projects.6 These findings are drawn upon in this document to give practical advice for
undertaking audit.
Despite all the difficulties associated with the interpretation of outcome measures, mortality
and morbidity measures are important and this is a major justification for regular monitoring.
‘Critical incident’ or ‘adverse event’ reporting involves the identification of patients in whom an
adverse event has occurred, as in the Confidential Enquiries into Maternal Deaths (CEMD) and
the Confidential Enquiry into Stillbirths and Deaths in Infancy (CESDI).
[Figure: elements that feed into the audit cycle: guidelines, evidence, benchmarking, consensus,
process redesign and sampling]
In selecting a topic for audit, priority should be given to common health concerns, areas
associated with high rates of mortality, morbidity or disability, and those where good research
evidence is available to inform practice or aspects of care that use considerable resources. It is
important to involve those who will be implementing change at this stage of the audit process.
Examples of audit topics and associated recommendations:

Induced abortion: Screening for lower genital tract organisms, and treatment of positive cases,
should be carried out among women undergoing induced abortion to reduce post-abortion
infective morbidity.

Caesarean section: A thromboprophylaxis strategy should be part of the management of women
delivered by caesarean section.

Hysterectomy: Transcervical resection of the endometrium or endometrial ablation should be
available and offered to women with dysfunctional uterine bleeding as an alternative to
hysterectomy.
Target levels of performance have been most used in screening programmes. For example, in
screening for cervical cancer there are quality criteria to be met, such as the proportion of
cervical smears that have endocervical cells.
The term ‘standard’ has been used to refer to different concepts: sometimes as an alternative
word for ‘clinical guidelines’ or ‘review criteria’, either with or without a stated target level
of performance and, somewhat confusingly, also to refer to the observed or desired level of
performance. In the interests of clarity, it has been defined here as ‘the percentage of events
that should comply with the criterion’.
Data collectors should always be aware of their responsibilities under the Data Protection Act12
and any locally agreed guidelines. There are also nationally agreed guidelines, known as the
Caldicott Principles, a key part of clinical and information governance. The six Caldicott
Principles are:
1. Justify the purpose(s). Every proposed use or transfer of patient-identifiable information
within or from an organisation should be clearly defined and scrutinised, with continuing
uses regularly reviewed by an appropriate guardian.
2. Do not use patient-identifiable information unless it is absolutely necessary.
3. Use the minimum necessary patient-identifiable information. Where use of patient-
identifiable information is considered to be essential, each individual item of information
should be justified with the aim of reducing identifiability.
4. Access to patient-identifiable information should be on a strict need-to-know basis. Only
those individuals who need access to patient-identifiable information should have access to
it, and they should only have access to the information items that they need to see.
5. Everyone should be aware of their responsibilities. Both clinical and nonclinical staff
should be aware of their responsibilities and obligations to respect patient confidentiality.
6. Understand and comply with the law. Someone in each organisation should be responsible
for ensuring that the organisation complies with legal requirements.
Under the Data Protection Act 1998, it is an offence to collect personal details of patients,
such as name, address or other items that could identify the individual, without consent. There
seems to be a consensus that clinical audit is part of direct patient care and that consent to
the use of data for audit can therefore be implied through consent to treatment.
Routinely collected data can be used if all the data items required are available. It will be
necessary to check the definitions for data items that are used within the routine database to
ensure its usefulness for the aims of the audit. Also, the completeness and coverage of the
routine source need to be known.
Where the data source is clinical records, training of data abstractors and use of a standard
pro forma can improve accuracy and reliability of data collection. The use of multiple sources
of data may also be helpful. However, this can also be problematic, as it will require linking
of data from different sources with common unique identifiers.
Questionnaire surveys of staff or patients are often used for data collection. There are several
validated questionnaires on a wide range of topics that may be adapted to a specific audit
project. There is also literature on developing these (see Appendix).
Consideration also needs to be given to the coding of responses on the database. For ease of
analysis of closed questions it is generally better to have numeric codes for responses. For
example, yes/no responses can be coded to take the value 0 for no and 1 for yes. Missing data
should be given a distinct code so that they can be distinguished from genuine responses during
analysis.
Data items that have categorical responses (e.g. yes/no or A/B/C/D) can be expressed as
percentages. Some data items are collected as continuous variables; for example, mother’s
age, height and weight. These can either be categorised into relevant categories and then
expressed as percentages or, if they are normally distributed, the mean and standard
deviations can be reported. These summary statistics (percentages and means) are useful for
describing the process, outcome or service provision that was measured.
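As a minimal sketch of the coding and summary statistics described above (assuming Python with pandas, which is not one of the tools named in this document; the variable names and data are invented for illustration):

```python
import pandas as pd

# Invented audit data: one categorical item and one continuous item
df = pd.DataFrame({
    "delivery": ["caesarean", "vaginal", "vaginal", "caesarean", "vaginal"],
    "maternal_age": [28.0, 31.0, 25.0, 36.0, 30.0],
})

# Yes/no-style responses coded numerically (0 for no, 1 for yes)
df["caesarean_flag"] = (df["delivery"] == "caesarean").astype(int)

# Categorical item expressed as percentages
percentages = df["delivery"].value_counts(normalize=True) * 100

# Continuous item summarised by mean and standard deviation
mean_age = df["maternal_age"].mean()
sd_age = df["maternal_age"].std()
```

The same summaries could equally be produced in a spreadsheet; the point is that numeric coding makes percentages and means straightforward to compute.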
Comparisons of percentages between different groups can be made using a chi-square test; t
tests can be used to compare means between two groups, assuming that these are normally
distributed. Nonparametric statistical methods can be used for data that are not normally
distributed. These comparisons are useful in order to determine whether there are any real
differences in the observed findings; for example, when comparing audit results obtained at
different time points or in different settings. In some situations a sample-size calculation may
be necessary to ensure that the audit is large enough to detect a clinically significant difference
between groups, if one exists. In this situation, it is important to consult a statistician during
the planning stages of the audit project.
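The sample-size point can be illustrated with the standard normal-approximation formula for comparing two means (a sketch assuming Python with scipy, which this document does not mention; sigma and delta are invented numbers, and a statistician should still be consulted in practice):

```python
import math
from scipy.stats import norm

alpha, power = 0.05, 0.80  # two-sided significance level and desired power
sigma = 10.0               # assumed standard deviation of the outcome
delta = 5.0                # clinically significant difference to detect

z_alpha = norm.ppf(1 - alpha / 2)  # critical value for alpha (about 1.96)
z_beta = norm.ppf(power)           # critical value for power (about 0.84)

# Required number of subjects per group, rounded up
n_per_group = math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)
print(n_per_group)  # 63
```

Halving the detectable difference quadruples the required sample size, which is why the clinically significant difference must be agreed before data collection begins.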
These simple statistics can be easily done using Microsoft Excel spreadsheets and Microsoft
Access databases. Other useful statistical software packages include Epi Info, SAS, SPSS,
STATA and Minitab.
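The comparisons described above can be sketched in Python with scipy (not among the packages listed here; all counts and measurements are invented for illustration):

```python
from scipy.stats import chi2_contingency, ttest_ind

# Chi-square test: compliance counts in two audit periods
# rows = periods, columns = [compliant, non-compliant]
table = [[80, 20],
         [60, 40]]
chi2, p_chi, dof, expected = chi2_contingency(table)

# t test: compare means of a continuous measure between two groups,
# assuming approximately normal distributions
group_a = [28, 31, 25, 36, 30, 29, 33]
group_b = [24, 27, 26, 23, 28, 25, 27]
t_stat, p_t = ttest_ind(group_a, group_b)
```

A small p value suggests a real difference between the periods or groups, subject to the usual caveats about sample size and data quality.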
The significance of teamwork, culture and resistance to change has led several authors to
propose frameworks for planning implementation. These usually include analysis of the
barriers to change and use of theories of individual, team or organisational behaviour to
select strategies to address the barriers. For some topics, such as adverse incidents, systems
for continuous data collection may be justified.
6. Organisation of audit
The NICE review6 found that some methods of organising audit programmes were better than
others. The following features are associated with successful audit:
● structured programmes with realistic aims and objectives
● leadership and attitude of senior management
● nondirective, hands-on approach
● support of staff, strategy groups and regular discussions
● emphasis on teamworking and support
● environment conducive to conducting audit.
Developing questionnaires
There is a large amount of literature on how to develop questionnaires.15,16 Some of the general
principles involved are presented here.
Questionnaires are often used as a tool for data collection. Questions may be open or closed.
Generally, designing a questionnaire with open questions, e.g. “What was the indication for
caesarean section?” followed by space for a free-text response, is easier. However, analysis of
these data is difficult, as there will be a range of responses and interpretation can be
problematic. Open questions may also be more difficult and time consuming to answer, which can
lead to non-response and hence loss of data.
Questionnaires can be composed entirely of closed questions (i.e. with all possible answers
predetermined). More time is needed to develop this type of questionnaire but the analysis is
generally easier. An example of this type of questionnaire is:
Which of the following statements most accurately describes the urgency of this caesarean section?
A. Immediate threat to the life of the fetus and the mother.
B. Maternal or fetal compromise that is not immediately life threatening.
C. No maternal or fetal compromise but needs early delivery.
D. Delivery timed to suit the woman and staff.
Closed questions assume that all possible answers to the question are known, though not the
distribution of responses. Time and consideration need to be given to the response options
available: if a desired response is not among them, the question may simply be missed out, and
this may put people off completing the rest of the questionnaire. For some questions, the ‘other’ category
can be used with the option ‘please specify’, which gives an opportunity for the respondent to
write in a response. However, if this is used, thought must be given a priori as to how these free-
text responses will be coded and analysed. In some situations, not having a category of ‘other’ may
lead to the question not being answered at all, which means that data will be lost.
If questionnaires are developed for a specific project, they need to be piloted and refined to ensure
their validity and reliability before use as a tool for data collection. While those who developed
the questionnaire understand the questions being asked, the aim of piloting is to check that those
who have to fill in the questionnaire are able to understand and respond with ease. Questionnaires
that are not user friendly are associated with lower response rates and poorer-quality data, and
hence results of little value.
References
1. Smith R. Audit and research. BMJ 1992;305:905–6.
2. Department of Health. A First Class Service; Quality in the New NHS. London: Department
of Health; 1998.
3. Department of Health. Learning From Bristol: The Report of the Public Inquiry into
Children’s Heart Surgery at the Bristol Royal Infirmary 1984–95. London: Department of
Health; 2001.
4. Commission for Health Improvement. Commission for Health Improvement response to
Alan Milburn statement today. Media statement 18 April 2002. [www.chi.nhs.uk/eng/news/
2002/apr/07.shtml] Accessed 11 September 2003.
5. General Medical Council. Good Medical Practice. London: General Medical Council; 2001.
This advice was produced on behalf of the Royal College of Obstetricians and Gynaecologists by:
Dr S Paranjothy, Research Fellow, National Collaborating Centre for Women and Children’s Health, London;
Miss J M Thomas MRCOG, Director, National Collaborating Centre for Women and Children’s Health London.
Final version is the responsibility of the Guidelines and Audit Committee of the RCOG