
CLINICAL GOVERNANCE

A STUDY OF IMPLEMENTATION; A STUDY OF CHANGE

by

LINDA ANN LATHAM

A thesis submitted to
The Faculty of Commerce and Social Science
The University of Birmingham
for the degree of
DOCTOR OF PHILOSOPHY

Health Services Management Centre


University of Birmingham
Birmingham
B15 2TT

February 2003
University of Birmingham Research Archive
e-theses repository

This unpublished thesis/dissertation is copyright of the author and/or third


parties. The intellectual property rights of the author or third parties in respect
of this work are as defined by The Copyright Designs and Patents Act 1988 or
as modified by any successor legislation.

Any use made of information contained in this thesis/dissertation must be in


accordance with that legislation and must be properly acknowledged. Further
distribution or reproduction in any format is prohibited without the permission
of the copyright holder.
ABSTRACT

The concept of clinical governance was first introduced to the National Health
Service in the White Paper published in 1997 (Department of Health); it has been
described as the 'linchpin' of the quality reforms and, as of April 1999, is one of the
statutory duties placed on NHS Trust Boards. Clinical governance is defined as:

'A framework through which NHS organisations are accountable for


continuously improving the quality of their services and safeguarding high
standards of care by creating an environment in which excellence in clinical
care will flourish.' (Department of Health, 1998; p33).

The research project upon which this thesis is based took place over an 18 month
period and has followed one NHS Trust as it implemented this new policy.
Implementation may be conceptualised as both a change process and an end state; to
capture this duality, two broad research questions are posed, namely: what
constitutes the local clinical governance agenda (content) and how has clinical
governance been implemented (process). Given that the main purpose of these
research questions is to explore and describe, an overarching qualitative framework
has been adopted and, within this, an action research approach utilised.
To Dilys Davies......

my grandmother and a fellow traveller


ACKNOWLEDGEMENTS

I would like to thank all of the friends and colleagues who have
provided support and encouragement throughout the life-time of this
research project. I would also like to express my thanks and
appreciation to the following:

To all at the NHS Trust who took part in the research; in particular the
Clinical Governance Lead whose support of this work made the project
feasible.

To Professor Peter Spurgeon for his experienced supervision, support


and expert advice.

To my husband Tim Cairns who now knows far more about clinical
governance than he ever wanted or, as a non-clinician, will ever need -
thank you for everything.
CONTENTS

LIST OF TABLES

LIST OF APPENDICES

CHAPTER 1 1
INTRODUCTION AND OVERVIEW OF THESIS

1.1 The emergence of clinical governance 1


1.2 Case study site profile 8
1.3 Thesis overview 9

CHAPTER 2 11
LITERATURE REVIEW - CLINICAL GOVERNANCE

2.1 Introduction 11
2.2 Clinical governance - an emerging concept 11
2.3 Clinical governance and related concepts 13
2.3.1 Total quality management and continuous quality
improvement 13
2.3.2 Corporate governance 15
2.3.3 Hospital governance 15

2.4 Making sense of clinical governance 17


2.5 Clinical governance - early implementation 22
2.6 Implementation insights from the policy literature 26
2.7 Chapter summary 30

CHAPTER 3 32
LITERATURE REVIEW - TOTAL QUALITY MANAGEMENT
AND CONTINUOUS QUALITY IMPROVEMENT

3.1 Introduction 32
3.2 Quality in health care 33
3.2.1 Quality in health care - a mixed picture 33
3.2.2 Quality - a complex concept 34
3.2.3 Quality and the challenge of CQI 37

3.3 Total Quality Management - the concept 38


3.3.1 TQM - a hazy and ambiguous concept 38
3.3.2 The search for core principles 40
3.3.3 In search of theoretical underpinnings 45

3.4 Implementing Total Quality Management 50


3.4.1 TQM implementation frameworks 50
3.4.2 TQM implementation - critical success factors 53
3.4.3 Barriers, pitfalls and obstacles to the implementation of
TQM 57
3.4.4 Implementation case studies 62

3.5 TQM and CQI in health care - a general overview 66


3.5.1 TQM - an ambiguous and hazy concept within health care 66
3.5.2 The challenge of TQM implementation in health care 68

3.6 Experimenting with TQM in the UK National Health Service


and the Norwegian Health Service 72

3.7 Chapter summary 74

CHAPTER 4 76
LITERATURE REVIEW - CHANGE AND CHANGE MANAGEMENT

4.1 Introduction 76
4.2 Theories of change 77
4.3 Change conceptualised 79
4.3.1 Incremental and discontinuous change 79
4.3.2 Planned and emergent change 83
4.3.3 Ideal types and composites of change 83

4.4 Change management 85


4.4.1 Models and frameworks for change management 87
4.4.2 Change in eight steps 89
4.4.3 Ten keys to effective change management 89
4.4.4 A framework for transformational change 90
4.4.5 Common themes 92
4.4.6 Culture change 97

4.5 Chapter summary 98

CHAPTER 5 100
RESEARCH METHODOLOGY

5.1 Introduction 100


5.2 Research design 101
5.2.1 A qualitative framework to provide a flexible approach 101
5.2.2 Qualitative designs 102
5.2.3 A conceptual framework 104

5.3 Research strategy 106


5.3.1 Case studies, surveys and experiments 106
5.3.2 The single site case study 107
5.3.3 Generalising from case studies 109

5.4 Action research 109


5.4.1 Origins and applications 109
5.4.2 Definitions and principles 111
5.4.3 The researcher role in action research 112
5.4.4 A model for action research 113
5.4.5 Collaboration and participation as key components 114

5.5 Data collection and data management 115


5.5.1 Data collection methods 115
5.5.2 Data management 118

5.6 Research in action 118


5.6.1 Phase one - gaining entry 118
5.6.2 Phase two - rapid appraisal 120
5.6.3 Feeding back to the Trust - Report One 121
5.6.4 Phase three - widening the corporate-level interview set 122
5.6.5 Phase four - Primary Care Division 123
5.6.6 Phase five - The Final Report 126
5.6.7 The action research cycle 126

5.7 Research quality 127


5.7.1 Quality criteria for qualitative inquiry 127
5.7.2 Research strategy: design and operationalisation 128
5.7.3 Sampling strategy 129
5.7.4 Generalising from case studies 129
5.7.5 Rigour in the field 130
5.7.6 Analysis and reporting 130
5.7.7 Participant feedback 131

5.8 Chapter summary 131

CHAPTER 6 132
RESULTS: CLINICAL GOVERNANCE IMPLEMENTATION
CORPORATE ACTIVITY: CONTENT

6.1 Introduction 132

6.2 A vision and a strategy for clinical governance 132


6.2.1 The Clinical Governance Report 132
6.2.2 The Clinical Governance Development Plan 134

6.3 Structures to support a developing agenda 136


6.3.1 Clinical Governance Lead 136
6.3.2 Clinical Governance Sub-committee 138
6.3.3 Divisional structures 143
6.3.4 Clinical Governance Development Team 144
6.3.5 Risk Management Team 147
6.3.6 Training and Development Group 150
6.3.7 Libraries 152
6.3.8 Related structures 153

6.4 Systems and Processes 154


6.4.1 Dissemination and Implementation of Good Practice Guidelines
6.4.2 Clinical audit 156
6.4.3 Raising issues of concern 158
6.4.4 Incident reporting, trigger events, significant case reviews 159
6.4.5 Significant clinical incident review 160
6.4.6 User involvement 163
6.4.7 Appraisal and professional development 165
6.4.8 Communicating the clinical governance agenda 166
6.4.9 Monitoring and reporting progress 171

6.5 People 173


6.5.1 The human resource 173
6.5.2 Linking HR and clinical governance 174

6.6 Organisational culture 176


6.6.1 The need for culture change 176
6.6.2 Culture conceptualised 177
6.6.3 A culture of trust 178

6.7 Chapter summary 180

CHAPTER 7 182
RESULTS: CLINICAL GOVERNANCE IMPLEMENTATION -
CORPORATE ACTIVITY: PROCESS

7.1 Introduction 182


7.2 Leadership and management 182
7.3 Confronting reality 184
7.4 Creating a vision of clinical governance 184
7.5 Planning for implementation 185
7.6 Creating and reallocating resources 186
7.7 From vision to operations 187
7.8 Energy for change 189
7.8.1 Education and involvement 190
7.8.2 Co-ordination 197
7.8.3 Feedback 199
7.8.4 Communication 203
7.8.5 Support 205

7.9 Chapter summary 209

CHAPTER 8 210
RESULTS: CLINICAL GOVERNANCE - A DIVISIONAL VIEW

8.1 Introduction 210


8.2 Primary Care Division - an overview 210
8.3 Clinical governance implementation in the Division: content 212
8.4 Clinical governance implementation in the Locality: content 215
8.4.1 Structures for quality improvement 216
8.4.2 Clinical supervision 217
8.4.3 Appraisal and personal development 218
8.4.4 Clinical audit 218
8.4.5 User involvement 219
8.4.6 Clinical governance - knowledge and skills 219

8.5 Clinical governance: the implementation process in the


Division and Locality 222

8.5.1 Clinical governance implementation - a late start 222


8.5.2 An action plan for the Division 222
8.5.3 Organisation development - a missing component 224

8.6 Chapter summary 227

CHAPTER 9
DISCUSSION OF RESULTS 228

9.1 Introduction 228


9.2 Factors which predict significant movement towards total quality 230
9.2.1 Demonstrated senior management commitment and understanding 231
9.2.2 A well-developed and well-documented implementation strategy 238
9.2.3 Comprehensive baseline assessment of service quality 246
9.2.4 A structure to oversee implementation 250
9.2.5 Strong/persevering co-ordinator - board-level appointment 256
9.2.6 Early involvement of clinicians 261
9.2.7 Comprehensive training 263
9.2.8 Explicit strategy/resources for recognising and rewarding progress 268
9.2.9 Organisational changes after evaluation 269

9.3 Key messages from the experience of the Emerald NHS Trust 271
9.3.1 Learning from the past 272
9.3.2 A study of implementation-a study of change 274
9.3.3 Clarifying the 'what' of implementation 275
9.3.4 Implementation frameworks to deliver the 'what' and the 'how' of the change process 276
9.3.5 Culture matters 280

9.4 Limitations 281

9.5 Further research 284

9.6 Chapter summary 285

CHAPTER 10 CONCLUSION 287

APPENDICES 291

REFERENCES 314
LIST OF TABLES

CHAPTER 1 INTRODUCTION AND OVERVIEW OF THESIS


Table 1.1 Mechanisms for delivering the national quality agenda 2
Table 1.2 A framework for quality 3
Table 1.3 Key components of clinical governance 4
Table 1.4 Four key steps in clinical governance implementation 6
Table 1.5 Clinical governance conceptualised 7

CHAPTER 2 LITERATURE REVIEW - CLINICAL GOVERNANCE


Table 2.1 Five critical areas for effective governance 17
Table 2.2 Clinical governance - emerging themes 21
Table 2.3 Perceived barriers to clinical governance implementation 24
Table 2.4 Factors influencing the success of policy implementation 28
Table 2.5 Perfect implementation 29

CHAPTER 3 LITERATURE REVIEW - TOTAL QUALITY


MANAGEMENT AND CONTINUOUS QUALITY
IMPROVEMENT
Table 3.1 Dimensions of quality 35
Table 3.2 TQM conceptualised 41
Table 3.3 The TQM 'Gurus' 42
Table 3.4 TQM and core principles 43
Table 3.5 Degrees of TQM conceptualisation 49
Table 3.6 Characteristics of a TQM implementation framework 52
Table 3.7 TQM implementation - process CSFs 55
Table 3.8 TQM implementation - end state CSFs 55
Table 3.9 Organisational factors in TQM implementation 56
Table 3.10 Barriers to TQM implementation 59
Table 3.11 A whole system view of the barriers to TQM 61
Table 3.12 TQM design requirements 63
Table 3.13 Common themes from TQM case studies 65
Table 3.14 The professional paradigm and the TQM paradigm 71
Table 3.15 Factors predictive of TQM movement 73

CHAPTER 4 LITERATURE REVIEW - CHANGE AND CHANGE


MANAGEMENT
Table 4.1 A matrix of continuity and time 82

CHAPTER 6 RESULTS: CLINICAL GOVERNANCE IMPLEMENTATION


- CORPORATE ACTIVITY; CONTENT
Table 6.1 The main components of clinical governance - Trust model 133

CHAPTER 7 RESULTS: CLINICAL GOVERNANCE IMPLEMENTATION


- CORPORATE ACTIVITY; PROCESS
Table 7.1 Process architecture 190

CHAPTER 9 DISCUSSION OF RESULTS


Table 9.1 Predicting movement towards total quality 231
LIST OF APPENDICES

Appendix 1 Typologies of TQM implementation 291

Appendix 2 Framework for transformational change 294

Appendix 3 Eight stage process for change 295

Appendix 4 Ten keys to effective change management 296

Appendix 5 Action research - a model for practice 297

Appendix 6 Recommendations - action set 1 298

Appendix 7 Recommendations - action set 2 299

Appendix 8 A selection of interview schedules/guides 301


CHAPTER 1

INTRODUCTION AND OVERVIEW OF THESIS

'The new NHS will have quality at its heart'


(Department of Health, 1997; p17)

1.1 THE EMERGENCE OF CLINICAL GOVERNANCE

Historically, the quality of care provided by the National Health Service (NHS) had

been regarded as an inherent component of the system and, as such, thought to be

assured by the 'ethos and skills' of the health professionals who worked within it

(Donaldson and Muir Gray, 1998). However, there has been growing disquiet over the

medical profession's claim to self-regulation and concern that this mechanism is no

longer sufficient to assure the quality of health care (Sutherland and Dawson, 1998).

This has been fuelled by changes in health policy (0vretveit, 1998) and by a steady

increase in public access to the 'coded knowledge base' of professionals (Sutherland

and Dawson, 1998). There have also been a number of high profile failures in health

service quality which have received a considerable amount of publicity. Amongst those

which have achieved a particular notoriety are paediatric cardiac surgery at Bristol Royal

Infirmary and also the cervical screening service at the Kent and Canterbury NHS

Trust. More recently, the disturbing activities of Harold Shipman, a GP convicted of

murdering hundreds of patients over years spent in general practice, have come to light.

The negative impact on public confidence in the NHS of cases such as those described

has not gone unnoticed by policy makers (Donaldson, 1998). Such incidents also

highlight another cause for concern - the variations in health service quality that appear

to exist not only between health care organisations but also within the same health care
provider. Shortly after gaining office, the UK Labour Government identified quality

assurance, quality improvement and unexplained variations in health services as areas

which must be addressed (Department of Health, 1997; 1998). Thus, when the

ambitious programme of reform of the NHS was launched through the White Paper

'The new NHS: Modern, Dependable' (Department of Health, 1997), the Government

signalled that quality of health care would be high on the policy agenda. This was

backed by the pledge that 'the new NHS will have quality at its heart'; quality 'in its

broadest sense: doing the right things, at the right time, for the right people, and doing

them right - first time' (Department of Health, 1997; p17).

The White Paper (ibid; p 17, 18) announced that 'new and systematic action is needed

to raise standards and ensure consistency' and that the aim of this action would be to

'drive quality into all parts of the NHS'. This document briefly introduced the

mechanisms that would deliver the quality agenda (Table 1.1).

Table 1.1: Mechanisms for delivering the national quality agenda

• National Institute of Clinical Excellence (NICE);
• The Commission for Health Improvement;
• Clinical Governance;
• National Patient and User Survey;
• National Service Frameworks (NSF);
• National Performance Framework.

Department of Health (1997)

The quality framework was outlined in greater detail the following year in the

consultation document 'A First Class Service' (Department of Health, 1998) (Table

1.2). At the time, several commentators on health policy pointed to a promising degree

of consistency and coherence in approach across the key components of the quality

framework (Walshe, 1997; Thomson, 1998); in particular the introduction of


mechanisms to set standards nationally, deliver locally and monitor centrally.

Table 1.2: A framework for quality

Clear standards of service: National Institute for Clinical Excellence; National Service Frameworks

Dependable local delivery: Professional self-regulation; Clinical governance; Lifelong learning

Monitored standards: Commission for Health Improvement; National Performance Framework; National Patient and User Survey

(Patient and public involvement runs across all elements of the framework.)

Department of Health (1998)

Whilst strengthening the existing notions of patient/public involvement, professional

self-regulation and lifelong learning, the quality framework also introduces a mix of

new structures and initiatives. The centrepiece or rather the 'linchpin' of the quality

strategy (Department of Health, 1999) is the new system of clinical governance;

defined below as:

'A framework through which NHS organisations are accountable for


continuously improving the quality of their services and safeguarding
high standards of care by creating an environment in which excellence
in clinical care will flourish'. (Department of Health, 1998; p 33)

Clinical governance applies to all areas of the NHS and, as part of an overall

governance framework, it is explicitly linked to the concept of corporate governance

(Department of Health, 1998). Whilst the latter approach reaffirms the importance of

maintaining financial probity within the Health Service, the former recognises the need

to increase the status of quality at the corporate level. Consequently, as of 1 April

1999, health organisations now have a statutory duty to maintain and improve standards

of health care (Donaldson, 1999); a development that the public might have assumed
was already present in the current system even pre-reform.

Although quality is to become 'everybody's business' in the 'new' NHS (Department of

Health, 1997; 1998), clinical governance brings a greater emphasis on the corporate

responsibility for quality. Within the clinical governance framework (ibid), the chief

executive is explicitly identified as the officer accountable for quality on behalf of the

board of all NHS Trusts. In future, quality, which includes clinical quality, should be

awarded a status on the corporate agenda that is equal to finance. In the past, it has not

always been clear exactly who was responsible for this aspect of the service (Walshe,

1998a) and problems with clinical quality issues have often been regarded as the

province of the individual clinician (Walshe, 1998b).

Whilst the clinical governance concept is closely linked to that of corporate

governance, it also incorporates a number of quality-specific components (Department

of Health, 1998) (Table 1.3).

Table 1.3: Key components of clinical governance

• Clear lines of responsibility and accountability for the overall quality of clinical care;
• A comprehensive programme of quality improvement activity;
• Clear policies aimed at managing risks;
• Procedures for all professional groups to identify and remedy poor performance.

Department of Health (1998; p36)

Although the White Paper (Department of Health, 1997) provided a taster of what was

to come, it was the later consultation document (Department of Health, 1998) which

provided further detail on the nature of clinical governance; a concept which had not

previously been presented in this form as a search of Medline pre-1997 demonstrated.


NHS Trusts were now provided with a clearer idea of the sort of systems needed to

support the quality improvement activities they would be required to undertake. Many

of these activities such as risk management and clinical audit are not new to the NHS

although their effectiveness, and in particular that of clinical audit, in delivering

improvement appears in some doubt (Walshe, 1999). However, what is new is the

notion of deliberate action to integrate what are often completely unconnected vehicles

for improvement into a unified, whole-organisation approach to quality. Whilst 'A First

Class Service' (Department of Health, 1998) gave an insight into the key components of

clinical governance, there was little in this document to guide Trusts in terms of

implementation; this guidance did not appear until the following year. This gradual

unveiling suggests that policy makers were developing the detail over time - thus

clinical governance was not a fully formed concept when introduced to the NHS back

in 1997.

'Clinical governance: Quality in the new NHS' (Department of Health, 1999) provided

the first detailed guidance on the implementation of this policy. According to the

document, its intention was to be developmental; in practice, this seemed to mean that,

although a 'clear framework for action' would be outlined, there would be no

prescription as to the methods to be used by the organisations. In reality, the document

does present a clear action set which, in some areas, is distinctly prescriptive with

objectives explicitly stated. In other areas there is more scope for local interpretation

and objectives are rather more abstract - however, the guidance does make it clear that

all Trusts must show progress against these objectives. The actions outlined in Table

1.4 below are an attempt by the Department of Health to ensure that a number of key
elements of the clinical governance agenda (leadership, strategy, structures,

infrastructure) are addressed at an early stage in the implementation process. However,

the guidance essentially stops short of providing a blueprint that would address all of

the design elements required for effective implementation.

Table 1.4: Four key steps in clinical governance implementation

• Establish leadership, accountability and working arrangements;


• Carry out a baseline assessment of capacity and capability;
• Formulate and agree a development plan in the light of this assessment;
• Clarify reporting arrangements for clinical governance within Board and
Annual reports.

Department of Health (1999)

Taken together, the aforementioned White Paper, Consultation Document and guidance

constitute the key, centrally-generated documents that refer to clinical governance in

terms of policy/implementation in any detail. Despite the fact that 'The NHS Plan'

(Department of Health, 2000) outlines a 10 year strategy for investment and reform of

the NHS, constitutes a major improvement programme to address a multitude of quality

issues and outlines specific improvement methodologies such as the collaboratives, it is

rather curious that the most substantial reference to clinical governance seems to

consist of a single paragraph and this is in connection with the regulation and

development of medics. This does little to reinforce the connection between clinical

governance as a concept and the practice of Continuous Quality Improvement (CQI).

Publication of the NHS Plan (ibid) has been followed up with clear statements of

detailed targets and milestones for the achievement of specific elements, an

implementation plan, and the year 2002 saw the publication of a progress report -

essentially a centralised approach to implementation which is in sharp contrast to the

one adopted in relation to clinical governance.

The National Clinical Governance Support Unit was created to provide support to NHS

Trusts around the clinical governance agenda. The Support Unit has developed a

model of clinical governance consisting of 13 key components (Nicholls, Cullen,

O'Neill et al, 2000) (Table 1.5) and this has largely been disseminated through

presentations, training courses and journal publications. The Commission for Health

Improvement (CHI) which undertakes clinical governance reviews of all NHS Trusts,

publishes, on its web site, details of the pre-information currently sought from Trusts

which should give organisations an idea of how this national body conceptualises

clinical governance. In addition, CHI is also developing a model for clinical

governance; however, from the details on the web site (CHI: www.chi.nhs.uk), it seems

that this is still being refined (Table 1.5).

Table 1.5: Clinical governance conceptualised

Clinical Governance Support Unit (Nicholls, Cullen, O'Neill et al, 2000):
1. Patient-professional partnership
2. Clinical effectiveness
3. Risk management effectiveness
4. Patient experience
5. Communication effectiveness
6. Resource effectiveness
7. Strategic effectiveness
8. Learning effectiveness
9. Systems awareness
10. Teamwork
11. Communication
12. Ownership
13. Leadership

Commission for Health Improvement (CHI web site, www.chi.nhs.uk, accessed 30 Oct 02):
1. Strategic capacity
   • Patient focus
   • Leadership
   • Direction and planning
2. Resources and processes
   • Processes for quality improvement
   • Staff focus
3. Results
   • Patient experience
   • Outcomes
4. Use of information

Whilst there is clearly some overlap between the two models from the information

presented here (Table 1.5), there is insufficient detail on the CHI web site of the sub-

components of the four main areas to make any meaningful comparisons between them.

In reality, there is no single, agreed model of clinical governance to guide Trusts in


their efforts to implement this complex, far-reaching policy. Commenting on this two

years after the concept of clinical governance was introduced to the NHS, Lugon and

Secker-Walker (1999, p16) note that:

'Clinical governance is such a new concept that there is no 'right' way to


manage it and each Trust will adapt to fit its own circumstances and a
national consensus will be arrived at by trial and error over a period of
time'.

The observation above might well turn out to be an accurate description of the

development of clinical governance over time; however, those Trusts preparing for a

visit from CHI might appreciate more explicit guidance from the centre given that the

extent of each Trust's 'trial and error' will be published on the Department of Health

web site for all with an interest in these matters to see. It seems that, in the early days,

there was a lack of clarity in the field about just what clinical governance meant as a

concept never mind what it would look like in practice (Walshe, 1998b; Grainger,

Hopkinson, Barrett et al, 2002). This obviously adds to the challenge faced by NHS

Trusts; that of turning policy into practice.

1.2 CASE STUDY SITE PROFILE

This thesis presents a detailed description of one NHS Trust as it implements clinical

governance. The Trust will be referred to throughout as the Emerald NHS Trust or the

Emerald Trust although this is not its real name. Since achieving Trust status in the

early 1990s, the Emerald NHS Trust has grown and expanded due to a combination of

service reconfigurations and mergers. As a result, the Trust provides a complex array

of community, mental health and learning disability services from a large number of

dispersed sites over a wide geographical area. The Trust employs around 3,000 staff

and the turnover for year 2000/01 was £64 million.

The Chief Executive, the Trust Chair, the Executive Team and two of the Non-

executive Directors have been in post since the Trust was formed; the membership of

the other Non-executives has changed intermittently. The forum for senior

management is the Management Team (MT); membership of which includes the Chief

Executive, the Clinical Governance Lead, the Finance Director, six Divisional

Managers (two of whom are also Executive Directors), the Head of Human Resources,

the Information and Technology Manager, and the Estates Manager. A number of

Management Team members have been in post since the Trust formed originally and

although others have joined more recently, tenure has, for the most part, been stable and

consistent for several years; thus, senior people at the corporate level are well used to

working with each other.

During the research period, consultation took place on the development of a county-

wide Primary Care Trust (PCT). The outcome of this consultation was that the Emerald

Trust was later dissolved and its services transferred to the PCT as of April 2002. Thus

it is against a backdrop of significant structural change that the Trust was and is taking

forward the clinical governance agenda albeit that the organisation, as described in this

case study, no longer exists.

1.3 THESIS OVERVIEW

This research represents perhaps the most in-depth, action research study of the clinical

governance implementation process to have taken place over an extended period of


time (18 months) and, as such, it is hoped that it will provide interesting insights for the

reader. Implementation may be conceptualised as both a change process and an end

state; to capture this duality, two broad research questions have been posed, namely:

what constitutes the local clinical governance agenda (content) and how has clinical

governance been implemented (process). However, before this account of the Trust's

journey proceeds any further, an overview of the structure of the thesis will now be

presented.

The purpose of this introductory chapter has been to provide an overview of the

emergence of clinical governance as a national policy and this will now be followed by

a review of the relevant literature. The literature on clinical governance will be

presented, and, as this body of literature is still emerging, it will be complemented by

two further chapters which consider the related literatures of Total Quality Management

(TQM)/Continuous Quality Improvement (CQI) and change/change management. A

comprehensive description and discussion of the research methodology will then be

presented. Three chapters will be devoted to presentation of the results; two dealing

with implementation at the corporate level and a third with divisional findings. The

following chapter will focus on a discussion of the results and the final chapter will

present the concluding comments and draw the thesis to a close.

CHAPTER 2

LITERATURE REVIEW - CLINICAL GOVERNANCE

'A framework through which NHS organisations are accountable


for continuously improving the quality of their services and
safeguarding high standards of care by creating an environment in
which excellence in clinical care will flourish'
(Department of Health, 1998; p33)

2.1 INTRODUCTION

The previous chapter has outlined the emergence of clinical governance as national

policy (Department of Health, 1997; 1998; 1999). The purpose of this chapter is to

give an overview of the body of literature which has been steadily growing since the

publication of 'The new NHS: Modern, Dependable' (Department of Health, 1997) in

which the term clinical governance appears to have been first utilised in relation to UK

health care. As the main focus of this study is on the implementation of a national

policy - clinical governance, a number of important insights on aspects of

implementation from the wider policy literature will also be presented.

2.2 CLINICAL GOVERNANCE - AN EMERGING CONCEPT

As a further testimony to the newness of the clinical governance concept and its

literature-base, it is worth noting that a search for 'clinical governance' using the

Medline database produces no hits pre-1997 but yields 357 between 1997 and

2002. Early contributors to the emerging literature include academics, clinicians,

clinician/managers and managers. With little to draw on in terms of the specifics of

clinical governance other than the policy documents themselves, they sought to put

some 'flesh on the bones' of this new concept. The first papers were often dedicated to

the clarification of what clinical governance might actually mean. Most authors seem

to start with and indeed stay with the Department of Health definition cited at the start

of this chapter. This definition highlights the need for an integrated approach, stresses

corporate accountability, and implies not only quality assurance but also a more dynamic

aspect of quality - continuous improvement. Although the goal is the achievement of

high standards of care, the definition implies that a whole organisation approach will be

needed to create the alignment necessary to deliver this objective.

Some writers have sought to offer an alternative definition to the one cited above:

'A proper level of clinical governance in an organisation requires that


substantially the whole of clinical activity meets commonly accepted
standards, where these exist, and can be shown as meeting them'
(Scotland, 1998).

'It (clinical governance) means corporate accountability for clinical


performance' (Walshe, 1998b).

'It (clinical governance) can be defined as the action, the system or the
manner of governing clinical affairs. This requires two components; an
explicit means of setting policy and an equally explicit means of
monitoring compliance with such policy' (Lugon and Secker-Walker,
1999, p1).

'A systematic approach to assure the delivery of high quality health


services with the active participation of clinicians and patients
supported by managers' (Winter, 1999).

Unsurprisingly, given the concept is clinical governance, there is a distinctly 'clinical'

flavour to the above. There is a risk that the focus is explicitly on clinical activity

which could ignore the fact that health care provision does not rely on clinical input

alone. As an example, the neurosurgeon waiting in theatre relies on the porter to

deliver the right patient to the right place at the right time; the right investment in the

service to provide appropriate staffing levels, skill mix and so on all contribute to the

surgeon's ability to undertake almost any form of clinical intervention. Also, the

definitions cited above appear to offer more of an assurance flavour as opposed to that

of continuous quality improvement.

Walshe's definition (1998b) emphasises the corporate accountability for clinical

performance which seems to assume an active role for managers as well as clinicians in

the delivery of the clinical governance agenda. In contrast, Winter (1999) seems to

relegate managers merely to a supporting role which, perhaps, gives added weight to

the concern expressed by Bloor and Maynard (1998) that managers may be held legally

accountable for the standard of clinical care and yet have little influence over practice

due to professional self-regulation. Nevertheless, one chief executive clearly regards

clinical governance as a key element of the business planning process and thus an

integral part of the management function (Lloyd, 2001; p47):

'If the outcome of clinical governance processes - which include service


development based on evidence and examples of good practice - is
placed at the heart of the business planning process in the NHS, real,
quantifiable improvements in patient care might be made that can be
underpinned by an effective monitoring system.'

2.3 CLINICAL GOVERNANCE AND RELATED CONCEPTS

2.3.1 Total Quality Management and Continuous Quality Improvement

Walshe (1998b) commented that 'no-one seems entirely sure what it (clinical

governance) means'. Perhaps the secret to understanding this concept lies in the

original definition, and perhaps clinical governance is intended as Total Quality

Management/Continuous Quality Improvement (TQM/CQI) by another name - in

effect, a new prescription for an old remedy. Whilst some authors draw attention to

past experiments with whole system approaches to quality in the NHS, namely TQM

and Re-engineering (Walshe, 2000a), there are few who seem to have noticed how

closely the language and philosophy of clinical governance resonates with TQM and

CQI. In contrast, Huws (2000) sees a clear link with la TQM-style frame-work' and

regards this approach as a realistic mechanism for taking clinical governance and the

wider quality agenda forward. In addition, one of the early and perhaps seminal papers

on the emerging concept points clearly in the direction of TQM and CQI. Scally and

Donaldson (1998) advocate quality improvement through:

'......A more widespread adoption of the principles and methods of


continuous quality improvement initially developed in the industrial
sector and then later applied to health care. Generally these involve an
organisation-wide approach to quality improvement...... '

The authors (ibid) also talk of the well managed organisation which integrates

'financial control, service performance, and clinical quality at every level'. This

suggests a holistic conceptualisation of clinical governance as a way of running the

business and not merely a narrow definition of quality in terms of clinical or technical

quality alone. This notion does not appear to be an explicit theme in the early literature

but there is a flavour of it in the Commission for Health Improvement (CHI) Clinical

Governance Review Process (CHI, 2002) and the approach of the Clinical Governance

Support Unit (Hallett and Thompson, 2001). The models of both organisations include

a focus on strategic effectiveness which might reasonably be regarded as a precursor to

the creation of the environment of excellence referred to in the original definition of

clinical governance (Department of Health, 1998).

2.3.2 Corporate Governance

The notion of 'governance' is addressed by a number of authors. There are those who

highlight the parallels with corporate governance (Scally and Donaldson, 1998) and the

requirement for openness, probity and accountability in corporate affairs. Others, such

as Bloor and Maynard (1998), also draw parallels with the concept of corporate

governance and emphasise clearly the regulation and accountability aspects of the

clinical governance agenda. Davies and Mannion (1999) consider clinical governance

in terms of principal-agent theory and explore the notions of trust and checking. The

authors (ibid) conclude that there must be a balance between the two elements and that

trust should not be synonymous with the abandonment of management controls.

2.3.3 Hospital Governance

Whilst clinical governance was a new concept to the UK NHS, the notion of 'hospital

governance' has received considerable attention for some time in the US. This notion

of governance seems to have preceded the clinical governance agenda in the UK by at

least a decade and offers some valuable insights for those trying to make sense of the

more recent UK initiative as the following discussion will seek to demonstrate.

Arrington and colleagues (Arrington, Gautum and McCabe, 1995) suggest both a broad

and a narrow definition of governance:

'In the largest sense, governance is the process of leading and directing
the work and effective performance of an organisation, a group of
organisations or of a community that involves shared effort or
partnership among directors, executives and other relevant leaders.
Governance, in its narrowest sense, is commonly considered a synonym
for the work done by boards of directors'.

Thus from the above, it would appear that, in the US context, governance is concerned

with the way the business/organisation is run; so much so that the effectiveness of

governance arrangements is considered to mean the difference between organisational

success and bankruptcy (Alexander, Weiner and Bogue, 2001). Many writers refer to the

strategic management element of the hospital governance agenda and, way before the

emergence of clinical governance in the UK, there were calls to combine the

clinical and administrative aspects of governance in an effort to increase

effectiveness (Kovner, 1990). In addition, although quality improvement had

traditionally been in the domain of the clinicians, the growth of competition in the US

health care industry was causing a shift in the responsibility for the development and

oversight of quality improvement efforts so that this rested 'first and foremost with the

hospital governing board', the body ultimately accountable in law for the quality of

care (Weiner and Alexander, 1993). Thus, it appears the notion of clinical governance,

whilst new to the UK, was already being promulgated in the US.

The established and growing body of literature on US hospital governance is testimony

to the length of time the notion of governance has been around the wider health care

industry. Although there does not appear to be a universally accepted definition of

governance, one can detect a shift in the content of US literature from sense-making to

other more practical issues. Authors highlight problems in board performance such as

lack of vision, reactiveness, passivity and rubber stamping, and inexperience (Carver,

1990). Others suggest approaches to ensure the effectiveness of hospital governance

(Umbdenstock, Hageman and Amundson, 1990) which may serve as useful learning

points for the NHS Trust Boards (Table 2.1).

Table 2.1: Five critical areas for effective governance

• A common working definition of governance;


• A clearly defined mission with specific goals and objectives;
• A well-planned decision-making process;
• A board structure tailored to the priorities at hand;
• An information, reporting and communication system that focuses priorities.

Umbdenstock, Hageman and Amundson (1990)

Whilst it is sensible to try and learn from the experience of others, it is also important to

remember that any comparison between governing boards in the US and Trust Boards

in the UK is not generally on the basis of like for like. Alexander and colleagues,

(Alexander, Weiner, and Bogue, 2001) highlight the fact that governance arrangements

differ within the US depending on the nature of the institution; specifically whether it is

not-for-profit, public, or investor-owned. Vertical integration of health care

organisations in the US creates multiple levels of governance even at the corporate level

and accountability to a higher authority might vary between a combination of state and

local government, religious organisations or universities depending on ownership

arrangements. Unlike the Trust Boards in the UK which are composed of executive

and non-executive directors, the governing boards in the US do not necessarily

incorporate the senior management team. In the case of some US boards, the chief

executive is the only representative of the management function and s/he may not have

full voting rights.

2.4 MAKING SENSE OF CLINICAL GOVERNANCE

Thus, the above discussion provides a flavour of how a number of writers have

attempted to get to the core of clinical governance, either by trying to dissect the term

itself or drawing on other literature; not the easiest task when the policy does not

necessarily arrive fully formed and ready for implementation. This is, apparently, not a

rare occurrence in the policy process and, according to Gunn (1978), complete

understanding of the objectives is more an ideal than the reality. This will no doubt

strike a chord with Scotland (1998) who comments on the lack of consensus around the

concept of clinical governance. Whilst the resemblance to TQM and CQI has been

suggested earlier in this chapter, the lack of an explicit policy statement in this respect

or the apparent absence of any other empirically tested theoretical basis for clinical

governance leaves the policy open to charges such as that of Goodman (1998), who argues

that the Department's definition represents little more than 'empty phrases'.

Whilst there is an obvious need to make sense of clinical governance at a conceptual

level, it is also necessary to try and translate this into tangible objectives for

implementation at both the corporate and operational levels of real organisations.

Although the consultation documentation (Department of Health, 1998) highlighted the

key components of clinical governance, advocated structural arrangements at the

corporate level and spoke in broad terms of a comprehensive framework for quality

improvement, it was the perception of some in the field that there was little else to go

on (Huws, 2000). This absence of a blueprint has not prevented authors bringing a

prescriptive flavour to much of the early literature which has aroused criticism for an

overuse of words such as 'should' and 'need to ensure' (Wall, 1999).

Early papers often revisit the policy documentation and/or reflect the writer's personal

view/interpretation and, in this, there are offerings from managers, academics,

clinicians and clinician/managers. Some writers have addressed a particular aspect of

clinical governance in practice: Walshe (1999) and Garland (1998) highlight the

importance of an initial baseline assessment of existing systems for quality

improvement given that their effectiveness is often highly variable - particularly with

regard to clinical audit. Others address the implications of clinical governance with

respect to clinical competence and clinical behaviour (Scotland, 1998), or issues such

as the information requirements to support the clinical governance agenda (Hopkinson,

1999). The Clinical Governance Support Unit published a series of articles which

appeared monthly in the journal 'Professional Nurse' from July 2000 to June 2001.

Each deals with an aspect of the clinical governance model outlined in the previous

chapter (Table 1.5) and, taken together, they provide a coherent overview of the 'what' of

clinical governance as conceptualised by the Support Unit; however, there is little on

the 'how' of implementation per se.

The need for effective leadership and culture change is often cited in the literature but

generally not dealt with in any depth. Walshe (2000b) includes a very brief comment

on leadership and clinical governance in an early, superficial review of the literature.

The author (ibid) presents the notions of transformational and transactional leadership

and suggests that the former is likely to be more appropriate in order to meet the

demands of clinical governance.

Hackett and Spurgeon (1999) provide a more in-depth discussion on culture change and

clinical governance. They draw attention to the fact that there are multiple cultures

within NHS organisations and argue that culture change is a secondary outcome to the

implementation of clinical governance rather than an end in itself. The authors (ibid)

point to the different levels of organisational culture and suggest that making structural

changes to support clinical governance may address the more visible and perhaps more

easily manipulated manifestation of culture - the artefacts. Other interventions will be

required to change the deeper, less apparent aspects of culture such as values and

beliefs.

Davies and colleagues (Davies, Nutley and Mannion, 2000) warn that culture change

should be approached with caution for a number of reasons; not least because the

'cultural destination' in terms of clinical governance has not been clearly and

unambiguously specified. It is also argued (ibid) that wholesale, simultaneous change

in all aspects of organisational culture is unfeasible and probably undesirable.

Although certain aspects may need changing, there are others which serve as a sound

basis upon which clinical governance may be built.

Other writers have attempted to provide a more holistic sense of clinical governance

and take a wider organisational perspective. In one of the early edited texts on the

subject, Lugon and Secker-Walker (1999) present clinical governance from a variety of

perspectives. Thus an organisational framework for clinical governance is offered

which outlines the structures and systems that should be introduced - not only at the

corporate level but also at the clinical team level where it is envisioned that the

operationalisation of clinical governance would take place through improvement

groups. The roles and responsibilities from the chief executive and the board to clinical

teams and individuals are outlined and chapters deal with some of the building blocks

of clinical governance such as clinical audit, risk management, complaints and so on.

Importantly, the text makes explicit the need for effective change management

processes and also the need for a clear clinical governance implementation plan which

is linked to the organisation development plan for the organisation as a whole.

Still others (BAMM, 1998; Holt, 1999; Wright, Smith and Jackson, 1999) have sought

to offer a broader sense of clinical governance within the organisation. Whilst there is,

in some cases, a sense of internal consistency in what is being proposed, the lack of a

common model for clinical governance means that it is not always possible to discern

why elements have been included for discussion and others omitted. Each of the

writers provides a perspective which offers interesting insights in itself; it is also

possible to identify some early themes amongst the 'should do's' reproduced in Table

2.2. These may be regarded as sensible suggestions but, in the absence of a coherent

whole, are offered for consideration only.

Table 2.2: Clinical governance - emerging themes

• Clinical governance needs to be part of the main business of the organisation - it is not an add-on or
optional extra;
• There needs to be a structure at both corporate and directorate levels to both support clinical governance
and clarify lines of accountability;
• There need to be systems in place to ensure the alignment of corporate and directorate quality goals;
achievement of these goals is through performance management;
• Communication needs to be up, down and across the organisation;
• People need to be trained in quality improvement methods - not only clinical audit etc but also CQI
methods;
• Leadership and management development to ensure people have the skills to take forward the agenda;
change management skills important;
• Clinical governance activity needs to be supported by trained facilitators;
• Clinical governance needs to be appropriately resourced in terms of funding, time, training and
information technology;
• Culture change required but from-to highly variable.

2.5 CLINICAL GOVERNANCE - EARLY IMPLEMENTATION

The early literature outlined in the preceding discussion is extremely valuable in that it

provides a flavour of the emergent thinking both in terms of how clinical governance

has been perceived - as a 'big idea that has shown that it can inspire and enthuse'

(Scally and Donaldson, 1998), inevitable so make the best of it (Wright, Smith and

Jackson, 1999), 'empty phrases' (Goodman, 1998) - and also how the concept is being

interpreted for implementation. Gradually, this literature pool has been augmented by

the emergence of accounts providing details of actual implementation efforts. Holland

and Fennell (2000) describe an approach to the mandatory initial baseline assessment

using the European Foundation for Quality Management (EFQM) Excellence Model. Although this

initiative apparently raised issues around the maturity of some of the clinical teams and

the time needed to undertake the assessment, the reported view was that the use of the

model brought positive outcomes - a common approach to assessment was achieved

and the self-assessment aspect contributed to the ownership of the resulting action

plans. Hewer and Lugon (2001) focus on the development of Clinical Improvement

Groups which serve as the main vehicle for the operationalisation of clinical

governance down the organisational hierarchy and into the directorates.

Greater detail is provided by Hittinger (2001) on the implementation of clinical

governance in a teaching Trust. The framework used incorporates the elements of

strategy, structure, technical support and culture. The more recent paper by Lewis and

colleagues (Lewis, Saunders and Fenton, 2002) demonstrates the importance of

evaluation and a recognition that the initial approach had 'not facilitated effective

progress'. The authors (ibid) subsequently describe the changes made to address the

gaps identified.

These accounts are important as they provide information on actual attempts to

implement clinical governance. Although the level of detail varies, each offers

different insights which range from the benefits of using a uniform approach to

assessment (Holland and Fennell, 2000), the need for a clear, widely communicated

strategy to guide clinical governance (Hittinger, 2001), to the need for structures at both

the corporate and directorate levels to support clinical governance (Hewer and Lugon,

2001; Lewis, Saunders and Fenton, 2002). Although the focus of these papers varies, a

consistent theme is evident - the need to prepare staff for their role in relation to the

clinical governance agenda, whether it is to support the baseline assessment or to

function effectively as a member of a quality improvement group. This finding

inevitably highlights the need for appropriate investment in education and training.

In addition to the within-organisation accounts described above, in a relatively short

space of time, a number of research reports have also been published which focus on a

variety of aspects of implementation, some of which will now be considered.

Latham and colleagues (Latham, Freeman, Walshe et al, 2000) undertook a postal

survey of NHS Trusts located within two English Regions to explore, amongst other

things, the early activity associated with implementation. The majority of the Trusts

reported that they had undertaken the baseline assessment and existing systems for

quality improvement were found to be highly variable in terms of effectiveness and

coverage both within and between these organisations. Most had developed or were in

the process of developing a strategy for clinical governance; all had established a

clinical governance committee although the size and membership varied. All Trusts

had put in place leadership arrangements for clinical governance at the corporate level

although most leads had little or no time formally allocated for this new responsibility.

The study also reported a number of anticipated barriers to implementation; an

overview is presented in Table 2.3 below; some of which, particularly the time element,

were also highlighted by Dewar (1999) during interviews with chief executives.

Table 2.3: Perceived barriers to clinical governance implementation

Lack of time and money;


Lack of access to library facilities;
Lack of IT systems to underpin clinical governance;
Fear of change;
Too much change;
Lack of understanding;
Cultural change needed;
Competing priorities;
Professional boundaries.

Latham, Freeman, Walshe et al (2000)

As part of the same project as that described above, Walshe and colleagues (Walshe,

Freeman, Latham et al, 2000) visited all of the Trusts in one Region and undertook a

series of structured interviews with the senior people charged with taking clinical

governance forward in the organisation. Usually the interview set comprised the chief

executive, the non-executive director lead, and the executive lead(s) - often a joint

appointment of the medical and nurse directors. This confirmed that early attention had

been devoted to putting structures and systems in place. Although the need for

leadership and culture change was often mentioned by interviewees, there was little

evidence to suggest that deliberate interventions were being introduced to address the

second of these two issues.

Conduit and colleagues (Conduit, Morgan and Willetts, 1999) devised a 20-point self-

assessment tool which was sent out to all Trusts in the Trent region. Apparently scores

ranged from 8-63 out of a possible 80 and the 'weakest' category tended to be quality

improvement. Although the paper highlights the contribution this tool made to the

completion of baseline assessments and the development of subsequent action plans,

there is little else in the way of detail to this publication and it would have been

interesting to know if/how the gaps identified were addressed by the Regional Office

driving the assessment.

Firth-Cozens (1999) reports on a study of the development needs of a cross section of

employees in relation to clinical governance. The findings demonstrate that needs

differ depending on the background of the interviewee; the author states:

'There were very large differences between the development of chief


executives and those of clinical staff: the former had had considerable
personal and management development covering areas like team
leadership, change management, decision-making and negotiation; the
latter, including consultants, had had very little development with most
of their education being clinically focused'.

Firth-Cozens (ibid) also highlighted common areas for development; these include: risk

management, change management, team dynamics, basic clinical audit training, IT

training. Interestingly, the research highlighted the fragmented way in which the

elements of clinical governance were being considered in contrast to the integrated

whole aspired to in the policy (Department of Health, 1999). Thus, it is proposed that

development programmes should not address individual elements as such but instead be

problem based so that staff learn to integrate the tools of quality management through

practical application. Barriers to development included: time pressures, a lack of

information technology, inadequate funding, and the lack of a coherent strategy for

post-qualification training in general outwith the demands of clinical governance.

Professional isolation and the geographical demands of life working in the community

were also cited as barriers for those not in a secondary care setting.

In addition to the academic research reports, there has also been a steady stream of

reports from CHI which are published on the internet and are the outcome of the routine

review process. This is an extremely valuable source of information on the progress of

implementation as the reports highlight both the positive and negative aspects of this

process and each is supplemented with an action plan for further work. Searching the

CHI web site, there does not appear to be any summary of key themes from the

reviews; hopefully this is already under consideration by the policy makers.

The literature described above makes a valuable contribution to what appears to be a

rather emergent understanding of clinical governance. This review of the literature

highlights the virtual absence of data from longitudinal, in-depth case studies of clinical

governance implementation - hence the decision to undertake the study which will form

the focus of this thesis. However, before moving from clinical governance to related

literatures, it is worth noting some of the issues surrounding the notion of

'implementation' to be found in the wider policy literature.

2.6 IMPLEMENTATION INSIGHTS FROM THE POLICY LITERATURE

Firstly, although not always acknowledged by writers on the subject, it is important to

make explicit that implementation has a double meaning (Lane, 1987) - 'either the act

of implementing or the state of having been implemented'; thus, from this description,

implementation constitutes a process (how) and an end state (what). Of these two

notions of implementation, it seems that the former had received the least attention

(Elmore, 1978; Hogwood and Gunn, 1984; Parsons, 1995). An area initially considered

as 'a series of mundane decisions and interaction unworthy of the attention of scholars'

(Van Meter and Van Horn, 1975; p450) the implementation process has apparently

attracted greater interest as policy has, in some areas, failed to deliver the anticipated

outcomes (Hogwood and Gunn, 1984). Consequently, policy analysts have sought to

understand the 'implementation gap' (Dunsire, 1978) not only in terms of end state

success or failure but also with regard to process effectiveness. Given the newness of

clinical governance, it will be important to capture both facets of implementation in

order to start building up a sense of how policy is turned into practice and what that

looks like on the ground.

Secondly, it is worth noting that Hogwood and Gunn (1984), in considering

implementation failure, distinguish between non-implementation and unsuccessful

implementation. The authors (ibid) view the former as a policy which has not been put

into effect as intended; in the case of the latter, the policy has been carried out in full

but fails to produce the outcomes intended. Either of these circumstances

may arise from what has been described as bad implementation, bad luck or bad

policy; apparently, the latter is usually the least likely to be offered as a cause of

failure.

Wolman (1981) makes a similar point and suggests that failure is not necessarily due to

poor implementation but may in fact be the result of problems or inadequacies in one or

more components of the policy process, either in the policy formulation stage or what he

calls the 'carrying out stage', or both (Table 2.4).

Table 2.4: Factors influencing the success of policy implementation

Components of the formulation process:
1. Problem conceptualisation
2. Theory evaluation and selection
3. Specification and objectives
4. Program design: causal efficacy; political feasibility; technical feasibility; secondary consequences; regulation vs. incentives implementation considerations
5. Program structure

Components of the carrying out process:
Resource adequacy: program funding; staff resources
Management and control structure: authority leakage due to lack of knowledge/will
Bureaucratic rules and regulations
Political effectiveness
Feedback and evaluation: substantive; agency response
Co-ordination
Intergovernmental administration
The carrying out process

Wolman (1981)

If, as Wolman (ibid) suggests, policy failures are more often due to failures of

formulation than implementation, then this poses an additional challenge for the

implementers. Successful implementation will not only depend on their skills in

relation to the implementation process but will also depend on their ability to critically

evaluate the policy process upstream and also the level of discretion they have been

afforded to overcome any problems inherited from the earlier part of the process.

Given the problems that may originate upstream, it would seem prudent not to make the

assumption that policy, in the form it reaches those charged with its implementation, is

necessarily amenable to implementation with any degree of success.

Finally, Gunn (1978) has taken the ideal type approach in presenting a model of 'perfect

implementation' (Table 2.5). Although the author (ibid) regards this as an 'unreal

concept', its purpose is to aid systematic thinking, not only about the reasons for

implementation failure but also about possible ways in which these elements may be

addressed in order to improve the chances of success.

Table 2.5: Perfect implementation

1. That circumstances external to the implementing agency do not impose crippling constraints;

2. That adequate time and sufficient resources are made available to the programme;

3. Not only are there no constraints in terms of overall resources but also that, at each stage in
the implementation process, the required combination of resources is actually available;

4. That policy to be implemented is based upon a valid theory of cause and effect;

5. That the relationship between cause and effect is direct and that there are few, if any,
intervening links;

6. That there is a single implementing agency which need not depend on other agencies for
success or, if other agencies must be involved, that the dependency relationships are minimal
in number and importance;

7. That there is complete understanding of, and agreement upon, the objectives to be achieved,
and that these conditions persist throughout the implementation process;

8. That in moving towards agreed objectives, it is possible to specify, in complete detail and in
perfect sequence, the tasks to be performed by each participant;

9. That there is perfect communication among and co-ordination of, the various elements or
agencies involved in the programme;

10. That those in authority can demand and obtain perfect obedience.
Gunn (1978)

To those involved with policy implementation generally, and clinical governance in

particular, there is probably no need to labour the notion of ideal type when considering

the 10 aspects of 'perfect' implementation outlined above. Although Harrison (2000),

commenting on this, makes the point that Gunn's ideas do not represent a model of

organisational change, he still seems to regard them as constituting a 'classic

management text' equally relevant after the passage of over 20 years as both a

mechanism for illuminating implementation situations and as a tool for getting policy

into practice.

2.7 CHAPTER SUMMARY

This chapter has provided an overview of the emerging literature around clinical

governance. From this, it has been possible to follow the shift from contributors

offering personal perspectives and views on what clinical governance 'should' look like

in practice and how it 'should' be implemented to details of actual implementation

efforts from insider and outsider researchers. There is, apparently, no consensus on the

meaning of clinical governance and interpretations seem to range from business

excellence through to clinical standards.

The newness of the clinical governance concept and the absence of a definitive model

of clinical governance or a blueprint for the implementation process is likely to pose

something of a challenge to health care managers and practitioners. However, insights

from the wider policy literature on implementation suggest this level of under-

development is not uncommon. This literature also alerts the reader to the dual

meaning of implementation and to the fact that problems with implementation are not

necessarily due to the implementation process but may originate from a number of

sources 'upstream' in the policy formulation process. Finally, the notion of perfect

implementation is a sobering one and clearly highlights some of the pitfalls that may lie

ahead.

The lack of any 'received wisdom' concerning exactly what clinical governance should

look like or how it should be introduced into the Trusts also poses a challenge in terms

of research design. Given the lack of literature around clinical governance per se, the

most logical step is to seek out related literatures; and, since clinical governance seems

to resonate with the language of TQM and CQI, a review of this body of literature

seemed perfectly appropriate under the circumstances.

CHAPTER 3

LITERATURE REVIEW - TOTAL QUALITY MANAGEMENT &


CONTINUOUS QUALITY IMPROVEMENT

Total Quality Management... 'a sort of Rorschach test'


(Dean and Bowen, 1994)

3.1 INTRODUCTION

With the publication of documents such as 'The new NHS: Modern, Dependable'

(Department of Health, 1997) and 'A First Class Service' (Department of Health,

1998), the UK government has undoubtedly raised the profile of Health Service quality;

however, it would be wrong to think that a regard for quality in medicine is a new

phenomenon. On the contrary, Ellis and Whittington (1993) argue that this interest in

quality dates back to ancient times and cite early guidelines for education and practice

as examples. Whilst this may indeed be so, Morgan and Murgatroyd (1994) point out

that concern for quality is quite different from systematic management which needs an

'intentional framework'. This rather suggests that the emphasis on quality management

is a more modern concept and the emergence of Total Quality Management (TQM) as a

whole system approach would appear to support this.

As highlighted in earlier chapters, the influence of TQM on the language of clinical

governance is evident. What is less clearly expressed is the contribution the field of

TQM may make to the implementation of clinical governance; a notion that will be

explored in this chapter. Firstly, the complexity of health care quality will be

discussed; the review will then move on to the wider TQM literature focusing on the

philosophy and principles of this concept and certain aspects of implementation; in

particular, frameworks, critical success factors and barriers. Finally, the review will

consider the challenge of implementing TQM in health care with specific reference to

earlier experiments in the UK (Joss, Kogan and Henkel, 1994; Joss and Kogan, 1995)

and Norway (Øvretveit, 1999; Øvretveit and Aslaksen, 1999).

3.2 QUALITY IN HEALTH CARE

3.2.1 Quality in Health Care - A Mixed Picture

Over the years, there has been a wide variety of quality initiatives. Taylor (1996) lists a

total of 25 examples ranging from accreditation systems to TQM whilst Pollitt (1993)

paints an evocative picture of the NHS 'bubbling with a mixed stew of quality

initiatives'. Whilst there has undoubtedly been an admirable amount of activity taking

place in this area, much of this has been less than a resounding success (Pollitt, 1996).

More recently, in a review of Continuous Quality Improvement (CQI) in American

health care organisations, Blumenthal and Kilo (1998) conclude:

'...... there simply are no organisation-wide success stories out there -


no shining castles on the hill to serve as inspirations for a struggling
industry'.

One of the reasons for the rather mixed picture of health care quality presented above is

an apparent failure of the NHS to learn from past experience. According to Klein

(1998), the NHS consistently seems to fall prey to a 'collective amnesia' and in doing

so not only loses its 'collective memory' but also its 'understanding of NHS history'

(Øvretveit, 1998). Governments have been struggling for decades with the notion of

quality in health care (Klein, 1998) and the boards of NHS Trusts would do well to

remember this as they strive to discharge their new statutory duty for quality through

the clinical governance framework. Given this tendency to forget, it is less than

surprising to find the importance of learning from the past stressed both in policy

documents (Department of Health, 1998; 1999) and also in the recent literature

(Donaldson and Muir Gray, 1998; Klein, 1998; Walshe, 1998b). It therefore seems

appropriate to explore further the concept of quality and consider some of the reasons

why it seems to represent such a challenge for its many stakeholders.

3.2.2 Quality - A Complex Concept

Quality is notoriously difficult to define irrespective of whether the concept is applied

to health care or the commercial context (Ellis and Whittington, 1993; Dale, 1994).

Quality is often used as an umbrella term which covers everything but touches nothing

in particular; however, in order to manage quality, it is necessary to be explicit about

what is meant when the term is used (Moss, 1995). This is no easy task particularly in

the absence of any universally agreed definition of health care quality (Walshe, 1998c).

In fact, there seem to be almost as many definitions as there are authors on the subject

and a number of factors which contribute to this lack of consensus will now be

considered.

Quality - 'in the eye of the beholder'

Few would deny the complexity of the arena within which health care is delivered.

There are multiple stakeholders, both internal and external to the organisation. These

include a variety of professional groups who deliver a myriad of services ranging from

physical to psychosocial interventions. In addition, there are the consumers of health

care who invariably have needs which are highly heterogeneous. Added to this,

individual perceptions of service quality may be influenced by personal values, beliefs,

past experience and even one's own vested interests. This richness is often captured by

writers in their descriptions of health care quality; a concept some see as complex and

often contested (Sutherland and Dawson, 1998) or even 'slippery' (Kerrison, Packwood

and Buxton, 1994). For others, 'quality is like beauty' and whilst it has positive

connotations, its meaning usually lies 'in the eye of the beholder' (Kritchevsky and

Simmons, 1991).

Table 3.1: Dimensions of quality

Maxwell (1984): relevance to need; effectiveness; access to services; equity; social acceptability; efficiency and economy.

Klein (1998): respect; choice; information; technical competence.

Øvretveit (1992): client quality; professional quality; organisational quality.

Quality: more to it than meets the eye

Just as the context of quality is multidimensional, so it seems is the concept; elements

of which are illustrated in Table 3.1. Maxwell (1984), for example, describes six

dimensions to which Klein (1998) has added a further four (perhaps somewhat tongue

in cheek) to make the '10 Commandments' for the NHS, and still another perspective is

offered by Øvretveit (1992).

Quality: a political concept


Given the complexities described above, the difficulties in reaching a universal

definition of health care quality may be appreciated; however, there are those who

argue that the lack of an explicit definition is not accidental and suggest this is due to

the political nature of the quality concept (Øvretveit, 1998). Within this paradigm,

issues of power and professionalism are interlinked both at the macro and micro levels.

At the macro level, control of quality is considered to lie at the heart of professionalism.

Through control of both initial entry to the profession and also of subsequent practice,

professional bodies such as the General Medical Council and Medical Royal Colleges

claim to provide their own quality assurance (Pollitt, 1990). At the micro or individual

level, the unique body of knowledge which distinguishes the professions from other

groups in the NHS also confers a significant level of autonomy (Weiner, Shortell and

Alexander, 1997) which, it is argued, is translated to mean that the professionals know

best and if left alone will assure quality in health care (Pollitt, 1996). This medical

model of quality is criticised as paternalistic as the needs of patients are invariably

defined by the professionals (Pollitt, 1996; Packwood, Pollitt and Roberts, 1998) and,

within this model, the voice of the patient tends to be the one least heard (Hart, 1996).

The introduction of clinical governance has made explicit the fact that the quality of

health services is a corporate concern. The whole system approach which is embedded

in the concept brings clinical as well as non-clinical quality within the remit of

managers as well as individual clinicians who are professionally responsible and

accountable for their actions. Certain objectives contained within the modernisation

agenda and outlined in the NHS Plan (Department of Health, 2000) will be delivered

using quality improvement methodologies which have been imported from industry and

which are based on the philosophy of TQM and CQI. In this way, clinical governance

poses an interesting challenge to the notions of professional control over the quality

agenda; how this will be played out in practice remains to be seen but some of the

reasons for this challenge will now be discussed.

3.2.3 Quality and the Challenge of CQI

The concept of CQI, defined as 'an ongoing effort to provide care that meets or

exceeds customer expectations' (Weiner, Shortell and Alexander, 1997; p493), is at the

heart of a total quality or a whole systems framework and the ideas contained within the

above definition may be regarded as a considerable challenge to the medical model of

quality in two ways.

Firstly, whilst all change may not bring improvement, CQI always implies change

(Berwick, 1996; Garside, 1998) so it follows that individual clinical practice will have

to change to reflect this if improvement is to occur. Such change is not always

welcome. Preferred ways of working may have been followed for many years and not

only serve as a frame of reference for the working life of the clinician concerned but

can also form the basis of empires carefully nurtured over a long career (Marris, 1993).

Secondly, CQI focuses on the customer definition of quality whereas, in health care, the

concept of the patient as a customer is relatively new and 'the very idea of asking

customers what they value is seen by some as revolutionary' (Morgan and Murgatroyd,

1994; p74). In addition, the notion of the customer relates to internal customers in

addition to the patient or carer as end user. The idea of internal customer-supplier

relationships brings a focus onto the processes of care and clinical teams, which, for

those who operate in a more individualistic mode, seems to suggest something of a

change in practice which is not always appreciated.

Thus, from the preceding discussion, it can be seen that the concept of quality is, in

itself, a complex phenomenon which invariably serves as a challenge to all associated

with it. As the next section will demonstrate, the complexity of quality is matched by

the complexity of TQM, not only as a concept but also in practice.

3.3 TOTAL QUALITY MANAGEMENT - THE CONCEPT

3.3.1 TQM - A Hazy and Ambiguous Concept

Given the growing body of literature addressing some or other aspect of TQM, Dean

and Bowen's (1994) assertion that it is a 'ubiquitous organisational phenomenon' has

some resonance. The acronyms TQM/CQI not only feature widely in the peer

reviewed literature but also in general discussion within organisations around the topic

of quality - a discourse not uncommonly peppered with associated slogans such as 'right

first time', 'quality is everyone's business', 'a journey not a destination'. Whilst such

slogans may be regarded as buzzwords, they may also contribute to a sense that this

'phenomenon' is based on a codified theoretical base, reinforced by some who refer to

the notions of 'conventional wisdom' (Boerstler, Foster, O'Connor et al, 1996), or

'received wisdom' (Dale, Boaden and Lascelles, 1994).

This author's initial exploration of the literature was accompanied by a growing sense

of bewilderment. It was, therefore, somewhat comforting to find that this sense of

confusion is not uncommon (Teixeira, 1999) and that, far from being a 'cut and dried

reality' (Spencer, 1994), TQM is variously described as 'a hazy ambiguous concept'

and 'a sort of Rorschach test' (Dean and Bowen, 1994), an 'amorphous philosophy'

(Spencer, 1994), which means 'different things to different people' (Yong and

Wilkinson, 2001). It is therefore unsurprising to find a profusion of definitions in the

literature which gives credence to those who highlight the fact that there is no universal

definition of TQM (Wilkinson and Witcher, 1993; Grant, Shani and Krishnan, 1994;

Teixeira, 1999).

Dean and Bowen (1994) suggest that the meaning attributed to TQM is a function of

belief and experience, aspects of which are also likely to colour one's perception of

TQM and vice versa. TQM has been described as, for example, 'a major change

movement' (Scott and Cole, 2000), 'a tool for change' (Yong and Wilkinson, 2001), 'a

new and emerging paradigm of management' (Wilkinson and Witcher, 1993; Teixeira,

1999), 'a company-wide philosophy of quality improvement' (Grant, Shani and

Krishnan, 1994) and 'a systematic approach to management' (Spencer, 1994). Whilst

understanding is likely to influence definition, Boaden (1997) also suggests that authors

tend to adopt that which is most suited to their own particular purposes whilst some

avoid any explicit definition altogether. This is apparently not confined to the literature

but is also a feature of practice; diversity of academic background and opinion

contributed to the decision of one project team not to adopt a single definition of TQM

(Boaden, 1997) - unfortunately the author does not comment on whether this proved

problematical for either the team or the research process as a whole.

In the absence of a universal definition, Teixeira (1999) advises the practitioner to

return to core principles as a means of navigating what he has termed 'an oversupply of

ideas'. However, this approach does not offer the safe passage suggested as, in reality,

much of the generic TQM literature appears to present a very varied picture of the

conventional/received wisdom. Hill and Wilkinson (1995) rather optimistically suggest

that there is now reasonable agreement around the basic principles of TQM; however,

this was apparently not borne out by The Conference Board (1993). In citing this

study, Boaden (1997) highlights the fact that out of the 20 studies examined by the

Board, only six out of 23 elements were cited three or more times and these in only

seven studies.

3.3.2 The Search For Core Principles

To demonstrate this lack of consistency, Table 3.2 highlights some of the ways in

which TQM has been conceptualised. Some authors offer three similar albeit not

entirely the same principles (Dean and Bowen, 1994; Sitkin, Sutcliffe and Schroeder, 1994;

Hill and Wilkinson, 1995), others opt for eight elements (Dale, Boaden and Lascelles,

1994), ten points (Oakland, 1995) or twelve factors (Powell, 1995). In some instances,

the origins or the process by which the author has arrived at these conceptualisations

have not been stated. In several cases they have been distilled from the writings of the

'gurus'; in others they represent a synthesis of the syntheses of others. Dean and Bowen

(1994) on the other hand present their notion of TQM in terms of principles, practices

and techniques; however, Boaden (1997) does not appear convinced this categorisation

is as neatly nested as it first appears. The author (ibid) challenges the value of such

'lists' except as a basis for discussion and debate.

Table 3.2: TQM conceptualised

Dean and Bowen (1994): customer focus; continuous improvement; teamwork.

Sitkin, Sutcliffe and Schroeder (1994): customer satisfaction; continuous improvement; organisation as a total system.

Hill and Wilkinson (1995): customer orientation; continuous improvement; process orientation.

Dale, Boaden and Lascelles (1994):
1. Commitment and leadership of CEO
2. Planning and organisation
3. Using tools and techniques
4. Education and training
5. Involvement
6. Teamwork
7. Measurement and feedback
8. Culture change

Oakland (1995):
1. Organisation needs long term commitment to constant improvement
2. Adopt the philosophy of zero defects and change culture to right first time
3. Train the people to understand the customer-supplier relationship
4. Do not buy products or services on price alone - look at total cost
5. Recognise that improvements of systems need to be managed
6. Adopt modern methods of supervision and training - eliminate fear
7. Eliminate barriers between departments by managing the process - improve communications and teamwork
8. Eliminate: arbitrary goals without methods; all standards based on numbers alone; barriers to pride of workmanship; fiction - get facts by using the correct tools
9. Constantly educate and retrain - develop the 'experts' in the business
10. Develop a systematic approach to manage the implementation of TQM

Powell (1995):
1. Committed leadership
2. Adoption and communication of TQM
3. Closer customer relationships
4. Closer supplier relationships
5. Benchmarking
6. Increased training
7. Open organisation
8. Employee empowerment
9. Zero-defects mentality
10. Flexible manufacturing
11. Process improvement
12. Measurement

The diversity of offerings in Table 3.2 does not indicate a consensus in the core

principles of TQM. One might look to the work of the 'quality gurus' to illuminate this

quest - but apparently this does not provide a neat solution either. Although there is a

recognition that TQM has evolved from the work of a number of early 'founding

fathers', views about who enjoys membership of this club appear to vary. In the

literature supporting this part of the review, where the authors have referred to the

'gurus', each has included Deming and Juran; however, as Table 3.3 demonstrates, the

configurations vary, usually without any explanation as to why a trio has been selected

rather than a quintet.

Table 3.3: The TQM 'Gurus'

Dean & Bowen (1994): Deming; Juran; Crosby.

Hackman & Wageman (1995): Deming; Juran; Ishikawa.

Boaden (1996): Deming; Juran; Feigenbaum.

Dale, Boaden & Lascelles (1994): Deming; Juran; Crosby; Feigenbaum.

Wilkinson (1995): Deming; Juran; Feigenbaum; Ishikawa.

Hill & Wilkinson (1995): Deming; Juran; Crosby; Feigenbaum; Ishikawa.

These multiple configurations may demonstrate the richness of the landscape but any

syntheses based on such different configurations as illustrated above perhaps need to be

explicit about the rationale upon which they are derived; otherwise the value of any such

exercise may be limited and even contribute to further obfuscation.

In searching for the core principles of TQM, it is perhaps worth noting Boaden's (1997)

claim that the early authors did not use the term 'TQM'; certainly their key texts do not

contain either the acronym or the term Total Quality Management in the indices

(Crosby, 1979; Deming, 1986; Juran, 1988) although Feigenbaum is an exception

(1991).

In an interview (Romano, 1994), Deming's comments on the subject, cited below, seem

to add weight to Boaden's (1997) assertion regarding terminology:

'The trouble with Total Quality Management - failure of TQM, you call
it - is that there is no such thing. It is a buzzword. I have never used the
term, as it carries no meaning.'

Hackman and Wageman (1995), perhaps rather optimistically in light of this discussion,

assert that there is substantial agreement amongst the work of those they describe as

'the movement's founders'; however, others are not convinced (Dean and Bowen, 1994).

Table 3.4 summarises the core principles of Deming (1986), Crosby (1979) and Juran

(1988); Feigenbaum, however, did not develop such a convenient, pocket-sized

encapsulation of his teaching.

Table 3.4: TQM core principles

Deming - 14 Points:
1. Create constancy of purpose
2. Adopt the new philosophy
3. Cease dependence on mass inspection
4. End practice of awarding business on price alone
5. Improve constantly and forever the system of production and service
6. Institute training
7. Adopt and institute leadership
8. Drive out fear
9. Break down barriers between staff areas
10. Eliminate slogans, exhortations and targets for the workforce
11. Eliminate work standards/quotas
12. Remove barriers to pride of workmanship
13. Institute education and self-improvement
14. Take action to accomplish the transformation

Crosby - 14 Steps:
1. Management commitment
2. Establish quality improvement teams
3. Introduce quality measurement
4. Evaluate cost of quality
5. Develop quality awareness
6. Take corrective action
7. Establish committee for zero defects program
8. Supervisor training
9. Zero Defects Day
10. Goal setting
11. Error cause removal
12. Recognition
13. Quality councils
14. Do it over again

Juran - 'Trilogy':
1. Quality planning
2. Quality control
3. Quality improvement

Dale and colleagues (Dale, Boaden and Lascelles, 1994) take the view that the writings

of the 'founding fathers' represent variations on a theme and point out that they were all

consultants who sought to differentiate their work from that of others in order to

position themselves in the market and attract clients. In the eyes of these authors (ibid;

p20) this appears to have been a successful tactic and they suggest that the teachings of

four men at least can be characterised by a particular approach: Crosby - company-wide

motivation; Deming - statistical process control; Feigenbaum - systems management

and Juran - project management.

A number of authors have commented on the similarities and differences within the

early work on TQM (Dean and Bowen, 1994; Dale, Boaden and Lascelles, 1994;

Beckford, 1998). Others would point to a failure to establish links with the existing

management literature (Spencer, 1994; Teixeira, 1999; Scott and Cole, 2000).

Consequently, elements in the early work may appear at odds with management theory;

a particular example of this is in relation to the notions of reward and appraisal

(Hackman and Wageman, 1995). In addition, there are those that point to omissions;

for example, there is a sense of rational linearity surrounding TQM which ignores the

political nature of organisations (Wilkinson and Witcher, 1993), the notion of

universality in TQM application does not recognise contingency theory (Sitkin,

Sutcliffe and Schroeder, 1994) and, although there are potentially profound

implications for the human resource, little attention is given to how this might be

managed (Wilkinson, 1995).

3.3.3 In Search of Theoretical Underpinnings

Whilst there might seem to be little in the way of consensus around what constitutes the

core principles of TQM, there is some agreement over the perception that a theoretical

basis underpinning the work of the early practitioners was not articulated (Grant, Shani

and Krishnan, 1994; Anderson, Rungtusanatham and Schroeder, 1994; Sitkin, Sutcliffe

and Schroeder, 1994). Anderson and colleagues (Anderson, Rungtusanatham and

Schroeder, 1994) take the view that Deming's '14 Points' evolved over a number of

decades and represent generalisations based on his experience as a consultant and, as

such, his energies were directed at implementation rather than theory development and

empirical testing. In terms of empirical evidence, Anderson and colleagues (Anderson,

Rungtusanatham and Schroeder, 1994) argue that there is little available to support the

effectiveness of Deming's approach in particular. Others (Dean and Bowen, 1994) look

at the broader and varied picture of TQM implementation in general and assert that

there is little in the way of theory to explain why TQM is considered a success in some

organisations and a failure in others. This apparent lack of a solid theoretical

foundation underpinning TQM is attributed to a number of factors which will now be

explored.

TQM - A practitioner-led movement

An area of agreement amongst certain academics is an acknowledgement that academia

has not been in the vanguard of the TQM movement (Wilkinson and Witcher, 1993;

Dean and Bowen, 1994; Anderson, Rungtusanatham and Schroeder, 1994; Boaden,

1996; Scott and Cole, 2000); instead, this has largely been practitioner-led. There have

been a number of criticisms of the 'practitioner literature' that has emerged; the

approach has been largely personal anecdotes (Hackman and Wageman, 1995) aimed at

managers who, in their desire to believe their efforts are successful, will settle for

anecdote (Scott and Cole, 2000). Again, with the managerial audience in mind, some

suggest that the publications have been 'long on prescription and shorter on analysis'

(Hill and Wilkinson, 1995) thus promoting a 'quick-fix' for those searching for off-the-

shelf solutions or as Deming (1986) has described such products - 'instant puddings'.

Lack of academic interest

The lack of theory, however, is not just attributed to the practitioner focus; the lack of

academic interest in this phenomenon has also been acknowledged. According to

Powell (1995), 'no other management concept/practice has received so much

practitioner attention with so little academic study'. As a result of this, Yong and

Wilkinson (2001) argue that the practitioners and consultants have had a free hand in

shaping the formative stages of the movement's development and contributed to the

diversity within the field. Boaden (1997) suggests that TQM has been dismissed by

some as one in a long line of managerial fads which is likely to have a fairly predictable

life-cycle.

Dean and Bowen (1994) suggest that, given the lack of theoretical frameworks,

researchers have been reluctant to conduct research based on the consultant-oriented

frameworks that were available. The authors (ibid) also suggest that TQM transcends

the boundaries of existing theories and, although the field of management theory is

populated by multiple disciplines, individual theories are often discipline-bound. As a

result, it is argued that existing theories are unlikely to be broad enough to support

research into this phenomenon. This may be true of some theories but perhaps does not

take account of systems theory or certain change theories which focus on large-scale

organisational change. Also, research should not necessarily preclude the use of single

theories if the intention is to bring new insights from existing disciplines. However, the

boundary-spanning nature of TQM might pose more of a challenge in practice - as

highlighted earlier when even the task of definition appeared troublesome within a

particular project team (Boaden 1997). The issue might be more around trying to

capture the holistic nature of the phenomenon; an example of which is Deming's (1986)

emphasis on the importance of implementing all of the '14 Points'.

Whatever the reason for the apparent lack of earlier academic interest, one outcome of

this seems to have been that practice has been 'propelled ahead of theory' (Anderson,

Rungtusanatham and Schroeder, 1994) so much so that, in an issue of the Harvard

Business Review, voices from industry called for a greater engagement of the academic

community which, it appears, did not fall on deaf ears (Robinson, Akers, Artzt et al,

1991).

In addition to papers appearing across a wide range of peer reviewed academic

journals, special issues of journals such as the Academy of Management Review have

sought to develop a forum for theory development and, in that issue alone, a number of

valuable, albeit different, contributions were in fact made. For example, Dean and

Bowen (1994) present their interpretation of the principles and practices of TQM and

compare these to the domains included in the Malcolm Baldrige National Quality

Award. Spencer (1994) takes three organisational models - mechanistic, organismic,

cultural, as a basis for examining TQM and highlights similarities and differences

between each of the models and aspects of TQM. Anderson and colleagues (Anderson,

Rungtusanatham and Schroeder, 1994) use a number of methods to identify a theory

underlying the Deming approach. Sitkin and colleagues (Sitkin, Sutcliffe and

Schroeder, 1994) apply a contingency theory perspective to the implementation of

TQM.

Edited texts such as that by Cole and Scott (2000) have sought to make explicit the

contribution of existing organisational theory to the work of researchers in the field of

quality. Contributors have highlighted a number of related dimensions such as the

concept of culture (Cameron and Barnett, 2000; Hamada, 2000), corporate

performance (Easton and Jarrell, 2000), and different aspects of Human Resource

Management (HRM) (Ichniowski and Shaw, 2000; Kochan and Rubinstein, 2000).

These publications demonstrate the increasing contribution academics are making to

the quality movement and to the ever-growing body of literature surrounding it.

However, there is the view that the audience of management theorists is essentially

academia with the expectation that, in time, there will be a diffusion of the content to

the practitioner community through either teaching or consultancy (Dean and Bowen,

1994). Whilst this work may indeed serve to formalise the theoretical context over time

as advocated by Anderson and colleagues (Anderson, Rungtusanatham and Schroeder,

1994), one wonders where this leaves action researchers and practitioners in the

meantime.

Teixeira (1999) advises the practitioner to start by seeking out the core principles of

TQM but, given the previous discussion, one may question whether there is such a

thing. Witcher (1995) takes the view that there is no core, arguing instead that the

debate is about its very absence. On the other hand Dean and Bowen (1994) present a

very clear core and claim that, far from being 'a hodgepodge of slogans and tools, it is

a set of mutually reinforcing principles each of which is supported by a set of practices

and techniques'. Teixeira (1999) suggests that the practitioner's mindset should be a

dynamic balance of all contributions, but how realistic is this? Although the notion of

'conventional/received wisdom' has a certain intuitive appeal to the hard-pressed

practitioner, it is important to heed the warnings that TQM is not based upon a sound

theoretical foundation and any such 'dynamic balance of all contributions' may lead to

confusion with concepts being misunderstood and misapplied or even to what Dale and

colleagues have termed TQM paralysis (Dale, Boaden and Lascelles, 1994). Whilst

Teixeira (1999) may argue that lack of any universal definition gives a freedom to act,

Wilkinson (1995) suggests that a 'fuzzy' understanding of the concept will result in the

adoption of a 'fuzzy' model. Table 3.5 outlines a number of ways in which TQM may

be conceptualised. These range from a way of managing the business to a tool;

operationalisation of either would require very different approaches.

Table 3.5: Degrees of TQM conceptualisation


1. TQM as a program;
2. TQM as human resource management;
3. TQM as quality management;
4. TQM as business process management;
5. TQM as a concept and a tool;
6. TQM as marketing;
7. TQM as a paradigm;
8. TQM as a manifestation of post-modern organisation.

Witcher (1995)

This brings the review round to Hackman and Wageman's (1995) question - is there such a

thing as TQM or is it merely a 'potpourri' of initiatives under a common banner?

In practice, many of the key elements of the early founders are perceived to have been

ignored altogether or sanded down, which has led the authors (ibid) to conclude that

'rhetoric is winning over substance' and that the 'science is fading and the slogans

staying'. In the absence of any definitive 'received wisdom', it seems important that

those seeking to implement total quality are explicit about the way in which it is being

conceptualised and that the model chosen to translate the concept into practice is

consistent with this. The following section will now consider some of the frameworks

and critical success factors for and barriers to implementation.

3.4 IMPLEMENTING TOTAL QUALITY MANAGEMENT

As with many other aspects of TQM, the literature on implementation is wide and, for

the purpose of this section, the discussion will focus on three main areas: models and

frameworks, critical success factors (CSFs), and barriers to and problems associated with

the implementation of TQM. The aim is to provide a flavour of the literature around

these aspects rather than an in-depth analysis and discussion.

3.4.1 TQM Implementation Frameworks

The earlier discussion around policy implementation in Chapter 2 highlighted the

difference between two aspects of implementation - that is the end state and the change

process. Some have observed that not all authors differentiate accordingly (Yusof and

Aspinwall, 2000a) and, in the field, this may lead to confusion and possible

inconsistency with regard to the 'what' and the 'how' of TQM implementation; factors

which are considered to contribute to implementation failures (Glover, 1993).

The implementation of TQM is considered by some (Kanji and Barker, 1990; Glover,

1993) as the most complex activity an organisation can undertake and the need for

culture change is cited as the main reason for this complexity. Culture change

notwithstanding, if TQM is conceptualised as a way of managing the business rather

than some sort of localised initiative, then the scale of the change required to deliver the

whole system intervention which is intended, over time, to move the organisation from

one state to another, will undoubtedly contribute to the complexity of the task. Despite

(or perhaps because of) this inherent complexity, TQM implementation, as with

implementation per se, has received relatively less attention than other aspects of total

quality until recently. Sproull and Hofmeister (1986) suggest that the lack of interest is,

in part, because implementation is not generally glamorous or exciting but is, instead,

about the 'nuts and bolts, details, and mundane problems.' Also, because

implementation is often not clearly bounded, it does not merely follow on from a

decision and it may take place through a large number of actors.

One mechanism for providing a boundary for and increasing the explicitness of the

implementation process is to adopt a framework. In fact, the importance of this is

highlighted below:

'......one of the most influential factors in ensuring total quality


management adoption success is the formulation of a sound
implementation framework prior to embarking on such a change
process' (Yusof and Aspinwall, 2000a).

Yusof and Aspinwall (2000b) propose that an implementation framework should

consist of a number of characteristics (Table 3.6).

Table 3.6: Characteristics of a TQM implementation framework

Systematic;
Simple in structure; easily understood;
Clear links between elements outlined;
General enough to suit different contexts;
Represent a road map and a planning tool for implementation;
Answers 'how to' and not 'what is';
Implementable.

Yusof and Aspinwall (2000b)

The authors (Yusof and Aspinwall, 2000a) also suggest that the utilisation of a

framework will bring a number of benefits: it may serve as a vehicle for raising

awareness of the concept and facilitate a common understanding of what is to be

achieved and how; it could also enable the organisation to introduce the elements in a

'more comprehensive, controlled and timely manner'.

At Appendix 1, seven implementation typologies are outlined; the authors have

presented these variously as TQM implementation frameworks (Dale and Boaden,

1994; Ghobadian and Gallear, 1997), stages (Kanji and Barker, 1990), steps (Glover,

1993; Stamatis, 1994; Oakland, 1995), or process (Rand, 1994). Across this collection,

there is a variation in the level of abstraction, range and focus. As an example of the

latter, Stamatis (1994) emphasises the importance of project management whilst

Ghobadian and Gallear (1997) aim their framework at small and medium sized

enterprises. Some frameworks are written by consultants and others by academics;

some are based on experience whilst others have been derived from case studies. Some

include what could be considered end state elements but most deal with the process of

implementation and also emphasise the central aspect of change within this. Whilst

Dale and Boaden (1994) explicitly state that their framework is not designed as a 'how

to', the accompanying text does include 'process' aspects in its discussion of the four

main elements.

Rather than presenting a comparison of these typologies which would be of limited

value given the diversity referred to above, the intention is merely to demonstrate the

variation that exists within them. This adds weight to the notion that there are no simple

recipes or prescriptions for successful TQM implementation; largely because, as

demonstrated previously, there is little consensus over the 'what' and, given the

complexity and uniqueness of individual organisations, the 'how' must therefore be

adapted to the specific context. Owing to the variation between authors, frameworks

must be adopted with caution and adapted with regard to the local context rather than

treated as the received wisdom which, in view of the earlier discussion on the 'what' of

TQM, seems a matter of interpretation. However, the following discussion of critical

success factors and pitfalls should provide valuable insights which may aid this

interpretation process.

3.4.2 TQM Implementation - Critical Success Factors

A concept which is related to the notion of the frameworks discussed in the previous

section is that of critical success factors (CSFs). Oakland (1995, p25) defines CSFs as

'what must be accomplished for the mission to be achieved.' In this sense, CSFs serve

as an important vehicle for the translation of the goal into practice through the

subsequent identification of critical processes, activities and tasks and then the

development of key indicators to measure performance. Oakland (1995) stresses that

the CSFs should inform the design stage of the implementation process thus enabling

the match of the 'what' to the 'how' and thereby reducing the likelihood of what he

terms the 'danger gap', a situation in which there are goals but no mechanisms for their

achievement.

A number of authors have proposed a set of factors which they consider to be critical to

successful implementation. Whilst the title of their papers may refer to successful

implementation, it seems one must bear in mind the distinction between

implementation as a process and as an end state because both or either may be

discussed under this banner. Although some elements such as leadership or senior

management commitment may relate to both content and process, their

operationalisation will vary depending on how they are categorised (the 'what' and/or

the 'how'). By way of illustration, Table (3.7) outlines two examples of mainly

'process' CSFs and Table 3.8 a further two examples of mainly 'end state' CSFs.

Table 3.7: TQM implementation - process CSFs

Porter and Parker (1993):
1. Management behaviours: clear leadership; vision; commitment to TQM; involved in TQM process; strategic issue
2. Strategy for TQM: specific objectives; incorporate into business plans; establish means for CQI
3. Organisation for TQM: organisational structure; team structure; hierarchy for authority
4. Communication for TQM: quality awareness; publish achievements
5. Training for TQM: all employees; ongoing process; scope and depth to meet individual need
6. Employee involvement
7. Process management and systems: documented quality system
8. Quality technologies: SPC etc.

Yusof and Aspinwall (1999):
1. Leadership and support from top management
2. Providing effective and appropriate training for employees
3. Measuring results and performance
4. Conducting continuous improvement
5. Adopting a QA system
6. Sufficient financial resources
7. Providing relevant training for senior management/staff level
8. Favourable work environment
9. Selective application of tools and techniques
10. Involving suppliers in improvement
11. Desirable HR practices

Table 3.8: TQM implementation - end state CSFs

Ahire, Golhar and Waller (1996):
1. Top Management Commitment
2. Customer Focus
3. Supplier Quality Management
4. Design Quality Management
5. Benchmarking
6. SPC Usage
7. Internal Quality Information Usage
8. Employee Empowerment
9. Employee Involvement
10. Employee Training
11. Product Quality
12. Supplier Performance

Saraph, Benson and Schroeder (1989):
1. Divisional top management leadership for quality
2. The role of the quality department
3. Training
4. Product/service design
5. Supplier quality management
6. Process management (design and control)
7. Quality data and reporting
8. Employee relations

Other authors have adopted a particular perspective in their quest to increase the

effectiveness of implementation (Table 3.9) and have considered organisational factors

which affect implementation success (Mann and Kehoe, 1995), organisational factors
which are 'most likely to result in TQM-consistent behaviors' thereby impacting on

implementation success (Shea and Howell, 1998) and factors relating to the 'mind-set'

of senior managers which are also thought to contribute to successful TQM

implementation (Taylor, 1996).

Table 3.9: Organisational factors in TQM implementation

Mann and Kehoe (1995):
1. Process
2. Type of employees
3. Shared values
4. Management style
5. Organisational structures
6. Number of employees
7. Industrial relations

Shea and Howell (1998):
1. Persuasion: leadership
2. Enactive attainment: organisation structure; job design
3. Vicarious experience: modelled behavior; training
4. Cognitive mediators: self-efficacy; outcome expectancy

Taylor (1996):
1. Understanding of TQM
2. Motivation for implementation
3. Perception of customer satisfaction
4. Perception of financial impact of TQM
5. Perception of extent of employee involvement

Although there are some similarities across the authors cited above either in relation to

specific elements such as senior manager commitment/leadership or the more

generally expressed management style, the key purpose of presentation has been to

display the diversity of approaches and to highlight once again how the duality of

implementation may/may not be explicitly expressed and therefore may pose a danger

to the unwary.

The CSFs in Tables 3.7 and 3.8 could be perceived as the positive side of the coin - the

enablers; the next section will look at the opposite side of the same coin - the potential

barriers to implementation.

3.4.3 Barriers, Pitfalls, and Obstacles to the Implementation of TQM

Given the variation in the sets of CSFs presented earlier, it is worth adopting a different

perspective and reviewing the other side of the CSF 'coin'; this is of particular significance

given the fact that two in three TQM efforts are not considered successful (Brown,

1993). A number of consultants and academics, either from their own general

experience in the field (Brown, 1993; Katz, 1993; Dale and Cooper, 1994; Whalen and

Rahim, 1994; Yong and Wilkinson, 1999), based on a review and synthesis of the

literature (Davis, 1997) or from case study research (Newall and Dale, 1991; Krishnan,

Shani, Grant et al, 1993; Kolesar, 1995; Kanji, 1996) have presented a range of issues

likely to impact on the success or otherwise of the implementation effort (Table 3.10).

These have variously been described in terms of pitfalls (Katz, 1993; Kanji, 1996), the

common mistakes of managers (Dale and Cooper, 1994), barriers (Whalen and Rahim,

1994), obstacles (Yong and Wilkinson, 1999), breakdowns (Davis, 1997), problems

(Krishnan, Shani, Grant et al, 1993; Newall and Dale, 1991) and 'reasons why TQ fails'

(Brown, 1993).

The level of detail varies amongst the papers cited above but it is generally a broad

treatment of a range of issues; albeit some with a particular perspective such as

management mistakes (Dale and Cooper, 1994) or, less commonly, the failings of a

particular managing director (Krishnan, Shani, Grant et al, 1993). Others have focused

their attention more closely on areas such as organisational politics (Wilkinson and

Witcher, 1993), culture and structure (Tata and Prasad, 1998) or the human resource

issues associated with TQM implementation (Snape and Redman, 1995).

Amongst the observations cited above which are experiential in origin, there is a sense

that implementation failure is generally due to the implementation process rather than

the concept itself. The authors tend to recognise that TQM involves 'change'; this may

be reflected as the cultural change required to move from a fire-fighting approach to

running the business to one which is based upon 'planning, prevention and

improvement' (Dale and Cooper, 1994). Alternatively, the nature of the change may

reflect a whole system perception of TQM where implementation will require changes

to the way the business itself is run thereby addressing change on multiple fronts

(Davis, 1997). However, although the authors refer to TQM, not all are explicit about

what 'it' is, or the organisational context in which 'it' has been implemented. Whilst this

does not preclude the identification of general themes, it suggests that any comparison

of the issues raised by the authors in Table 3.10 on a like for like basis may be of

limited value.

Table 3.10: Barriers to TQM implementation

8 TQM pitfalls (Katz, 1993):
1. CEO delegates responsibility for TQM
2. Failing to recognise that every company and environment is different
3. Applying tools of TQM before needs determined and direction established
4. Conducting training before support for TQM
5. Training employees before managers
6. Overemphasis on technical tools over personal skills, leadership and management
7. Failure to incorporate suppliers
8. Not celebrating success

Common mistakes made by senior managers (Dale and Cooper, 1994):
1. Lack of commitment, awareness and vision
2. Failure to commit sufficient time to learn about TQM
3. Failure to become personally involved in planning for its introduction and development
4. Underestimating the resources needed to start and develop a process of QI
5. Not establishing an effective infrastructure
6. Not committing sufficient resources to TQM education and training
7. Treating output and cost targets as the main business priorities

10 reasons why total quality fails (Brown, 1993):
1. Disguising cost control as TQ
2. Measuring too many of the wrong things
3. Lack of support from the top
4. Too much too soon
5. Too little too late
6. Dual structures
7. Focus on activities vs. results
8. Can't get out of phase 1
9. No one gets rewarded for quality and customer satisfaction
10. Total quality as a fad

Breakdowns in TQM (Davis, 1997):
1. Failure to execute fundamentals
2. Lack of focus and dissipation of resources
3. Creation of a separate TQM organisation
4. Poorly integrated complementary management programs
5. Inadequate linkages with financial results

Obstacles to full TQM (Yong and Wilkinson, 1999):
Lack of senior level leadership; lack of long-term strategy or vision; lack of time; lack of resources and infrastructure; lack of action and consistency; lack of middle management commitment; fear among front line employees; values and attitudes of organisational actors; barriers between departments/functions; high labour turnover; cultural issues; certification and measurement issues.

Common barriers to implementation (Whalen and Rahim, 1994):
1. Poor planning
2. Lack of management commitment
3. Resistance of the workforce
4. Lack of proper training
5. Teamwork complacency
6. Use of an off-the-shelf programme
7. Failure to change organisational philosophy
8. Lack of resources provided
9. Lack of effective measurement of QI

Table 3.10 contd: Barriers to TQM implementation

Pitfalls to implementation (Kanji, 1996)
1. MD undermined creation of constancy of purpose
2. MD failed to adopt new philosophy
3. MD failed to become a leader of change
4. MD would not resource training
5. MD management style relied on fear and intimidation
6. MD created barriers between departments
7. MD inhibited growth of learning culture
8. MD allowed certain people to become overworked
9. MD failed to make decisions based on evidence
10. MD blocked company-wide CQI
11. MD made teamworking and quality improvement second
12. MD created policies in secret

Problems (Krishnan, Shani, Grant, Baer, 1993)
• Confusion from pursuit of multiple quality initiatives
• Inability to translate broad quality goals into quantitative targets
• Appropriate organisational structure for implementation
• Communication difficulties
• Managing transition from individual to organisational learning
• Emphasis on short term profits
• Lack of understanding

Problems (Newall and Dale, 1991)
• Management commitment
• Hostility
• Reluctance to change
• Outdated working practices
• Lack of drive and determination by senior managers
• Poor managerial skills
• Union relationships
• Individual performance appraisal

Kolesar, 1995
• Failure to manage by fact
• Initiatives running out of gas
• Lack of infrastructure to support problem solving
• Lack of understanding
• Delegation of implementation
• Posturing-implementation gap

Rather than trying to fit the elements in Table 3.10 into a single taxonomy, perhaps the

value lies in their diversity. These authors identify a wide range of issues which,

collectively, appear to encompass many aspects of the total system. To illustrate this,

elements have been selected and aggregated in a whole system framework (Miles,

1997) in Table 3.11, which highlights the pervasive nature of the potential barriers to successful TQM

implementation.

Table 3.11: A whole system view of the barriers to TQM implementation

1. Vision
• lack of vision
• total quality as a fad
• lack of focus

2. Strategy
• lack of strategy
• uncoordinated activity
• lack of time
• lack of resources

3. Structure
• lack of leadership
• lack of management commitment/responsibility
• lack of implementation structure

4. Infrastructure
• poor communications
• poor planning
• lack of measurement

5. People
• lack of training/education
• workforce resistance
• front line fear

6. Competencies
• teamwork issues
• managerial skills

7. Culture
• values/attitudes
• organisational philosophy

According to the experience of one author (Kolesar, 1995), the issues he has

highlighted (Table 3.10) are 'not isolated horror stories unfairly selected from

otherwise healthy TQM implementations'. Thus, as seemingly common occurrences,

these issues represent important challenges to the implementation of TQM not only in

relation to the process itself but ultimately to the quality of the outcome achieved.

Whether an awareness of such challenges originates from the experiences of others or

from an internal analysis of TQM initiatives which have failed to deliver the results

anticipated, Katz (1993) warns that organisations that fail to identify and address such

pitfalls do so at their own peril. These organisations tend to seek 'a fresh TQ Something

Else' (Kolesar, 1995) which Brown (1993) regards as a 'waste of time when we haven't

properly applied what we have'.

Given that CSFs and other aspects of implementation are highly context-specific, Black

and Porter (1996) suggest that research into TQM has now passed beyond trying to

capture the concept in convenient taxonomies and needs instead to focus on

the practical experiences of organisations. In light of these comments, the next section

will focus on case studies of TQM implementation.

3.4.4 Implementation Case Studies

In the preface to their collection of cases, Oakland and Porter (1994; p xxii) claim that

'the value of illustrative cases in an area such as TQM is that they inject a reality into

the conceptual frameworks developed by authors in the subject'. They go on to argue

that cases based on real situations provide a useful basis for analysis, evaluation and

comparison with one's own experience. Such cases may also serve as a learning

vehicle for groups to work through the issues raised in the safety of a theoretical

exercise without the responsibility of actual operationalisation.

Presentation of cases appears to take a variety of forms, not just in terms of single or

multiple cases but also a range of approaches within these two variables. Oakland and

Porter (1994) have arranged their cases in sequence to illustrate aspects of the 'Bradford

Model' of TQM which serves as the conceptual framework. Thus cases are included

which highlight specific issues around, for example, the 'foundations' of TQM, the role

of the quality systems, tools, techniques and measurement and so on. Olian and Rynes

(1991) adopt a similar approach; having identified a set of organisational processes to

support the implementation of TQM, the authors (ibid) then present well-known

companies such as Motorola and Honda as practical examples of the theory discussed

within the text.

In order to follow the implementation of TQM over a period of time, longitudinal case

studies have been undertaken which have enabled researchers to compare and contrast

approaches between study sites against selected criteria. A study carried out by Newall

and Dale (1991) looked at eight companies, in both manufacturing and service

industries, in respect of: quality department organisation, the introduction and

development of quality improvement, employee involvement, measurement of progress

and problems encountered with TQM introduction and subsequent development. The

paper highlights similarities and differences between the cases across these dimensions

and concludes that the majority of problems experienced throughout the

implementation process were the result of poor planning by the study sites.

Lillrank and colleagues (Lillrank, Shani and Kolodny et al, 1998) researched eight

companies in eight countries over a three year period to understand how they organised

for continuous improvement. Seven design requirements and seven corresponding

design elements are identified. A comparative analysis is presented across these cases

together with a discussion exploring the rationale underpinning the design choices

made. The authors conclude that there is no 'one best way' to design for continuous

improvement and suggest that whilst it might be wise not to transfer specific

programmes from one context to another, the design requirements might in themselves

be more transferable (Table 3.12).

Table 3.12: TQM design requirements

1. Is continuous improvement part of ordinary work (integrated) or not (parallel);
2. Is continuous improvement work performed by a permanent group or a task group;
3. Are the group members from one or several functions;
4. Are the group members from the same or different levels;
5. Is goal setting made centrally or in groups;
6. Are decisions about implementation made by the management hierarchy or the group;
7. Are incentive and compensation systems to be used to reward effort and results.

Lillrank, Shani and Kolodny et al (1998)

Single cases have included both private and public sector organisations. Some deal

specifically with TQM implementation in small and medium enterprises (SMEs) (Goh

and Ridgway, 1994; Yusof and Aspinwall, 2000c) and seek to demonstrate that they

incur particular problems during implementation due to their size and, consequently,

should not be treated as small versions of large organisations. Whilst this may certainly

be wise advice, one could argue that some of the specific issues highlighted, such as

lack of resources in terms of time, finance and human resources are not confined to

these smaller enterprises and this is supported by similar work with large organisations

(Dale and Cooper, 1994). Other authors have focused on large corporations and

illustrate the challenges of TQM implementation faced by both the private and public

sectors. Thus the experiences of a range of companies/organisations are captured and

presented in case study format; these range from commercial companies such as an

aluminium manufacturer (Kolesar, 1993), a manufacturing company in the oil

distribution industry (Snowberger, 1996), a telecommunications company (Krishnan,

Shani, Grant and Baer, 1993) and an oil pipeline and transport company (Anderson and Adams, 1997)

through to the efforts of federal government (Dobbs, 1994).

Whether multiple or single site approaches are adopted, each provides valuable insights

from a variety of perspectives and those of Newall and Dale (1991) and Lillrank and

colleagues (Lillrank, Shani and Kolodny et al, 1998) have been cited above as

particular examples. From a careful study of cases, the reader is also likely to

determine similarities in terms of approaches, issues needing to be addressed, obstacles

experienced; however, the depth with which the cases may be explored often varies.

Common elements have been included as Table 3.13.

Table 3.13: Common themes from TQM case studies

• Senior management commitment/Leadership;


• Middle managers - their new role as coach/facilitator, threats to
their power base, potential or actual obstacles;
• Approaches to training and education;
• Teams - problem-solving; training and support needs;
• Stages/phases of implementation;
• Tools and techniques;
• Change - TQM fundamentally about change;
• Culture - culture to support CQI.

Diversity seems to be a common thread running through much of the TQM literature.

In the previous section, it was suggested that this needs to be acknowledged as a

pervasive feature. Perhaps it is a function of the lack of any precise definition of TQM

and the likelihood that there is 'no one best way' with regard to implementation

(Lillrank, Shani, Kolodny et al, 1998). In addition, issues receiving attention are likely

to be highly context specific; their inclusion reflecting the impact relative to other

aspects which, although relevant, may be obscured, perhaps not reported, but no less

present in the case (Oakland and Porter, 1994). The conceptual frameworks which may

have served as the lenses for the inquiry are not always as explicitly stated as in the

work of Oakland and Porter (1994) or Newall and Dale (1991) and, irrespective of

whether they are explicit or implicit, conceptually they may be quite different. As a

related but different point, given the argument that the implementation process is highly

context specific (Lillrank, Shani, Kolodny et al, 1998), it is perhaps relevant to note that

the level of contextual detail provided as background to the studies also varies. This

may have implications for the value of the case study as a learning vehicle where one is

hoping to make a comparative analysis between the case organisation and hypothetical

or real organisational experience.

The value of a case study could be encapsulated in a statement such as 'interesting as

far as it goes'. This should alert the reader to the fact that the case is not likely to be the

entire picture but rather a simplification which may bring an associated danger - 'the

danger of making implementation seem clear-cut and obvious' (Oakland and Porter,

1994; p xxi). Although the espoused premise guiding implementation may be that

TQM is a philosophy of management, a way of running the business, this does not

necessarily mean that the reader will get a sense of the whole system implications of

implementation from the cases. For example, Olian and Rynes (1991) address many

aspects of not only the 'what' of implementation but also the 'how'. The authors (ibid)

argue that the key to establishing TQM as a way of life is to bring all of the key

elements into alignment which they sum up broadly in terms of people, processes and

outcomes; unfortunately, there is no direct mention of the need to develop a structure

and infrastructure to support the processes described.

So far the discussion of TQM generally and cases in particular has largely been

confined to the literature from the industrial sectors. This has provided a useful

background for the main focus of this thesis and the remainder of this section will now

concentrate on TQM and CQI in health care.

3.5 TQM AND CQI IN HEALTH CARE - A GENERAL OVERVIEW

3.5.1 TQM - An Ambiguous and Hazy Concept within Health Care

Although contributors to the field of TQM and CQI in health care recommend that we

go outside of this context and investigate alternative models (McLaughlin and Kaluzny,

1990), the decision to start with the literature around TQM in industry largely followed

from an earlier sense of confusion generated from an initial scanning of the literature on

TQM and CQI in health care. This initial foray was accompanied by growing

confusion; terminology such as TQM and CQI is used interchangeably by some authors

(Shortell, O'Brien, Carmen et al, 1995; Motwani, Sower and Brashier, 1996; Zabada,

Rivers and Munchus, 1998), or defined as separate entities by some of the same

authors (Shortell, Levin, O'Brien et al, 1995). There are also multiple definitions of the

concepts and characteristics; typologies of the key elements of TQM and CQI appear to

vary depending on the writer.

The rationale for looking at the industrial literature went along the lines of - TQM

started in industry over a decade before its spread to health care; therefore, there may

be greater clarity and consensus in that arena. As the discussion in the previous

sections would suggest, this search for clarity in another arena was built on an optimism

that turned out to be unfounded. The reality was indeed that a lack of clarity and

consensus was not confined to health care, giving credibility to the description of TQM

as something of a 'quagmire' (Nwabueze and Kanji, 1997). So as not to add to this

lack of clarity, for the remainder of this thesis, the abbreviation 'TQM' will be used to

denote the organisational framework that enables the process of 'CQI' to take place; the

terms will not be used interchangeably except where other authors are being cited.

Recognition of the ambiguity described above was an important step forward but it also

suggested that many of the issues relating to the implementation of TQM and CQI in

health may be similar to those in industry. Although there are those who might

question the validity of this (Arndt and Bigelow, 1995), a degree of similarity is indeed

reflected in both literatures. In fact, a number of papers focusing on aspects of

implementation in health care seem to present models or sets of CSFs which appear to

have a generic flavour. For example, Nwabueze and Kanji (1997) claim that the failure

of implementation in the public sector is due to the absence of context-specific models;

the authors then present a model to overcome this which could easily be applicable to

the industrial setting. Jackson (2001) outlines a set of key actions for achieving

successful TQM implementation in health care - here again one could argue that, within

this approach, there is little unique to the health service context. These contributions

are interesting because they may serve as a challenge to the 'not invented here'

mentality described by Pollit (1996) and may even assist in the transfer of learning

from one industry to another; particularly with regard to the issues associated with

implementation.

3.5.2 The Challenge of TQM Implementation in Health Care

The challenges facing implementation in industry have been highlighted earlier in this

chapter; authors considering this notion from the health care perspective raise issues

related to a number of features which they regard as specific to this context. Short and

Rahim (1995) point to structural elements and comment on complex, bureaucratic

organisations which potentially limit cross-boundary working and communication.

Others note the presence of a culture of blame which may inhibit openness and cause

fear (Arndt and Bigelow, 1995) or the various subcultures (Zabada, Rivers and

Munchus, 1998) - related to which there is sometimes tribalism and even turf battles

(Wakefield and Wakefield, 1993). Wakefield and Wakefield (ibid) also draw attention

to the complexities of service delivery as opposed to manufacturing and batch

production which are considered to be easier to specify, control and measure. Still

others note that the complexity is compounded by the fact that the customer is a co-

producer of health care (Zabada, Rivers and Munchus, 1998).

Alternatively, one might take the view that the particular structural and cultural

challenges outlined above are unlikely to be confined to health care. Customer co-

production and the complexity of the service delivery process are key characteristics of

services in general (Lovelock, 1992); however, the complexity is likely to vary

depending upon the nature of the service. Contrast, for instance, operative procedures

requiring the skill of the clinician with those of the highly regulated and scripted

delivery which occurs at some fast food outlets.

Whilst Blumenthal and Kilo (1998) share the view that many of the challenges to

implementation are not unique to health care, they also argue that there are exceptions

to this and offer two examples to illustrate this belief. The first concerns the wider

context in which health care organisations operate. The lack of competition means that,

unlike commercial organisations within a market where TQM may be considered a

matter of survival, those health care organisations that are meeting their financial

targets have little incentive to drive through the profound changes that TQM may

require. Attention may instead be focused elsewhere such as on expansion strategies

rather than the core business.

Secondly, the authors (ibid) see securing the involvement of medics as a key challenge.

This group is regarded as having considerable autonomy to practice which, before the

advent of clinical audit, was considered outside the remit of management. TQM may

be perceived as a challenge to both medical autonomy and independence and

consequently resisted. The authors also suggest (ibid) that medical training does not

prepare the profession for either TQM or CQI; instead, individualism is instilled

together with an allegiance to the profession. As a result, many medics find it difficult

to adopt the behaviours required to make TQM and CQI a reality - particularly with

regard to multi-disciplinary teamwork and notions of internal customer-supplier chains.

The challenge of securing medical involvement in TQM and CQI is a common theme

in the literature. Berwick and colleagues (Berwick, Godfrey and Roessner, 1990) point

to the fact that in terms of accountability, medics in the past have only, if at all, justified

their clinical decisions to each other and then only informally. Many fail to engage

with total quality for a variety of reasons; for example, they do not see its relevance to

their work, believing they are already delivering quality care, and thus the time involved in

improvement activity is regarded as an opportunity cost (Zabada, Rivers and Munchus,

1998). Some authors see implementation of TQM as requiring a paradigm shift.

McLaughlin and Kaluzny (1990, p9) present their perception of the professional and

TQM paradigms in Table 3.14 and this provides some indication of the likely 'shift'

that may be involved.

Table 3.14: The professional paradigm and the TQM paradigm

Professional paradigm:
• Individual responsibilities
• Professional leadership
• Autonomy
• Administrative authority
• Professional authority
• Goal expectations
• Rigid planning
• Response to complainants
• Retrospective performance appraisal
• Quality assurance

TQM paradigm:
• Collective responsibilities
• Managerial leadership
• Accountability
• Participation
• Performance and process expectations
• Flexible planning
• Benchmarking
• Concurrent performance appraisal
• Continuous improvement

McLaughlin and Kaluzny (1990, p9)

In addition to the professional challenges, others relating to the health care context have

been identified. For instance, the patient as a consumer may now be gaining in terms of

'voice' but is still unable to exert the pressure of the market mechanism, and professional

knowledge and experience may still be disproportionately reflected in the shape of service

delivery (Wakefield and Wakefield, 1993). In the wider context, several authors have

brought to the fore a significant challenge to creating constancy of purpose - the

constant threat to goal deployment as a result of policy initiatives which seem to shift

the strategic goal posts (Kim and Johnson, 1994; Nwabueze and Kanji, 1997).

There is undoubtedly a vast US literature and an ever growing UK literature relating to

the implementation of TQM and CQI in health care to which academics, consultants

and practitioners have all made a significant contribution. There is much to be learned

from this wealth of writing and especially from the results of large-scale evaluations of

TQM implementation. Two such studies are of particular importance; one of which

took place in the UK NHS and the other in the Norwegian health care system. Both

studies provide a rich picture of the challenges posed by the attempt to implement TQM

and serve as important lessons for those involved in the more recent whole system

approach to quality management in the NHS - clinical governance. A brief overview of

each research project will now be presented.

3.6 EXPERIMENTING WITH TQM IN THE UK NATIONAL HEALTH


SERVICE AND THE NORWEGIAN HEALTH SERVICE

In 1989, a Department of Health initiative to pilot the implementation of TQM was

launched. Researchers subsequently evaluated the approach to quality improvement

adopted by sites in 12 health authorities over a three year period; eight of these were

taking part in the pilot and a further four non-TQM sites were studied as comparators

(Joss, Kogan and Henkel, 1994; Joss and Kogan, 1995). In 1993, three Norwegian

hospitals were selected by the Norwegian Medical Association to act as test sites for the

implementation of TQM and funding was provided. The three pilot sites and three

other hospitals (which did not receive additional funding) formed a network and met at

regular intervals to share their experiences and learn from each other. The quality

journeys of these six were evaluated over a four year period (Øvretveit, 1999;

Øvretveit and Aslaksen, 1999).

Table 3.15: Factors predictive of TQM movement

1. Demonstrated senior management commitment to and understanding of TQM;


2. A well-developed and well-documented implementation strategy put in place with clear
objectives, time-scales, action plans, and review mechanisms;
3. Strong/persevering TQM co-ordinator - board level appointment or at least direct access to
chief executive;
4. Structure overseeing implementation of TQM; strategy for integrating this with normal line-
management meetings as soon as managers are trained and continuous improvements have
moved out to front line staff;
5. Comprehensive baseline assessment of service quality;
6. Early effort to gain the support of the medical consultants;
7. Sufficient funding for TQM facilitators;
8. Standard setting only as part of strongly monitored CQI;
9. Comprehensive training using mixed classroom/workplace model; tools and techniques not
just awareness;
10. Explicit strategy/resources for recognising and rewarding progress;
11. Organisational changes only after evaluation.

Joss and Kogan (1995; p151)

The authors of the final project report on the UK experiment (Joss, Kogan and Henkel,

1994) offered a number of factors which they regarded as predictive of significant

TQM movement (Table 3.15). Although the researchers acknowledged that there was

an extensive range of quality improvement initiatives taking place in the pilot sites and

that there had been a shift towards aspects of TQM practices, this was considered to be

variable not only between but also within sites. Overall, apart from two sites,

implementation was considered to have been unsuccessful and the final report

documents deficiencies across all 11 of the factors cited above. Those researching the

Norwegian TQM pilots (Øvretveit, 1999; Øvretveit and Aslaksen, 1999) do not appear

to express any overarching view of the success or otherwise of the initiative. However,

the impression gained from their reports (ibid) is that although there were examples of

positive initiatives taking place, this did not constitute an integrated approach to quality

management. It would also seem that many of the implementation gaps highlighted in

the Norwegian hospitals relate to similar factors as those identified in Table 3.15 which

are derived from the UK experience.

Both of these evaluations provide extremely valuable insights into earlier efforts to

establish a whole system approach to quality within the UK NHS and within a

mainland European health system. Both experiments appear to have suffered, at least

in part, from a combination of partial implementation, implementation failure and non-

implementation. Progress was not uniform, in fact, it varied between and within sites.

The experiences of TQM implementation noted above suggest that many of the

challenges facing the pilot sites echo those identified in the earlier part of this review

dealing with barriers, pitfalls and so on. It would seem therefore that these are

problems not reserved for TQM in commercial settings but are also apparent in the

public sector. The identification of the 11 factors described in Table 3.15 should serve

as an important signpost for those presently implementing clinical governance in the

NHS as should the lessons from the wider TQM literature presented in this chapter.

The challenges to the successful implementation of TQM appear pervasive and pose a

significant threat to the unwary who do not learn lessons from those who have gone

before.

3.7 CHAPTER SUMMARY

This second part of the literature review has focused on quality in general and TQM in

particular. The literature has given a flavour of the complex and contested nature of

health care quality and provided an overview of the complexity surrounding TQM both

in industry and in the health sector. Far from being able to glean any form of

universally supported 'received wisdom', the concept appears hazy and contested and

there is certainly no blueprint for the implementation process. Instead what seems very

apparent is that TQM is not for the faint-hearted as it poses a significant challenge to

the organisation which stems from multiple sources. The risk is that implementation is

only partial, perhaps only a watered down version of what the whole approach should

be. This lack of attention to the notion of wholeness may result in an initiative that

never achieves its full potential; perhaps because of the way it has been conceptualised

and/or gaps in the implementation process. The barriers to the successful

implementation of TQM are many, varied and well-documented. As such, they should

serve as valuable signposts for those wishing to avoid the pitfalls that await the unwary;

especially when the change that accompanies total quality is of such transformational

proportions.

One aspect of TQM for which there is some degree of consensus, in the literature at

least, is that an inherent feature of TQM is the notion of change. This is succinctly

expressed by Berwick (1996) who notes that while all change does not involve

improvement - all improvement involves change. Given the central role of change in

improvement, the next chapter will consider the notions of both change and change

management.

CHAPTER 4

LITERATURE REVIEW - CHANGE AND CHANGE


MANAGEMENT

'In a changing world the only constant is change'


(Carnall, 1999; p143)

4.1 INTRODUCTION

The phenomenon of change whether at the level of an industry or an individual

organisation has attracted considerable interest for decades. Whilst agreeing that

concepts of change originating in the private sector should not be 'mechanistically

trundled across the sectoral divide', Pettigrew and colleagues (Pettigrew, Ferlie and

McKee, 1992; p5) suggest a preoccupation with difference and sectoral transfer has

limited the extent to which understanding of the change processes observed within the

NHS has been informed by the wider literature. The authors (ibid) acknowledge that

there are sectoral differences - particularly in terms of politicisation and also the power

and social position of the professionals, but they argue that there are enough similarities

between the private sector and the NHS for experience gained in the former to

illuminate thinking in the latter.

An issue both sectors appear to share is the challenge of effective change management.

Tushman and O'Reilly (1996) ask why, when 'organisations are filled with sensible

people and usually led by smart managers' is change so difficult. In the context of

health care, the 'implementation deficit' is a problem commonly identified by

researchers (Ferlie, 1997). Ferlie (ibid) argues that the increasing awareness of the sort

of difficulties experienced by organisations seeking to deliver the change agenda has, in

part, fuelled the emergence of the change management literature. This has increased

recognition that the change process can be facilitated (Iles and Sutherland, 2001) and

should be managed (Nadler and Tushman 1995; Anderson and Ackerman-Anderson

2001; Cummings and Worley 2001).

Total Quality Management is considered by some as an important vehicle for delivering

large-scale change (Ferlie, 1997). TQM implementation frameworks were presented in

the previous chapter; this part of the literature review will continue with an exploration

of more generic change models. However, before reviewing the change management

literature and, in deference to Anderson and Ackerman-Anderson's warning (2001) that

change needs to be understood before it can be managed, theories of change will be

presented together with an overview of the more common conceptualisations of change

as a phenomenon.

4.2 THEORIES OF CHANGE

It appears that the literature on organisational change is not only diverse but at an early

stage of theoretical development (Dunphy, 1996). The author (ibid) observes that there

is 'no one, all-embracing, widely accepted theory of organisational change and no

agreed guidelines for action by change agents'. This diversity not only reflects the

complexity of the phenomenon but also the range of concepts imported from other

disciplines in an effort to increase understanding - punctuated equilibrium (Gersick,

1991), sense making and sense giving (Gioia and Chittipeddi, 1991), even tectonic

plates (Reger, Gustafson, Demarie et al, 1994). Given the complexity of the field,

Dunphy (1996) asks whether a search for some unitary theory would even be a

legitimate endeavour. On a similar theme, Hawkins (1997) advocates the use of

different lenses for organisational analysis arguing that this 'polyocularity' provides a

variety of perspectives which will ultimately lead to a more holistic view of the field.

Van de Ven and Poole (1995) make a similar point to those above and warn that

applying a single perspective will reveal only a partial account of a complex

phenomenon and advocate instead alternative pictures of the same organisational

processes. They offer four theories which they suggest might serve as the building

blocks for explaining the processes of organisational change; namely: life-cycle,

teleology, dialectics and evolution. They suggest the value of this approach is that

other theories tend to relate to outcomes rather than the process itself which is of

particular interest here, given the intended focus on both aspects of implementation,

and therefore worth presenting in a little more detail.

According to life-cycle theory, change is immanent and pre-programmed although

influenced by the environment. Change progresses as a unitary sequence of

phases/stages which must take place in a prescribed order because each is a precursor to

the next. Teleological theory assumes that the entity is purposeful and adaptive;

capable of constructing an end state, taking action to reach it and monitoring progress

along the way; development is seen as progression towards the goal although the

trajectory is not necessarily sequential as in the life-cycle theory. Dialectical theory

acknowledges the pluralistic nature of organisations and the presence of competition,

conflict and political activity. Change occurs when the status quo is overthrown but the

political process may equally sustain the status quo. Finally, evolutionary theory sees

the change process as a continuous cycle of 'variation, selection and retention' and, in

this way, is influenced by the natural sciences. Thus evolutionary change is ongoing,

incremental and cumulative. The authors (ibid) suggest that the four theories within

this typology offer fundamentally different explanations of the change process but

caution that they should be regarded as 'ideal types' and that change within

organisations is more complex than any one theory taken alone.

The following section will explore a number of ways in which change may be

conceptualised; the reader may recognise shades of the theories presented above in the

concepts that will now be considered.

4.3 CHANGE CONCEPTUALISED

4.3.1 Incremental and Discontinuous Change

Whilst acknowledging the theoretical pluralism surrounding the notion of change,

Wilson (1992) highlights a certain homogeneity in the language used to describe this

pervasive phenomenon. According to Nadler and Tushman (1995), at the broadest

level, there are two types of organisational change - incremental and discontinuous.

Writers on change seem to adopt a variety of synonyms in differentiating between the

two and, in addition to incremental/discontinuous change, references to first

order/second order, evolutionary/revolutionary, developmental/transformational,

continuous/episodic change may be found in the literature (Pettigrew, Ferlie and

McKee, 1992; Van de Ven and Poole, 1995; Weick and Quinn, 1999; Anderson and

Ackerman-Anderson, 2001; Cummings and Worley, 2001).

In describing incremental change, Nadler and Tushman (1995) present the phenomenon

in the context of organisational effectiveness where modification and improvement is

ongoing with the aim of continuously refining the fit between the strategy of the

organisation and components of the internal system. Such changes tend to be focused,

occur over a finite period of time - usually weeks/months; they are not necessarily

small, on the contrary some change may be significant in terms of the resources needed

and their subsequent impact. However, incremental changes are usually bounded by

the existing values and mission of the organisation and, in this sense, are within the

current organisational frame and so essentially concerned with continuity.

In contrast, the focus of discontinuous change is on the creation of new configurations

rather than improvements to the status quo; thus, a new strategic direction may be the

outcome accompanied by a radical reconstruction of the entire delivery system in

support of this. According to Nadler and Tushman (1995), discontinuous change

requires a complete break from the past; instead of working within the existing frame,

the goal is to change the organisational frame completely. This degree of change is

often associated with a degree of shock and is even painful and traumatic.

Discontinuous change is usually related to major change in the industry in which the

organisation operates.

Adding to the perspective outlined above, Weick and Quinn (1999) argue that

incremental change is characterised by learning and driven by an organisational

alertness and an inability to remain inert which reinforces the previous notion of

organisational effectiveness. This type of change tends to take place at the micro-level;

is therefore faster and consists of 'mini episodes' which could subsequently serve as the

building blocks for transformational change. In contrast, discontinuous change is

characterised by replacement and often triggered by the organisation's failure to adapt

incrementally. This type of change is strategic thus wider in scope and slower to

achieve.

As highlighted earlier, there is a certain amount of consistency in the literature

concerning the basic typology of change and its characteristics at a general level.

However, there are those who challenge the notion that discontinuous change requires

'a complete break from the past' (Nadler and Tushman, 1995). In contrast, Goodstein

and Burke (1991) argue that frame-bending change does not imply wholesale, complete

change 'in any and all respects', some things inevitably stay the same which is

important for those experiencing the change:

'......the principle here is that for people to be able to deal with


enormous complex change - seeming chaos - they need to have
something to hold on to that is stable'. (Ibid).

According to Hamel (2001), it is more about the balance between keeping hold of what is still

relevant and not being afraid to let go of what may have served the organisation well

but is no longer appropriate given the nature of the change.

Anderson and Ackerman-Anderson (2001) present a variation on the two dimensional

typology presented above and offer three types of change which are described in terms

of developmental, transitional and transformational. Developmental and

transformational change appear similar to the notions of incremental and discontinuous

change outlined earlier. Transitional change appears more as an interim point in the

incremental - discontinuous continuum and is offered as a response to more complex

environmental drivers which require replacement rather than improvement but not in all

components of the business. Inclusion of the notion of transitional change is a useful

reminder of hybrid forms; however, it should not be confused with the notion of

transition. In the generic change implementation model developed by Beckhard and

Harris (1987), transition denotes movement from the current to future state irrespective

of the nature of the change.

In addition to this initial deconstruction of change presented by Nadler and Tushman

(1995), the authors have added another dimension, i.e. time, to that of continuity

(incremental/discontinuous). They reason that by differentiating between reactive and

anticipatory change and combining these with the continuity dimensions, a further four

types of change may be identified; each of which they regard as characteristically

different (Table 4.1).

Table 4.1: A matrix of continuity and time (Nadler and Tushman, 1995)

Incremental Discontinuous

Anticipatory TUNING REORIENTATION

Reactive ADAPTATION RE-CREATION

4.3.2 Planned and Emergent Change

Another perspective on change relates to the extent to which it occurs as a result of a

deliberate effort or whether it emerges over time; these aspects are generally referred to

as planned and emergent change respectively. A recent review of the change literature

(Iles and Sutherland, 2001) touches on both of these notions and defines planned

change as 'deliberate, a product of conscious reasoning and actions' (ibid, p14).

Others, such as Cummings and Worley (2001) present a more detailed treatment of

planned change. In particular, they draw attention to the role of Organisation

Development (OD) in planned change efforts which, they argue, has moved from an

earlier concentration on incremental changes to one which increasingly incorporates

large-scale, discontinuous change. The authors (ibid, p39) are at pains to debunk the

criticism of planned change as a 'rationally controlled, orderly process' and highlight

instead:

'...... (its) chaotic quality, often involving shifting goals, discontinuous


activities, surprising events and unexpected combinations of changes.'

This chaos is often compounded in their opinion by managers who initiate change

without having any clear idea of either their goals or strategies for achievement. In

contrast, emergent change is described as a phenomenon which 'unfolds in an

apparently spontaneous and unplanned way' (Iles and Sutherland, 2001; p14). The

process is characterised by drift rather than design and presumably without the

deliberate intent which characterises planned change.

4.3.3 Ideal Types and Composites of Change

The body of literature surrounding change is massive and yet the phenomenon itself is

one which is often poorly understood and managed (Goodstein and Burke, 1991). The

preceding discussion has attempted to present just some of the conceptualisations of

this subject and, because of the vastness of the literature, this constitutes a small

proportion of the thinking in the field around how change might be viewed. Different

approaches offer different perspectives and insights and most of that which is presented

here takes the form of what Van de Ven and Poole (1995) refer to as 'ideal types'.

These serve not only as a valuable simplification tool but also as a mechanism for

organising thinking around change (Ferlie, 1997). Ideal types contrast with the reality

of the change process which is infinitely more complex whether at the level of an

industry or an individual organisation.

The reality of complex change is not bounded by either/or decisions but represents

composites of some/all of the elements outlined within the depictions of ideal type. For

instance, with regard to the notions of planned and emergent change, Pettigrew and

colleagues (Pettigrew, Ferlie and McKee, 1992), Iles and Sutherland (2001) and

Cummings and Worley (2001) are keen to dispel the 'myth' of the linear change process

which seems to form the basis of criticisms of planned change. Even planned change is

likely to demonstrate emergent properties; Klein (1995), concerning the 1991 NHS

reforms, comments that 'the dynamics of change once unleashed, created their own side

effects and surprises.' This is perhaps to be expected given the author's earlier

observation that: 'the plans as initially announced were little more than outline

sketches. The details were filled in during the course of implementation' (Ibid). Given

the emergent nature of clinical governance as a policy, it seems that Klein's comments

are as relevant today as almost a decade ago.

Continuing with the theme of composites, Wilson (1992) suggests that change is a

relative concept and therefore, it is more appropriate when considering organisational

change to think in terms of the degree of change taking place rather than

conceptualising it as the antithesis of stability. Nadler and Tushman (1995) make a

similar point - even when organisations appear to be going through a period of inertia,

they are not standing still. In this, Tushman and O'Reilly (1996) see the seeds of a

dilemma for managers. In the short term they are constantly struggling to increase the

degree of internal and external alignment but this evolutionary change is not enough for

sustained success. In the long run they may be required to destroy the processes which

have delivered the very alignment that has made the organisation successful.

Organisations need to become 'ambidextrous' and operate in a world that is

characterised by periods of stability and incremental change whilst at the same time

remain alert to the triggers for revolutionary change. As Pettigrew and colleagues

(Pettigrew, Ferlie and McKee, 1992; p299) observe, there is 'no respite from the ever

present duality of holding together an organisation whilst simultaneously reshaping it'.

Given this enduring challenge, it is perhaps timely to explore the principles of change

management and identify models which might assist managers in the delivery of the

change agenda.

4.4 CHANGE MANAGEMENT

Whilst cliches such as the one cited at the beginning of this chapter are seemingly

abundant (Goodstein and Burke, 1991), they are still likely to strike a chord with those

currently working in the NHS - despite the New Labour promise of evolution not

revolution in the health care arena (Department of Health, 1997). Whilst the NHS

reforms of 1991 brought change within the existing structure, the more recent reforms

have heralded a new configuration of the health economy. The scale of this change

serves as a challenge to Klein's comment (1995) on the Thatcher Government reforms

about which he observed that 'the exhausting convulsions precipitated by Working for

Patients have dampened the appetite for further change'. It seems that Klein (ibid) was

a little premature in making this observation.

To facilitate the management of this most recent programme of NHS reform, a review

of the literature on change management was commissioned (Iles and Sutherland, 2001).

This document aims to provide an introduction to a range of approaches, models and

tools and presents each using a uniform format - description, use, evidence and

commentary. Certain elements such as culture, organisational politics and leadership

were apparently outside the scope of the review which seems a lost opportunity as these

often present the biggest challenge to the management of change. As an introduction to

change, the review makes a valuable contribution to the literature in that it highlights a

number of concepts and tools; particularly those that will assist organisational

diagnostics and planning. However, practitioners will need to look elsewhere for a

guide to actually leading and managing the transition from the present to the future

state because, apart from Soft Systems Methodology (Checkland and Scholes, 1990),

the review does not appear to include much in the way of models that deal with the

actual process of change.

4.4.1 Models and Frameworks for Change Management

Although the literature does not tend to distinguish between frameworks and models

and these terms are often used interchangeably, Anderson and Ackerman-Anderson

(2001) see these as two separate phenomena. They argue that a change framework

essentially highlights the areas which will require attention during the change and cite

the Peters and Waterman (1982) '7-S Framework' as an example of this. In contrast,

they see change process models as offering guidance on the 'what', 'how' and in 'what

order' change needs to happen citing the Kotter model (1996) as an illustration. Whilst

the frameworks are regarded as static representations, the process models are

considered to be dynamic; serving as roadmaps and a 'thinking discipline' rather than a

'prescription for action' or a check list (Anderson and Ackerman-Anderson, 2001).

The distinction between models and frameworks will not be emphasised in this thesis in

deference to other authors cited such as Miles (1997) whose framework will be utilised

for the case study (Appendix 2). Although Miles refers (ibid) to his construct as a

framework, it is dynamic and captures the what and the how and therefore would

constitute a model under Anderson and Ackerman-Anderson's (2001) criteria.

Anderson and Ackerman-Anderson (ibid) argue that frameworks have a place as

educational tools but have little value in the field other than as an 'organising

construct'. From experience in the field I would argue that this, in itself, is of value given

the complexity of the change process and the omissions that may occur in the absence

of either an explicit framework or a model of the change process. Perhaps it is more

about how frameworks are utilised; as the review by Iles and Sutherland (2001)

demonstrates, frameworks in themselves may have limited value outside of a change

process model.

Given the breadth of the field of change management and the different disciplines that

contribute to the ever expanding literature, it is not surprising to discover that change

process models abound (Kotter, 1996; Miles, 1997; Pendlebury, Grouard and Meston,

1998; Anderson and Ackerman-Anderson, 2001; Cummings and Worley, 2001).

Perhaps the most famous change model is the notion of unfreeze-move-refreeze

presented by Lewin (1951), upon which most of the subsequent models seem to be

based. The sheer choice within the field makes the selection of a model a somewhat

daunting activity, particularly when used in an action research setting where the

research process includes on-going feedback to the organisation as well as data

collection. Given the variation in conceptual content found by the review cited earlier

(Iles and Sutherland, 2001), it seems unlikely that a single model could capture the full

complexity of the change process.

In light of the above comments, three change process models have been selected for

inclusion in this review (Kotter, 1996; Miles, 1997; Pendlebury, Grouard and Meston,

1998). Although the models vary in terms of emphasis, detail, and level of abstraction,

taken together they appear to cover much of the complex ground relating to the

practical aspects of making change happen in real organisations. Kotter (1996) seeks to

provide an 'action plan1 for the change effort (Appendix 3) and is the more abstract of

the three models whilst Pendlebury and colleagues (Pendlebury, Grouard and Meston,

1998) seem to have developed more of a blueprint and reach a level of detail which
includes individual tasks (Appendix 4). Miles (1997) seems to be in the middle of this

continuum and offers a 'framework for transformation' which gets across the messiness

of real world organisational change (Appendix 2). The remainder of this chapter will

consider each of these models in greater detail.

4.4.2 Change in Eight Steps

Kotter (1996) presents an eight step change process which has been derived from the

common errors he has observed over time spent working with organisations to bring

about major change. He argues that each of the eight steps must be followed: steps 1-4

are concerned with 'defrosting' the status quo, 5-7 putting new practices in place and

stage 8 is about making the change 'stick'. Each stage needs to be completed so that a

solid foundation is gradually established and later built upon which gives the

impression of sequential movement. However, Kotter (ibid) acknowledges that the

presentation of this step model of the change process is a simplification of reality; given

the complexity and messiness of real world change, the steps are more likely to be

operationalised as multiple phases. One of the key concerns appears to be that, under

pressure to deliver results, there is always the temptation to address stages superficially

or even miss them out altogether. In this event, it is unlikely that a solid framework

will become established; consequently early progress may soon lose momentum or any

change achieved ultimately may not become institutionalised.

4.4.3 Ten Keys to Effective Change Management

Pendlebury and colleagues (Pendlebury, Grouard and Meston, 1998) have identified

what they have termed 'The Ten Keys to Successful Change Management'. They

propose that this approach constitutes a coherent whole; each of the keys fulfils a

different function and should not be regarded as a one-off activity. In fact, to maximise

the likelihood of success, they advocate that the keys are 'applied not only

simultaneously but continuously throughout the change process' (Pendlebury, Grouard,

and Meston, 1998; p41). Although it is acknowledged that each key may be more

active during different phases of the change process, this sense of simultaneousness and

the continuity of these components is rather contradicted by the detail around

implementation which appears later in their text. Here there is a recognition that

certain, if not most, of the keys need to be in play prior to delivery. There is a logic to

the fact that some keys will need to be applied in sequence and will ultimately overlap,

perhaps intermittently, perhaps continuously; this lends itself to the notion of a critical

path which highlights/predicts how and when these overlaps may occur. The core

message is that all of the keys are considered to be essential to the success of the

change effort; omitting one will cause problems, the authors (ibid) argue that omitting

more than one will cause failure.

4.4.4 A Framework for Transformational Change

Miles (1997) moves away from the 8/10 taxonomies presented above. He offers a

framework which outlines the fundamental attributes of transformation which deal with

energy, vision, alignment and what is termed 'process architecture', and presents, within

these main headings, clusters of activity which must be performed in order to achieve

the transformation required.

Miles (ibid) stresses that the effectiveness of the framework relies on two important

factors; the initial change condition and leadership. He argues for an assessment of the

initial change condition along the axes of readiness for change and resources available.

Anything other than a state of high readiness and high resources (which he

acknowledges to be 'a largely vacant space where only a few notable companies

manage to reside for long' [Miles, 1997; p11]) needs to be addressed before the

framework may be implemented. The other factor concerns the issue of leadership

which Miles (ibid) argues needs to be of a transformational nature in order to

successfully catalyse and steer the organisation through the complexity of the change

process.

Although a total system perspective is regarded as a vital ingredient of the process

generally, perhaps it should be considered as a third factor of the initial change

condition. This demands that the transition to the future state takes place, not in a

piecemeal way, but through the simultaneous articulation of all elements which are then

orchestrated to deliver the vision. As with the two earlier models cited here, success

relies on the implementation of the whole approach; if the initiative lacks more than

one of the significant elements, once again failure is predicted.

Although each of the models discussed above may present the management of change

in different ways, there is a certain internal consistency amongst several areas of

content; these will now be considered under the heading of common themes.

4.4.5 Common Themes

Vision-led change

In the event of large-scale, complex change which is characterised by long time-scales

and unanticipated events, it is unlikely that the whole change process will be knowable

let alone controllable. In such circumstances, by defining and making explicit the

fundamental goal of the organisation, the vision can serve as a compass by which the

organisation may steer a course through the turbulent time to come and, ideally, provide

a common sense of purpose behind which the organisation may unite. The key

challenge is the translation of this broad goal into tangible objectives capable of being

operationalised over a defined timescale.

With the exception of Pendlebury and colleagues (Pendlebury, Grouard and Meston,

1998) who refer to the re-engineering work at Leicester Royal Infirmary, many of the

examples cited in the models discussed in this section are taken from industry. As with

any generic approach, they need to be adapted and implemented sensitively according

to the individual context; this applies equally when the transfer is from one industry to

another or from organisation to organisation. As an example of the different industries;

the visioning process in a commercial organisation is closely aligned to the strategic

management process which seeks to address fundamental questions such as 'what

business would we like to be in given the current competitive climate'. In contrast, the

visioning process of an NHS organisation is likely to be determined largely by central

policy. As another example, lowering complacency and raising the sense of urgency

may be in response to the threat to the survival of a commercial organisation. In

contrast, the sense of urgency in the NHS is often generated by the time-scales that may

accompany the introduction of a new policy; failure to meet these is not so much a threat to the survival of the organisation as it is career-limiting for the Chief

Executive. Nevertheless, whilst the triggers for visioning sometimes differ between the

private and public sector, the process of developing the vision and defining how it is to

be operationalised is often similar.

Leadership and management

Leadership is a recurring theme both within the models discussed here and also in the

wider literature. However, successful change is not just a matter of effective

leadership; it also requires effective management - change does not 'just happen' and as

Kotter (1996; p129) notes - 'a balance of the two is required'. Deliberate management

intervention across the whole system is required if 'implementation drift' is to be

avoided. This refers to both the long term objectives and the short terms gains for

which management should actively plan; to passively hope for the achievement of

either is an unreliable tactic (Kotter, 1996). Whilst visioning is regarded as a leadership

function, Pendlebury and colleagues (Pendlebury, Grouard and Meston, 1998) suggest

that the responsibility for change management lies squarely with the managers. This

not only entails the translation of the vision into reality through the definition of

tangible objectives which are prioritised according to the resources available, but also

the operationalisation of these objectives through the management process; the essence

of which Kotter (1996; p128) describes as:

'......systematically targeting objectives and budgeting for them,


creating plans to achieve those objectives, organising for
implementation, and then controlling the process to keep it on track'.

Energy and capacity for change

Each of the three models refer to the energy required not only to shift the organisation

away from the status quo (synonymous with Lewin's notion of unfreezing [Lewin,

1951]) but also to sustain the momentum for what is likely to be a long haul. This

emphasises the need for the creation of a sense of urgency and an associated lowering

of complacency - a state secured by what Miles (1997) terms the process of

'confronting reality'. This involves an assessment of the current state which accurately

identifies the gap between that and the vision state; mechanisms to achieve this include

benchmarking, industry analysis and a review of internal strengths and weaknesses.

Although it is Miles (1997) who explicitly addresses the notion of the initial change

condition, the other models also highlight the need to create the capacity for change

through the allocation or re-allocation of resources which sends powerful signals down

into the organisation about what is considered to be important. There are also issues

around meeting the needs of the organisation in relation to knowledge and skills, not

only in terms of the new roles that might be needed but also with regard to the change

process itself in order to secure participation and involvement.

Organisational politics and political behaviour

Each of the models described in this section address the issues of politics and political

behaviour albeit under different headings; for instance, as the need to handle the power

dimension (Pendlebury, Grouard, and Meston, 1998), as the creation of powerful

coalitions (Kotter, 1996) or as the need to deal with the personal dynamics of change as

part of early energy generation (Miles, 1997).

Holism and alignment

A common theme running through the change models is the notion of wholeness; that

success relies on the application of the whole approach leaving no scope for a 'pick and

mix' attitude. Another important and related theme is the notion of internal alignment;

that is, ensuring that all elements of the system are aligned and mutually reinforcing of

the vision. According to Miles (1997), the total organisational system consists of the

vision, strategy, structures, infrastructure, people, culture and competencies; given the

scale of the change required to move all of these aspects from the current to the future

state, it would be unreasonable to expect the new internal context to emerge in what

Miles (ibid, p48) refers to as 'a moment of cosmic creation'. Therefore all of these

elements need to be carefully orchestrated (a term used by both Pendlebury and

colleagues [Pendlebury, Grouard and Meston, 1998] and Miles [1997]) to create and

maintain dynamic alignment in a way that builds up the organisational capacity to take

forward the change effort given the resources available.

Each of the models deal with the core components of the alignment mechanism slightly

differently. Pendlebury and colleagues (Pendlebury, Grouard and Meston, 1998) and

Kotter (1996) seem to present them either as separate keys or separate steps. In

contrast, Miles (1997) presents these activities together as a cluster which he terms

'process architecture'. These components are regarded as key requirements if

alignment is to be dynamic and responsive to the changes that will be needed if the

organisation is to navigate through the fluidity of the transition state. It is

recommended that all these elements are put in place at the earliest opportunity and

remain so as appropriate throughout the life of the change initiative. This suggests that

alignment is not regarded as a one-off activity but as an on-going process which adapts

in response to the feedback from the whole system as the change process itself

proceeds. As with all aspects of the implementation process, it seems the clusters of

activity which make up the process architecture need to be deliberately designed and

executed in a manner which, once again, is consistent with and mutually reinforcing of

all other aspects of the total system. A certain added value may be perceived in the

way that the Miles model (1997) highlights these components as an important cluster.

Practical experience suggests that these activities often get overlooked in the flurry to

change the 'hard' elements of organisational design.

Continuing with the themes of alignment and deliberate intervention, the 'hard' or more

formal elements such as strategy and structure are those which tend to be addressed

first. These are perhaps more tangible and generally respond quicker than the 'softer',

less tangible elements such as culture and people issues. Miles (1997) suggests that

there has been a mind set that if the 'hard' elements are changed, then the 'softer' design

elements such as organisational culture will automatically come into alignment;

however, the author (ibid) warns that this is not necessarily so; this point is expanded upon in section 4.4.6.

Resistance is to be expected

Not only should there be deliberate action to align the system but each of the models

cited here concurs that there needs to be deliberate action to confront and, where

necessary, remove any obstacles to this alignment. Resistance to change is regarded as

a normal response, and, as such, should be anticipated and addressed. More usually,

this is through enabling people to contribute effectively to the change process but,

where efforts fail to get people on board, it may then be more appropriate to move

people on or out rather than allow them to serve as obstacles/barriers during the

transition. Power issues are also considered as an inherent part of any change process.

Power seeks to be increased rather than decreased (Pendlebury, Grouard, and Meston,

1998) and organisational change threatens to disturb the existing balance of power

whether this arises from formal or informal power bases.

4.4.6 Culture Change

Another of the 'softer' elements of the whole system is that of corporate culture; a term

which can sometimes obscure the fact that most organisations consist of a variety of

subcultures (Kotter, 1996). Pendlebury and colleagues (Pendlebury, Grouard and

Meston, 1998; p30) describe the culture of an organisation as 'the set of lasting values

shared by its entire staff'. Values and beliefs are, in themselves, invisible, but their

expression in the form of behaviour is often all too apparent.

There is a consensus amongst the models that culture change, for a variety of reasons, is

extremely complex but essential nevertheless as corporate culture is regarded as the

'anchor' which serves to institutionalise the change once implemented (Kotter, 1996).

Cultural change may take years to achieve in contrast to structural change which

sometimes takes weeks or months; despite this, the cultural components of the change

need to be considered at the outset if alignment is to be achieved. Whilst culture may,

because of its very nature, be difficult to address directly and values are difficult to

change, an important mechanism to achieve alignment is the translation of the vision,

not only into specific business objectives, but also into behavioural outcomes; the

modelling of which is monitored and for which people are held to account (Miles,

1997). Whatever deliberate intervention is advocated, it seems that culture change is

about the long haul rather than a quick trip and occurs last, not first. Kotter (1996; p156) offers the following as a 'good rule of thumb':

'...Whenever you hear of a major restructuring, reengineering, or
strategic direction in which step 1 is "changing the culture", you should
be concerned that it might be going down the wrong path'.

4.5 CHAPTER SUMMARY

The aim of this chapter has been to provide an overview of change and change

management. This third and final part of the literature review has sought to highlight

the complexity of large-scale organisational change. Attention has been drawn to the

theoretical pluralism which surrounds the notion of change and is reflected in concepts

of incrementalism, transformation, emergence and intention (this latter referring to

planned change). It would appear that, in practice, organisational change is often

poorly understood and poorly managed. That it could and should be managed is

attracting increasing attention in the literature which is often practitioner/academic-

practitioner generated; the need for leadership and management is presented in terms of

'both/and' rather than 'either/or' components of the design process.

A number of change management frameworks have been included for discussion which

have varied in terms of detail and also in the level of abstraction. Common themes

within these have been identified: large-scale change involves the whole system; it is not entirely controllable; the destination is not always identifiable but may

unfold as part of the journey; and transformational change of this kind is a complex and often messy undertaking. It is worth remembering that models/frameworks are a

simplification of reality but, as such, may serve as a useful tool in guiding the change

programme.

This chapter concludes the literature review per se; the following chapter will describe

in detail the process of translating the research questions into research findings.

CHAPTER 5

RESEARCH METHODOLOGY

'The biggest enemy of your learning is the gnawing worry that


you're not doing it right...but any given analytical problem can be
approached in many useful ways'
(Miles and Huberman, 1994; p14)

5.1 INTRODUCTION

Earlier chapters have attempted to provide a flavour of the 'newness' of clinical

governance as a concept and highlight the fact that, although many of its key

components have been in existence for some time, the introduction of an integrated

framework for quality improvement constituted an innovation for the NHS when first

presented in the White Paper of 1997 (Department of Health). Although a number of

initial 'must do's' were made explicit in subsequent guidance (Department of Health,

1999), the absence of any definitive 'blueprint' meant that the interpretation of both

content and implementation at the local level was left largely to individual NHS Trusts.

A number of surveys have been published which provide a snapshot of clinical

governance across various regions (Conduit, Morgan and Willetts, 1999; Firth-Cozens,

1999; Latham, Freeman, Walshe et al, 2000; Walshe, Freeman, Latham et al, 2000). In

contrast, the aim of this research study was to uncover, in rich detail, what clinical

governance looked like in one NHS Trust in terms of content and to describe how this

policy was being turned into practice over a period of time.

This thesis provides an insight into the journey of one large organisation as it faces the

complex tasks of both translating clinical governance from words on paper into

something tangible and also implementing this in the real world setting. The purpose of
this chapter is to describe the rationale for the research design selected, to provide a

detailed description of how the research strategy has been operationalised, and to give

some insights into the methods used to ensure the quality of both the research process

and the results obtained.

In writing this chapter, I have been influenced by the comments cited below and have

tended to use the first person in order to give a sense of the choices I have made from

research design through to the delivery of this work:

'Many is the time I have ploughed through a desperately boring


methodology chapter, usually written in the passive voice... ...usually
your readers will be more interested in a methodological discussion in
which you explain the actual course of your decision-making rather than
a series of blunt assertions in the passive voice' (Silverman, 2000;
p235).

5.2 RESEARCH DESIGN

5.2.1 A Qualitative Framework to Provide a Flexible Approach

The newness of clinical governance and the requirement for local interpretation

suggested that an exploratory study would best reflect the fact that this particular field

of enquiry is still relatively unknown (Hedrick, Bickman and Rog, 1993), constituting

what Marshall and Rossman (1998) regard as new territory. Under such circumstances,

where the phenomenon itself or variables of interest cannot be defined precisely in

advance, it is considered entirely reasonable for the research questions to be expressed

in broad terms (Maxwell, 1998; Marshall and Rossman, 1998). In Maxwell's (1998)

experience, it may even be the case that a significant part of the research needs to take

place before one can get any real sense of what the specific questions should be. This

study began with two primary research questions and these have guided subsequent

design decisions; namely: what constitutes the local clinical governance agenda

(content)? and how has clinical governance been implemented (process)?

The questions outlined above are both exploratory and descriptive. Their main purpose

is to discover and describe, as far as possible, 'what is' as opposed to 'how many'; the

ultimate aim being to present what Hedrick and colleagues (Hedrick, Bickman and

Rog, 1993) describe as a 'rich picture' of both content and process in relation to clinical

governance. The overall approach needs to serve a number of purposes: to capture the

'real world research' which Marshall and Rossman (1998, p21) describe as 'often

confusing, messy, intensely frustrating and fundamentally non-linear'; focus on events

taking place in their natural settings (Miles and Huberman, 1994); go beyond the

snapshot and capture complexity (ibid); remain flexible enough to allow the precise

focus of the study to evolve (Marshall and Rossman, 1998); and thereby capture

emergent insights (Maxwell, 1998). Consequently, in light of these requirements, I

decided that an overarching qualitative framework would be most appropriate for this

study. Whilst a qualitative framework has the capacity to address the kind of

challenges of an exploratory real world inquiry described above, the manner in which

research questions are translated into results is a function partly of design. Owing to

the newness of the phenomenon to be researched - clinical governance - this raises an

issue concerning the degree to which this design should be pre-structured.

5.2.2 Qualitative Designs

There is support in the literature for the proposition that qualitative designs do exist

and, in fact, it is argued by some that all research studies are based on a design albeit

some are more deliberate and explicit than others (Miles & Huberman, 1994). This is

proposed on the premise that, as soon as decisions and choices are made within the

research process, whether in relation to sampling or data collection, the focus of the

study is becoming defined and must therefore be based on some form of design. The

degree to which this may be implicit or explicit appears variable (Miles & Huberman,

1994; Maxwell, 1998).

An ongoing debate seems to be taking place amongst writers on the extent to which the

research design should be structured prior to field work (Miles & Huberman, 1994;

Maxwell, 1998). Patton (2002) points to the emergent nature of design which partly

unfolds during fieldwork and this is echoed by Maxwell (1998). Miles and Huberman

(1994) suggest that a case can be made for loosely structured, highly inductive,

emergent designs on the one hand and highly structured designs based on a clear

conceptual framework with well-developed research questions on the other. However,

the authors argue (ibid) that looser designs can mean that everything looks interesting

and, as a result, may lead to massive amounts of data collection which might yield little

in the way of real insight. Tight, pre-structured designs may not be sufficiently

sensitive to the reality of the situation and the research questions could turn out to be

the wrong ones. Their own preference seems to rest near the centre of the qualitative

design continuum; a preference shared by others such as Robson (1993, p20) who

points out that 'free range exploring is seldom on the cards' and the skill, therefore, is

to have a general view of what is being sought whilst being open to the unexpected. A

similar approach is advocated by Bickman and Rog (1998) who recommend the

development of a 'tentative' plan with the proviso that this will be revised should

emergent insights indicate this is necessary.

The views referred to above tend to add weight to the proposition that qualitative

design should be regarded as an iterative process in which modifications are made

throughout the study to incorporate new developments (Hedrick, Bickman and Rog,

1993). This unfolding and emergent approach to inquiry requires that the researcher

remains open to a degree of ambiguity and uncertainty (Patton, 2002); however, this

may prove extremely daunting for the novice and, for this reason, others advocate

(Miles and Huberman, 1994) that the 'beginning' researcher adopts a tighter design to

provide greater clarity and focus.

5.2.3 A Conceptual Framework

With this debate in mind, I decided to take a more structured approach to the design

which would require the development of a conceptual framework. This sets out to

describe the main variables that will be the focus of the study (Miles and Huberman,

1994). The framework may be derived in a variety of ways: from the knowledge and

experience of the researcher, from existing theory and research, from the findings of

pilot and exploratory studies and from speculative approaches that consider 'what if'

(Maxwell, 1998).

Just as with the earlier discussion around explicit/implicit qualitative designs, there is a

view that all studies are based on some form of conceptual framework, acknowledged

or otherwise (Hedrick, Bickman and Rog, 1993; Miles and Huberman, 1994; Maxwell,

1998). What seems to be important in terms of exploratory, qualitative research is that

although the framework forces the researcher to be explicit about the variables for

attention, it does not have to be definitive (Robson, 1993). Conceptual frameworks can

change particularly if regarded merely as a map of the current terrain (Miles &

Huberman, 1994) upon which one's position presumably changes over time.

Marshall and Rossman (1998) warn that, in exploratory studies, the existing literature

may not entirely address the initial research questions and, to overcome this gap, the

researcher needs to look at new ways of connecting existing knowledge with the new

concept or phenomenon which is the focus of the current study. As the earlier literature

review of clinical governance demonstrates, the empirical work in this area is very

much of an emergent nature and thus there was little to shape this research in the early

stages of the study. However, there is a massive literature surrounding quality and

change which has influenced both the development of the conceptual framework for

this study and the design in general. Although the TQM literature provides valuable

insights into implementation and clinical governance may, in fact, be TQM by another

name, I thought I would be on 'safer' ground to choose a generic change management

model to serve as the conceptual framework and, in time, selected one developed by

Miles (1997). The general themes of this model have already been discussed in

Chapter 4 and a diagram of the adapted version is at Appendix 2.

The value of the Miles model (ibid) as a conceptual framework to guide this study is

that it appears comprehensive and accommodates both the 'what' (content in the form

of transformation initiatives) and the 'how' (process). The model also supports a whole

system perspective and a dynamic, non-linear approach which seems consistent with

the 'messiness' of real world research referred to earlier. It can be seen from the

framework (Appendix 2) that the whole system view of content is nested within the

change management model itself; compartmentalising the framework in this way serves

as a reminder that variables may be duplicated under the content and process categories

- for instance, communication, which could be part of the implementation process or

an element of the end state.

The intention from the outset was to remain flexible and open to the need to adjust the

conceptual framework. In practice, the choice of framework proved to be entirely

appropriate; it was specific enough to provide a focus and yet broad enough to

accommodate inductively generated insights. In this way, the Miles framework (1997)

provided a consistent structure not only for data collection but also for analysis and

subsequent reporting. Articulating a framework is one thing; what is also needed is a

mechanism for getting from the research questions to the findings and this requires a

research strategy.

5.3 RESEARCH STRATEGY

5.3.1 Case Studies, Surveys and Experiments

Once the general purpose, broad research questions and a tentative conceptual

framework had been developed, the next step was to formulate the research strategy.

Robson (1993) outlines three main strategies: the experiment to measure the effects of

manipulating one variable on another variable; the survey which permits the collection

of standardised information from groups or individuals; and the case study which is

defined as:

'......a strategy for doing research which involves an empirical


investigation of a contemporary phenomenon within its real life context
using multiple sources of evidence' (Robson, 1993; p52).

The choice of a strategy should reflect the research questions to be answered. Robson

(ibid) suggests that experiments will answer how and why questions but the researcher

needs to be able to exert control over the variables in the study; surveys will answer

questions concerning who, what, where, how many/much whilst the case study is best

suited to how, what and why questions. These strategies are not regarded as mutually

exclusive; however, Yin (1994; p8) argues that the case study is preferred in 'examining

contemporary events... when the relevant behaviours cannot be manipulated'. Patton

(2002) adds that case studies are particularly valuable where the phenomenon is

individualised; clinical governance is a good example of this as implementation is

reliant on the local interpretation of policy and the translation of this into practice.

Thus, the views of Robson (1993), Yin (1994) and Patton (2002) would seem to point

to the adoption of the case study as the primary strategy although with the caveat that

there would need to be enough flexibility to allow the incorporation of a survey if

subsequently indicated. An experiment was not considered to be appropriate given the

aims and context of the research.

5.3.2 The Single Site Case Study

For the purpose of this thesis, the research focus is on the single case which, as is

generally the norm, was selected purposively (Robson, 1993; Miles and Huberman,

1994; Bickman and Rog, 1998); the main criteria being that it is likely to be what

Patton (2002) describes as 'information-rich'. Patton (1999) suggests that final selection

is best preceded by pre-study fieldwork of some sort before committing to an intensive

period of study. In this respect, I was fortunate to have been involved in an earlier

study of NHS Trusts (Latham, Freeman, Walshe et al, 2000) and consequently had

some prior knowledge of the target site. In reality, any of the Trusts would probably

have been data-rich; whether the case study suggested non-, sub- or exemplary

implementation, this would constitute an important finding given the newness of the

clinical governance agenda per se. However, I considered the Trust selected to be of

particular interest because it conceptualised clinical governance as a vehicle for

learning, espoused a reluctance to follow a 'tick box' approach to implementation (thus

aimed at gaining hearts and minds commitment and culture change) and, finally,

articulated a strong focus on service users.

It was envisaged that the case study would be 'nested' (Yin, 1994) which would allow

me to follow clinical governance at both the corporate and divisional level; this latter

would provide a cross-sectional view down to the front line of service delivery.

There is a certain danger in selecting a single site. Hart and Bond (1995) point out that

internal circumstances may change in some way which make continued study

unfeasible with the result that permission to access the case site is revoked. In light of

this potential precariousness, data collection would be designed and conducted to

provide an ongoing picture of the implementation process but would also be

phased in targeted cycles that would secure particular outputs; thereby giving some

insurance against any premature termination of the study.

5.3.3 Generalising from Case Studies

A particular concern surrounding the case study approach generally and the single case

in particular appears to relate to the notion of generalisation which is described by

Robson (1993) as the extent to which findings from one inquiry may be applied more

generally. There is a consensus amongst a number of authors that, in the qualitative

paradigm, any generalisation may only be theoretical as opposed to statistical (Hart and

Bond, 1995; Bickman and Rog, 1998; Marshall and Rossman, 1999; Yin, 1999). The

nature of qualitative findings is highly context and case specific (Patton, 1999); the

changing context of real world research means that exact replication is unachievable

(Marshall and Rossman, 1998) and the onus for determining generalisability seems to

rest with those seeking to make the generalisation rather than with the primary researcher

(Robson, 1993; Marshall and Rossman, 1998). The responsibility of the primary

researcher appears to revolve around ensuring that there is enough rich description from

design to reporting to allow the reader to assess whether there are sufficient parallels to

his/her own context/agenda (Robson, 1993).

5.4 ACTION RESEARCH

5.4.1 Origins and Applications

Early in the development process, I found it very reassuring to read the words of Miles

and Huberman (1994) quoted at the start of this chapter. It was also helpful to see this

sentiment echoed by others such as Marshall and Rossman (1998) who observe that

there is no one perfect research design. In fact, in order to capture the complexities of

real world research, it is often necessary to develop what Hedrick and colleagues

(Hedrick, Bickman and Rog, 1993) call a hybrid strategy. With this in mind, it

appeared entirely reasonable to incorporate an action research (AR) approach into the

case study design.

The reason for my considering the action research approach was twofold: firstly there

was a concern for reciprocity (Patton, 2002); NHS Trusts face multiple challenges of

which clinical governance is, by necessity, one of several. Thus, participation was

likely to be more attractive if the target site perceived that it would derive some sort of

benefit from engaging in the project. Secondly, as the overall approach was conceived

as formative rather than summative, there would need to be an agreed mechanism for

feeding back the data to avoid the risk of the sort of dilemma faced by others. Studies

by Joss and Kogan (1995) and Bate and Robert (2002) describe instances where the

research process had yielded information that might contribute positively to the change

process; however, neither study had been designed to incorporate real-time feedback.

Although this was subsequently negotiated in both cases, Bate and Robert's (ibid)

perception was that their intervention was construed as 'meddling' rather than 'helping'

whereas a formal action research approach would have allowed for feedback to take

place as a legitimate part of the research process.

Action research has been a distinctive form of inquiry since the 1940s (Elden and

Chisholm, 1993; Hart and Bond, 1995). It is practised in a variety of settings such as

education (McTaggart, 1994), health care (Hart and Bond, 1995; Bate, 2000), industry

(Pasmore and Friedlander, 1981; Ledford and Mohrman, 1993), local communities

(Greenwood, Whyte and Harkavy, 1993), international communities (Brown, 1993) and at

a variety of levels: individual, group, small and large-scale organisations. A review of

the literature suggests that AR is a somewhat amorphous approach. Greenwood and

Levin (1998) point to a mounting confusion in the field as AR increasingly comes to

mean different things to different people. The apparent lack of clarity over what is

meant by AR has led to a wide variety of activity being labelled as such (Dash, 1999) to

the point where any cyclical process of research could be included (Hart and Bond,

1995) or even sloppy research justified (Eden and Huxham, 1996).

5.4.2 Definitions and Principles

Given the observations above, it does not come as a surprise to find that there is no

consensus over a definition to describe AR; however, a number appear to capture the

essence as these two examples seek to demonstrate. According to Eden and Huxham

(1996):

'AR involves the researcher in working with members of an organisation


over a matter which is of genuine concern to them and in which there is
an intent by the organisation members to take action based on the
intervention'.

Elliott (1991) provides a somewhat shorter definition:

'(Action research is) the study of a social situation with a view to


improving the quality of action within it'.

An area where there is some agreement relates to certain common principles. Firstly

there is an emphasis on AR as an approach which aims at research with rather than

something done to the participants and, as a result, collaboration/participation is a key

component of the research process (Elden and Chisholm, 1993; Hart and Bond, 1995;

Bate, 2000; Meyer, 2001). Hart and Bond (1995) argue that participation cannot be

mandatory - for example, in the event that the context does not support participation

due to the prevailing culture and/or resource issues. In any case, there is the sense that

some authors regard genuine collaboration as an objective which proves rather difficult

to achieve (Foster, 1972). Thus, although one may start with the intent and deliberately

try to build in activity that will enable participation/collaboration to take shape, the

operationalisation of this is more likely to be emergent rather than pre-determined.

Secondly, AR may be seen as an approach which is capable of generating knowledge

whilst at the same time attempting to change or improve the phenomenon which is the

focus of the research (Hart and Bond, 1995; Bate, 2000; Meyer, 2001). This focus is

often described in terms of a problem and yet Cunningham (1993) cautions that this

implies that 'something' is wrong which might not necessarily be the case. The author

(ibid) suggests that a problem in AR terms might mean that there is a recognition of the

need for change in some form which may be corrective in nature but may equally be

value adding.

5.4.3 The Researcher Role in Action Research

One area where there is rather less consensus relates to the role of the action researcher.

Hart and Bond (1995) have developed a typology of action research which spans from

the experimental at one end of a continuum to empowerment at the other. The further

toward the experimental end, the more likely that the role of the researcher is of a

research-consultant/expert; the further toward empowerment, the greater the

collaboration and the role will more likely represent that of a co-researcher/co-change

agent. Ledford and Mohrman (1993) suggest that the more typical approach is that of

'co-experimenter' which may lead to democratic power sharing over the research design

and process itself. This could entail the researcher providing the means for the system

itself to act (Bate, Khan and Pyle, 2000) or, in the case of Greenwood and colleagues

(Greenwood, Whyte and Harkavy, 1993), this involved teaching research skills to those

who would participate as inside-researchers. The researcher-consultant who generally

retains control over the research process and decision-making within seems a less

common occurrence. However, Hart and Bond (1995) offer a number of examples of

this role and Ledford and Mohrman (1993) provide an account of their experiences as

content and process experts in a large-scale organisational change process.

5.4.4 A Model for Action Research

In terms of the practice of AR, several writers refer to the cyclical nature of the

approach in action (Elliott, 1991); however, Bate (2000) usefully depicts this as a

model (Appendix 5) which emphasises an iterative rather than a linear sequence of

activities - diagnosis, analysis, feedback, action, evaluation. Bate (ibid) regards action

research as having a number of different levels of application: the research process

itself, the intervention and the change process. At the level of intervention and change

process, the actions in the Bate model (ibid) have a resonance with the 'Plan, Do,

Check, Act' (PDCA) cycle of continuous quality improvement outlined by Deming

(1986) which provides a sense of consistency particularly when the purpose of the AR

approach is improvement.

In selecting a model to serve as a guide to the action research element, I decided to use

Bate's (2000) for a number of reasons. Firstly, the elements of the model described

above were consistent with the change/improvement focus of the study. Secondly, the

apparent simplicity of the model would facilitate discussions of the proposed approach

with the case study site and add a certain sense of tangibility to what can seem a rather

amorphous approach. Thirdly, the flexibility of the model meant that it was consistent

with other design choices such as the case study and qualitative framework and would

also accommodate the apparent unpredictability of AR. Finally, because the model

may be applied at multiple levels, it would support a variation in cycle times.

According to Elliott (1991), the higher the level of organisational focus, the longer and more open-ended the AR cycle; the lower the level (for example, a specific department or group), the shorter and more focused the cycle tends to be.

5.4.5 Collaboration and Participation as Key Components

Decisions regarding the role and level of collaboration/participation of the case study

site were more problematic. I subscribe to the view that involvement is likely to bring a

greater commitment to the study findings (Hart and Bond, 1995) and have, what has

been described as, an 'ideological leaning' (Meyer, 2001) towards a democratic

approach. However, in practice the degree to which these considerations were

incorporated into the research design largely came down to pragmatism in view of the

resources available; an important consideration in any research design. Firstly, as the

only researcher, there was a real issue about what could be achieved in the time

available and it did not appear feasible, at the outset at least, for the study to support the

researcher input required to make this a genuinely collaborative study. Secondly, at the

start, the AR approach felt quite ambitious and I did not feel able to commit to a role as

expert or teacher in terms of the research process itself although I did feel comfortable

as a resource in relation to the quality and change processes. Consequently, I thought it

might be prudent to start as what Hart and Bond (1995) describe as a researcher-

consultant but remain open to the possibility that both the role and the level of site

involvement might change over the course of the project. Although this starting point

was negotiated with the Trust, I did not raise the issue of a possible shift in the role of

either myself as researcher or the organisation as the feasibility of this would probably

only become clear as the research progressed.

5.5 DATA COLLECTION AND DATA MANAGEMENT

5.5.1 Data Collection Methods - Interview, Observation, Document Analysis

According to Robson (1993), once some of the initial decisions about what, why, where

and who have been made, the major question of how the data is to be obtained needs to

be addressed. Patton (2002) suggests that there are three main kinds of qualitative

methods of data collection; namely: interviews, observation and document analysis.

The benefit of using multiple methods is emphasised in the literature (Robson, 1993;

Yin, 1999; Patton, 2002); a multi-method approach is recommended partly to provide a

comprehensive perspective from a number of sources and partly to compensate for the

potential limitations of individual methods. The use of multiple methods will also

enable triangulation; this may be interpreted as a means of checking the consistency of

findings (Yin, 1999) or, from the opposite side of the same coin, as a means of looking for divergence - often the source of valuable insights (Patton, 2002).

Each of the above methods may be pre-structured to varying degrees depending on the

nature of the inquiry. Robson (1993, p157) offers two extremes of advice. If it is an

exploratory case study with little upon which to base a conceptual framework and only

very general research questions then it is inappropriate to have much in the way of pre-

structuring. However, 'if you know what you are after' then plan ahead. The intention

in this case is to use all three methods and keep the design flexible enough to

accommodate a survey should the emerging findings suggest this would be appropriate.

Also, although the research questions are broad, the initial conceptual framework has

identified a number of clear variables; thus the level of pre-structuring of instruments will vary depending on whether the focus shifts from the exploratory towards the confirmatory.

Interviews

Given the type of information required, interviews based on open-ended questions will

be used throughout the case study to allow the respondent to use his/her own words

based on his/her perspective rather than being required to use a category constructed by

the interviewer. Patton (2002) outlines three approaches to interviewing: the informal

conversational interview which is largely unstructured, the general interview guide

approach which is semi-structured and the standardised open-ended interview where

the questions are specified in detail. At the outset, it is more likely that the interviews

will be less structured but may become more so with a desire to confirm emergent

findings.

Observation

Whilst interviews allow the researcher to hear what people say, observation provides

direct evidence not only of what they say but also of what they do, which is not always one

and the same. Observational methods provide the researcher with an opportunity to

experience the context and see the key stakeholders in action. In addition to observing

action, behaviour, activity and so on, it also gives the researcher an opportunity to

obtain a sense of what is not happening which can be of particular value when the

objective is to provide feedback for improvement. The role of the observer may vary

considerably from participant observation at one end of the continuum to complete

spectator at the other (Patton, 2002). This may vary from event to event or over the

course of the research project and it is difficult to predict how this will unfold at the

outset; yet another emergent aspect of the design.

Document analysis

Documentary analysis is a less direct and relatively unobtrusive form of inquiry in

which the data may be extracted away from the field. Organisations are often a rich

source of documentation which includes the minutes of meetings, policy/strategy

documents, memos. Access to a range of documents is likely to provide insights into

who has been involved with the decision-making process that has surrounded the

phenomenon. It also allows the researcher to track back through the history of the

phenomenon where this pre-dates the study and follow its development to the start of

the research process. It is always of interest to look at the official statements and

compare this with what is subsequently seen and heard during fieldwork (Patton, 2002).

However, it is important to remember when undertaking content analysis that the

documents were likely to have been written for purposes other than the research process

which may have implications for the inferences that can be drawn from such sources.

5.5.2 Data Management

It seems to be the norm that a consistent aspect of most qualitative inquiries is the large

amount of raw data accumulated from the fieldwork (Miles and Huberman, 1994;

Patton, 2002). At the outset it is difficult to predict how much data will ultimately be

collected; however, it is possible to plan the recording format and also the storage

arrangements. In this study, all fieldwork notes would be recorded sequentially in A4

booklets and stored according to units of analysis and cycle of inquiry. Detailed notes

of tapes would be made and illustrative quotes extracted. In view of the number of

participants, tapes from focus groups would be transcribed in full to assist subsequent

analysis. A decision regarding the use of qualitative software would be deferred and

reviewed as the data collection process progressed. In practice, time pressures

precluded the early investment in training needed to use this software and so its

incorporation into the research process was not an option.

5.6 RESEARCH IN ACTION

5.6.1 Phase One - Gaining Entry

As indicated earlier, I decided that the Bate model (2000) would be used to guide my

approach to the action research cycle. This is a dynamic and iterative model of

diagnosis, analysis, feedback, action and evaluation which will be illustrated through

the following account of the research process. Before the cycle could be initiated,

however, it was first necessary to negotiate entry into the organisation. The initial

agreement to participate in the study was obtained through a third party who had also

been involved in the earlier research and who was known to the Trust Chief Executive.

Once the Chief Executive had agreed to take part, I arranged to meet with him and the

Clinical Governance Lead. The purpose of this meeting was to provide greater detail

on the purpose of the research and the AR approach proposed. The Trust

representatives regarded participation as an opportunity to receive external feedback on

their efforts. This was considered of particular value as, at some stage, the Trust would

undergo a routine review by the Commission for Health Improvement.

This initial meeting was followed up by a series of exploratory interviews with the

Clinical Governance Lead so that I could obtain a more detailed overview of the Trust

(key stakeholders, structures, services, geography) and a sense of the history of clinical

governance implementation in the organisation to date. It was also an opportunity to

develop a rapport which I considered to be extremely important as the Lead played a

pivotal role in the Trust not only in terms of clinical governance but also in her capacity

as an Executive Director. In addition, the Lead would effectively be my sponsor and, if

necessary, might need to sanction and facilitate access to people and services within the

organisation.

These exploratory interviews were semi-structured and the questions were open-ended;

this was to ensure that I could cover the areas I thought were important initially but

would also allow the interviewee to bring other insights and perspectives that could be

followed up subsequently. These initial interviews were not taped as I thought a more

informal approach was indicated at this stage. However, once data collection began in

earnest, all subsequent in-person interviews were taped, all telephone interviews were

taped once the appropriate equipment was obtained; taping followed explicit consent

from interviewees.

The early interviews outlined above helped develop a map of the organisation and from

this I was able to identify the key meetings I would need to attend either as a one-off or

on a regular basis; the documents such a relevant strategies/policies that needed to be

obtained; the minutes of past meetings that would need to be retrieved plus

arrangements needed for my inclusion on the circulation lists for future minutes.

This initial entry period began in May 2000 and, in the months that followed, the

interviews with the Clinical Governance Lead continued. I was also able to meet and

present the proposed research to the Management Team and thereby made early contact

with Executive Directors, Divisional and other functional managers. In addition, I

observed a Trust Board meeting, started to attend the monthly meetings of the Clinical

Governance Sub-committee and the quarterly meetings of the Risk Management Team.

In that time I also undertook a preliminary analysis of the Trust documents relating to

clinical governance which included strategies and the minutes of earlier meetings.

5.6.2 Phase Two - Rapid Appraisal

The data from this entry phase formed part of the initial diagnosis and, in time, the

interview set was broadened to include the Chief Executive, the Non-executive Chair of

the Clinical Governance Sub-committee, the Executive Team and the Divisional

Managers. In this way, I was ensuring access to those who were likely to have been

involved in the development of the Trust approach to clinical governance and also those

who were responsible for its implementation. The purpose of phase two was to

undertake a rapid appraisal of clinical governance at the corporate level.

All of these interviews (except one which was conducted via telephone due to the fuel

crisis) were conducted in person. This was a deliberate decision to facilitate the

development of a rapport with respondents as we would be coming into contact with

each other for some time to come either in meetings and/or through the reports I would

be producing. Once again, these interviews were semi-structured and consisted of

open-ended questions which provided the flexibility to probe and follow up issues

raised by the interviewees.

By November 2000, this first round of interviews was completed; whilst I continued to

attend and observe routine meetings, I now withdrew from the field. As far as possible,

I had followed recommended practice with regard to early analysis of raw data (Miles

and Huberman, 1994) but, given the large amount of data already gathered, I needed

protected time to undertake a more formal analysis and write the first feedback report.

5.6.3 Feeding Back to the Trust - Report One

This initial round of data collection and analysis (which included the rapid appraisal

September-November 2000) lasted eight months. In AR, there are inevitably a

number of audiences for the feedback of results (Robson, 2000) and the timing of such

feedback contributes to its perceived relevance. So, in order to ensure that

analysis and feedback would follow as soon as rigour permitted, this interim report was

kept succinct and framed as an internal briefing paper for the Trust rather than for any

external audience. The paper focused on a number of key aspects of clinical

governance implementation and concluded with a series of recommendations offered to

the organisation for consideration (Appendix 6).

The briefing paper was initially sent to the Chief Executive and Clinical Governance

Lead with the understanding that it would be circulated to the Management Team and

the Clinical Governance Sub-committee. I subsequently presented the findings and

recommendations to both of these corporate-level groups. Although there was some

discussion of the report and content in both arenas, the Trust did not provide a formal

response; no explicit action plan was formulated in light of the findings, nor was there evidence of any further detailed discussion. According to Hart and Bond (1995), this is

not an uncommon occurrence following feedback but I was aware that this could be a

precursor to further access being denied if it was an indication that my analysis and

recommendations had not been well received.

5.6.4 Phase Three - Widening the Corporate-Level Interview Set

Thus, with no action plan to monitor, I decided to maintain a lower profile for a period

of time and continue with a general focus on corporate level activity but, at the same

time, concentrate more on establishing a rapport with key stakeholders. Consequently,

I continued to observe the relevant meetings and also maintained an ongoing contact

with the Clinical Governance Lead. Often this was in the form of what Patton (2002)

describes as a conversational interview; highly unstructured but this approach allowed

me to keep in touch with new developments. I also broadened the interview set to

include the Chair of the Trust and two further members of the Clinical Governance

Sub-committee, a Non-executive Director and a Medical Officer (the other members

had been interviewed earlier in their capacity as Executive Directors/Divisional

Managers). In addition, I invited all Non-executive Directors (excluding the Trust

Chair) to participate in a focus group; three out of a possible five attended. Of the two

who were not available, I had already interviewed one Non-executive Director in her

capacity as a member of the Clinical Governance Sub-committee.

As time went on and a level of trust developed between the Clinical Governance Lead

and myself, I was able to provide feedback on a more opportunistic and informal basis.

This had a number of benefits: firstly, the Lead was now getting ongoing feedback

which would ensure that nothing in the final report would come as a surprise.

Secondly, it allowed me to check out findings and the interpretations I was making as a

result of ongoing analysis. Thirdly, it allowed space for a dialogue and, in this way, the

Clinical Governance Lead was able to participate in some of the more emergent design

decisions. Finally, in time, my role as an observer at meetings shifted from pure

spectator to participant observer where I offered feedback on what I was observing

within the Trust or responded to requests for an opinion on the issues under discussion.

5.6.5 Phase Four - Primary Care Division

In April 2001, whilst maintaining the focus at corporate level, I began preparations to

incorporate the Division and Locality as units of inquiry and analysis. The Mental

Health Division was the preferred option as it would provide the widest access to a

range of professional groups and also to both secondary and community based services.

However, at that time, the Acting Divisional Manager had just stepped down and the

Clinical Governance Lead and I decided not to focus the research spotlight on a service

which was just about to get new leadership. Each of the learning disability services

was specialised in some way; consequently, we were concerned that other audiences

within the Trust might not relate to issues raised in these areas. This meant that the

research focus would incorporate the Primary Care Division. Although we were aware

that there had been little deliberate clinical governance activity in this area, it was

hoped that feedback gained through the research process would facilitate future

progress. Ironically, just prior to the start of data collection in this Division, its senior

manager was moved to Mental Health which meant that Primary Care now had a new

Divisional Manager in an 'acting' capacity; however, the research went ahead as the

initial approaches and preparation had already been made. Within the Primary Care

Division, the Northern Locality was selected for closer study as funding had been

identified by both the Trust and the Primary Care Group (PCG) for the appointment of

a Clinical Governance Facilitator to work across the Locality and the PCG.

The AR approach in Primary Care followed a similar pattern to that undertaken at the

corporate level. Although access had been approved by the Clinical Governance Lead,

I met initially with the Acting Divisional Manager (ADM) and two of the three line

managers (the third was unable to attend) to provide details on the purpose of the

research and to discuss the AR approach proposed. Following this initial meeting I

conducted preliminary interviews with the ADM to gain an overview of both the

context and clinical governance activity to date. I was able to identify the key

stakeholders, the documents needed and the meetings of interest. I attended several of the

meetings of the Clinical Governance Forum and my role was generally that of an

observer although, occasionally, I provided feedback from the field. Participation was

much less than at the corporate level as the timescale was considerably shorter and

consequently the opportunities to develop a rapport were less. However, I did attend a

meeting of the Clinical Leaders so that I could get a sense of the agenda as it proved

difficult to get a comprehensive set of previous minutes because of the chairing

arrangements. Also, because of the timescale issue, I thought it would provide the

managers with an additional opportunity to meet with me and, in this way, become

familiar with both the researcher and the research process.

Interviews took place with the ADM, senior and junior managers (district nursing and

health visiting; Division and Locality) and managers from the Professions Allied to

Medicine. The geographically dispersed nature of the Division and the timescale

available meant that, after the first meeting, the most efficient method of interviewing

tended to be via the telephone which was not entirely satisfactory for either the

interviewees or the researcher but it represented a seemingly inevitable trade-off. All

interviews were taped with permission. In addition, a separate focus group was held for

district nurses and another for health visitors; five and nine members of staff

participated respectively. Both sessions were taped with permission and subsequently

transcribed in full. The decision to undertake focus groups with front line staff was

made so that data collection could take place in a more relaxed manner. In this way, it

was hoped that the interaction of group members might surface new issues/insights and

would also encourage discussion around the clinical governance agenda generally and

Trust activity in particular thus facilitating a certain amount of awareness raising.

The fieldwork at the corporate and divisional levels was completed in October 2001.

This end-date had been made explicit and thus my withdrawal from the Trust was

expected. Once again, analysis of the raw data had been ongoing throughout the study

but the end of data collection signalled an intensive phase of analysis necessitated by

the sheer volume of data and the timescale for reporting; it was agreed that the final

report would be distributed by the end of December 2001.

The final phase of analysis drew heavily on the conceptual framework outlined earlier.

This had been developed as an initial guide to the variables for attention and had proved

broad enough to serve as a guide for data collection, analysis and informal feedback

throughout the period of fieldwork. Thus, using the same framework to guide analysis

overall seemed entirely appropriate.

5.6.6 Phase Five - The Final Report

The final report was considerably more substantial than the earlier briefing paper.

Ongoing, informal feedback to participants ensured that there would be no surprises for

the key stakeholders of the Trust. In the end, the report summarised what most of the

key participants had heard already and dealt with findings at both the corporate level

and the level of the Primary Care Division and Locality. In addition, the report made a

number of recommendations concerning the content of the clinical governance agenda

as conceptualised by the Trust and its future implementation (Appendix 7). The report

was released firstly to the Clinical Governance Lead and the Chief Executive and then

disseminated to key managers in the Division. In January 2002, I met with the Clinical

Governance Lead, Chief Executive, ADM, line managers and the Clinical Governance

Facilitator for the Division to gain feedback on the report and the research experience.

The Trust was complimentary on both counts.

5.6.7 The Action Research Cycle

In terms of the AR cycle, the most overt stages at both levels of the study appear to be

diagnostics, analysis and feedback; the way in which this has contributed to action and

hence the opportunity for formal evaluation has been less apparent. It is important to

note, however, that action has taken place albeit not always as a discrete stage flowing

from an explicit declaration of intent. Owing to the ongoing informal feedback

provided at both the corporate and divisional levels, there was evidence of Trust

responsiveness in the form of real-time action. In addition, the Trust incorporated

many of the recommendations in its clinical governance plan for the new PCT. How

far this plan has been translated from policy to practice in the PCT was unfortunately

beyond the scope of this study.

5.7 RESEARCH QUALITY

5.7.1 Quality Criteria for Qualitative Inquiry

The issue of quality in qualitative research is part of a long and contested debate (Mays

and Pope, 2000). Robson (1993, p66) suggests that establishing the trustworthiness of

research lies within the realms of common-sense - whether the researcher has done a

thorough and honest job and has tried to 'explore, describe or explain in an open and

honest way'. However, the author (ibid) is also of the opinion that good intentions are

not enough to ensure the quality of research irrespective of whether this is of a

qualitative or quantitative nature and, in this sentiment, Robson echoes the view of

another seasoned qualitative researcher (Silverman, 2000).

Whilst checklists for quantitative studies are apparently common, there is no definitive

set of guidelines for delivering quality in a qualitative approach; and, according to

Mays and Pope (2000), any attempt at prescription is to be avoided. Lincoln and Guba

(1985) argue against the qualitative application of quality criteria derived from the

quantitative paradigm (internal and external validity, reliability and generalisability)

and suggest as alternatives the notions of credibility, transferability, dependability and

confirmability. These alternative criteria are commonly cited in research texts and

there is no shortage of additional criteria offered for consideration (Mays and Pope,

1995; Marshall and Rossman, 1998; Mays and Pope, 2000). Although it is clear that

there are no easy solutions to the limitation of error in qualitative inquiry, I have tried to

address this throughout the research process from design to reporting and an overview

of the steps taken will now be provided.

5.7.2 Research Strategy: Design and Operationalisation

The case for a qualitative study has been clearly stated at the outset of this chapter thus

the reader is immediately aware of the paradigm within which the work is situated. The

rationale for the overall design has been presented together with the strategic choices

relating to this. Although the research questions have been expressed in broad terms,

the conceptual framework has been described in detail which allows a judgement on

whether the strategy and methods are, in themselves, appropriate for the purpose of the

study. Multiple methods have been selected to allow for triangulation and, in

attempting this, the search was for both convergence and divergence of evidence. Also,

by using more than one method, it was possible to compensate for any potential

weaknesses that might have served as a threat to validity if a single method had been

applied (Robson, 1993; Yin, 1999). The methods themselves were entirely consistent

with a qualitative approach and the literature attests to the fact that a combination of

interview, observation and document analysis had been adopted in similar studies (Yin,

1994; Hart and Bond, 1995; Bate, Khan and Pyle, 2000).

5.7.3 Sampling Strategy

Purposive sampling has ensured that data has been obtained from a variety of sources.

Although the sampling strategy had not been formulated prior to entry into the field, I

had identified the Clinical Governance Lead as a key informant. This led to the

evolution of what has been described as a 'snowball' approach (Robson, 1993) as I met

with other managers and staff. The relatively flat organisational structure and the fact

that the senior groupings were small provided the opportunity to purposefully select

whole populations such as the Executive Team, the Management Team and the locality

management structure, which contributed to the comprehensiveness of the study. In

addition, I was allowed unrestricted access to key meetings over a long period of time

and to all relevant documentation and therefore did not rely on snapshots.

5.7.4 Generalising from Case Studies

The issue of generalisability has been outlined earlier in this chapter where it was

argued that the responsibility of the primary researcher was to provide enough rich

description to enable the reader to identify any potential parallels with his/her own

research context/agenda. With regard to generalisation, I will simply state that I have

attempted to provide a rich picture throughout and invite the reader to make a

judgement on whether this has been sufficient for his/her purposes.

5.7.5 Rigour in the Field

The application of methods in the field was as rigorous as possible. For the most part,

interviews were semi-structured and a selection of interview schedules is included as

Appendix 8, which will allow the reader to assess the validity of the instrument. In

addition to contemporaneous notes, the majority of interviews were taped; detailed

notes were made from these and illustrative quotations were extracted. Owing to the presence of

multiple participants in the focus groups, the tape recording of each session was

transcribed in full. Observation was conducted without any form of pre-structuring and

extensive contemporaneous notes were made in the setting. In these records,

annotations provide a clear distinction between evidence, interpretation and

intervention (Yin, 1999). As most of the observation took place around specific events

such as meetings, there are also minutes of the meeting taken either by a participant or

the secretary to the group, which adds another perspective to the proceedings.

Documentary analysis was largely guided by the original conceptual framework. All

raw data has been stored in a manner that will ease retrieval if challenged. Field notes

and tapes are ordered by date, level of analysis and collection cycle.

5.7.6 Analysis and Reporting

Analysis and reporting has been guided by the conceptual framework. This has served

to focus the research throughout but has been broad enough to accommodate emergent

insights. The aim of the research was to provide rich description and the sustained

association with the organisation has provided the opportunity to confirm my own

interpretations.

5.7.7 Participant Feedback

An extremely important test of validity was embedded in the AR approach itself and

the commitment to provide ongoing informal and periodic formal feedback to

individual participants and the organisation itself. The Trust response has been very

positive and confirmed the reports presented as fair representations of clinical

governance in the organisation.

5.8 CHAPTER SUMMARY

The purpose of this chapter has been to provide a comprehensive account of the

research process from the initial design to the presentation of the results. The rationale

for the various choices made in the design of this project is discussed; these choices

have led to the selection of a qualitative framework which has been flexible enough to

sustain a case study strategy and an action research approach. Details of data collection

methods and data management have been provided together with information

highlighting the action research process 'in action'. Finally, the notion of research

quality has been discussed together with the various steps taken by the researcher to

assure the quality of the entire research process. The design and implementation of the

overall research strategy have served as the focus of this current chapter; the results of

this effort will now be presented in the following three chapters.

CHAPTER 6

RESULTS - CLINICAL GOVERNANCE IMPLEMENTATION


CORPORATE ACTIVITY: CONTENT

6.1 INTRODUCTION

Qualitative methods frequently give rise to massive amounts of data (Patton, 2002) and

this case study certainly adds weight to Patton's observation (ibid). Organising and

analysing this data so that the results could be presented in a coherent format was

always likely to be a challenge. However, this task was facilitated considerably by

utilising the same conceptual framework for the presentation of the results as that which

guided data collection and subsequent analysis. This framework is outlined in detail in

Appendix 2 and encompasses notions of both content and process or the 'what' and

'how' of implementation. There has been frequent reference to the duality of meaning

within the term implementation; the following results chapters will clearly demonstrate

this difference. The aim of this chapter is to present the clinical governance initiatives

of the Emerald Trust which correspond to corporate level content activity. Each of

these elements will now be presented using the Miles framework (1997) as

sub-headings.

6.2 A VISION AND A STRATEGY FOR CLINICAL GOVERNANCE

6.2.1 The Clinical Governance Report

One of the first key clinical governance documents prepared by the Trust was the

briefing paper dated June 1999, the 'Clinical Governance Report' (Internal Trust

Document, 1999; unless otherwise stated, all future references to the 'Clinical

Governance Report' or the 'Report' will relate to this document). This document

introduces the Trust's conceptualisation of the clinical governance agenda and distils

this into the six key components; the rationale for these elements is also presented and

each is deconstructed further into a number of sub-elements (Table 6.1).

Table 6.1: The main components of clinical governance (Internal Trust document, 1999)

CULTURE
• Trust-wide commitment
• Board level focus
• Clinically focused management strategies

HUMAN RESOURCE MANAGEMENT
• Training
• Continuous professional development (CPD)
• Appraisal
• Performance Management

KNOWLEDGE
• Evidence based health care/clinical effectiveness/audit
• R&D
• Libraries/research resource centres

CORPORATE LEARNING
• Complaints
• Risk Management
• Adverse incidents

USER INVOLVEMENT
• Advocacy
• Consultation
• Individual ownership of health

INFORMATION
• Clinical information systems
• Knowledge management
• Patient information

A key message contained within this Report is that clinical governance is not entirely

new to the NHS and earlier initiatives such as clinical audit, care pathways, risk

management and so on have been incorporated. However, the Report does highlight

the fact that there are a number of new aspects to this policy which are identified as: an

organisational focus for quality improvement; corporate accountability for and the

involvement of management in clinical quality; and the need to integrate new and

existing systems for quality improvement. The Report identifies a key role for the

Trust Board in refocusing the culture from financial to clinical issues and also explores

what clinical governance might look like at different levels, for instance: the

organisation, division, team and the individual practitioner.

Although a vision of what the organisation might want to see in place once clinical

governance is established is described, the Report does not include a comprehensive

overview of the present organisational situation in relation to this future state. Instead,

there is a brief reference to the challenges currently being experienced in relation to

several of the existing components such as clinical audit and risk management. Thus, it

is difficult to get a sense of the scale of change required in order to make this transition

merely from the information contained within the Report. Nevertheless, the Report

serves as a useful introduction as to how clinical governance has been conceptualised at

the corporate level. The stated intention was that the framework outlined in the

document would form the basis of future discussions with the divisions as part of the

process of formulating a development plan.

6.2.2 The Clinical Governance Development Plan

The 'Clinical Governance Development Plan' (Internal Trust Document, 2000a; unless

otherwise stated, all future references to the 'Clinical Governance Development Plan'

or the 'Development Plan' will relate to this document), was subsequently approved by

the Trust Board in January 2000 and has formed the basis of the corporate approach,

without revision, since that time. The Development Plan reiterates the framework for

clinical governance outlined in the earlier Report and specifies a series of short-term

objectives under each of these headings, identifies the managers with a key

responsibility for the achievement of these objectives together with the timescale for

completion.

In total, the Development Plan identifies a set of 39 objectives; of these, 36 are

described either as ongoing or to be achieved in year one. Many of the objectives refer

to an end state such as the aim 'to achieve CNST level 2', the establishment of a

corporate focus for clinical audit and the development of a policy for the reporting of poor

clinical practice. Other objectives address certain process issues such as

communicating the agenda and providing training. In addition, a subset of the

objectives are targeted specifically at the divisions with Divisional Managers clearly

identified as having the lead responsibility for their achievement. In spite of this, a

series of interviews which took place with senior managers during September -

November 2000 (9-11 months after the Development Plan had been approved by the

Trust board) revealed that, whilst all interviewees knew that the Development Plan

existed, there was little evidence to suggest that it had become a living document within

divisions or that it was explicitly guiding local approaches to clinical governance

implementation. In fact, four out of the six divisions did not have an action plan to take

forward the clinical governance agenda.

In light of the apparent lack of goal deployment described above, a review of progress

against the Development Plan was a key recommendation of the research feedback

provided to the Trust in December 2000. At that time, a period of 12 months had

elapsed following the approval of the Development Plan by the Trust Board which

suggested that it would be timely for the organisation to evaluate its achievements to

date and assess the amount of work outstanding. However, the Trust view was that the

movement towards PCT status would have implications for the shape and

operationalisation of clinical governance in the new organisation; therefore, it preferred

to focus on consolidating rather than revisiting the existing objectives.

The 'Clinical Governance Development Plan' represented an action set for

development rather than a comprehensive strategy for clinical governance

implementation. Taken together with the 'Clinical Governance Report', these two

documents formed the basis for the Trust approach to clinical governance

implementation; however, during the research period, these documents were not

augmented by a comprehensive implementation strategy.

6.3 STRUCTURES TO SUPPORT A DEVELOPING AGENDA

Early development activity within the Trust has included the establishment of structures

to support clinical governance. Committees and groups at both corporate and divisional

levels have been formed and an important first step has been the appointment of the

Clinical Governance Lead.

6.3.1 Clinical Governance Lead

In line with the guidance issued in 1999 (Department of Health), a senior clinician was

nominated to act as the Trust lead for clinical governance. In addition to leading the

clinical governance agenda, the Lead is also an Executive Director of the Trust with the

Research & Development (R&D) brief. Within this combined role, the implementation

of clinical governance forms a significant part of the Director's portfolio and this has

allowed the Clinical Governance Lead to protect time in order to focus on the

development of this agenda. Making this appointment at a senior level is regarded by

this Non-executive Director as an indication of the Trust's commitment to clinical

governance:

'Having a director almost working full time (on clinical governance) has
given clinical governance more kudos'.

Over time, the Lead's brief has expanded in line with the growing agenda at a rate not

anticipated at the outset. No formal Terms of Reference have been developed to focus

the role; although the Clinical Governance Lead has line management responsibility for

the library services, there is no functioning operational structure to support her in the

discharge of the wider clinical governance remit. As Clinical Governance Lead, she

either chairs or is a member of a number of the key groups/committees which are either

directly/indirectly associated with the clinical governance agenda. Partly because of the

lack of a supporting operational structure, and partly because of the apparent difficulty

in engaging some of the key players with the implementation process, the Lead has

become increasingly embroiled in operational clinical governance and this activity is in

danger of displacing the strategic aspects as the following comments illustrate:

'There is a problem with being the clinical governance lead because the
issue is, if you don't lead things... ... The job is about trying to get things
actioned and a lot of this takes my personal time because maybe some
people aren't as engaged as they might be, that's maybe one of the
reasons why everything continuously comes back to me. If I haven't
done it, has it actually happened? I don't want clinical governance to
be something that I do but at this stage in the development, I often have
to be the person who kicks it off and how much time is there as the
clinical governance lead to do that. The problem is that if you have a
lead everyone thinks you are leading so it's a double whammy in a way'.

'That's one of the things I have been going on at the Chief Exec recently
to; I said I can't do this job unless I have time to think and for several
months this has just not been possible; I appreciate I generate this stuff
myself, but I can't even fit it in within very long working days, five days
a week; therefore the temptation is whenever you're in the office I have
got loads of things I've got to get out, got to get done, therefore there is
no time for reflection which I actually think is a weakness in my current
role at the moment... I don't have enough time to reflect. You're in
danger then of ending up being busy and I do get a lot of things just like

producing stuff for meetings ... because I haven't got anyone who can
produce that sort of thing for me'.

'Also, just about trying to think; You've come to me and asked me what
is my agenda for next year, and I must admit because you had e-mailed
me... it did make me think but I have got to sit down and put some time
into that question. It's about finding the time to do that'.

6.3.2 Clinical Governance Sub-committee

In addition to appointing the Clinical Governance Lead, a new sub-committee of the

Trust Board was established in February 1999 - the Clinical Governance

Sub-committee. The intention was that the Sub-committee would meet monthly and, apart

from one or two exceptions to this, has met on that basis. The Sub-committee is

chaired by a Non-executive Director of the Trust and the meetings have been

consistently attended by most of its members.

The Sub-committee's Terms of Reference suggest that it is responsible for ensuring that

arrangements are in place to deliver the overarching objectives of clinical governance

and for co-ordinating the activity required to support this which implies a responsibility

for both an assurance and a steering function. The Development Plan provides the

main vehicle for delivering both functions; and, in the absence of an implementation

plan, the objectives contained in this document provide some indication of the direction

of travel which will go some way to enabling the Sub-committee to discharge its

steering role. The Sub-committee did not go on to develop a work plan to focus its

attention in the first or subsequent year.

Whilst the Terms of Reference made clear that the minutes of the Clinical Governance

Sub-committee meetings would be circulated to members of the Trust Board and the

Sub-committee, there was no formal arrangement to disseminate these to a wider

audience for quite some time. Although several members of the Management Team are

also members of the Sub-committee, not all are represented and the minutes of the

Clinical Governance Sub-committee meetings are not a standing item on the agenda of

that key operational group or, for that matter, on the agendas of the Clinical

Governance Development Team or the Risk Management Team. Thus, in the absence

of an operational structure, the Clinical Governance Lead is the only mechanism for

taking the recommendations of the Sub-committee out into the wider organisation.

Initially, membership consisted of two Non-executive Directors (one of whom chairs

the Sub-committee), the Trust Chief Executive, the Clinical Governance Lead, the

Nurse Director, and the Medical Director. Later a consultant psychiatrist joined the

Sub-committee to represent the Division of Psychiatry. In September 2001, after it had

been in existence for over two years, the membership of the Sub-committee was revised

and extended dramatically to include a representative from each of the divisional

clinical governance fora and representatives from each of the three Primary Care Group

(PCG) clinical governance committees. The rationale for this was to increase the

effectiveness of communication between the Clinical Governance Sub-committee and

the other clinical governance groups both within and outside of the organisation.

The absence of any representation from the Human Resource (HR) function,

Information Management and Technology (IM&T), User Involvement, Finance and the

Divisional Managers is perhaps curious given the centrality of these functions in the

conceptualisation of the Trust's clinical governance agenda and the stated aim of

delivering greater integration: - 'to integrate and identify clear links between the

developed policies related to Human Resources Management, Information and

Knowledge Management' (Internal Trust Document, 2000a, p5). Although the Sub­

committee had, for some time, been considering how best to incorporate the perspective

of service users and carers into the clinical governance agenda, there was also a

determination to avoid tokenism and to ensure that the users were enabled to contribute

effectively whilst maintaining their unique perspective. Although the Sub-committee

appears committed to the concept, this is an issue which is yet to be resolved, as one

Sub-committee member put it:

'Within the Trust we do quite a lot of specific user involvement,


particularly with mental health. It's how to bring that all together and
see how it informs clinical governance? It's just too easy to say we'll
have two users on the committee and that's cracked it because it
obviously doesn't...... ...I've been on other committees where you have
the representatives of the users and before you know where you are
they've become quasi-professional and they are not doing what they are
supposed to be doing. We need to find proper methods of working with
the public outside the clinical governance arena and bring in the results
of that back into the clinical governance structure'.

From the minutes of the Clinical Governance Sub-committee, the breadth and content

of the agenda has developed over time; however, the agenda itself is generally

compiled by the Clinical Governance Lead - apparently it is rare for other members to

put forward items for inclusion. In the early days, the focus of attention was on

defining its Terms of Reference, considering the linkages with other groups and discussing

the format for reporting to the Trust Board. Over time, the Sub-committee has received

a greater amount of information upon which to base discussion; included as regular

items are the quarterly risk management reports and the Significant Clinical Incident

Review reports.

The establishment of the reporting system has contributed to a feeling amongst some of

the original Sub-committee members that they are getting to grips with the agenda; this

may also be a function of sitting on a variety of committees:

'(Agenda) seemed huge at first and every meeting we went to it got


bigger and more worrying because you couldn't really see how it would
begin to work because it was all terribly complicated but over the last
quarterly report that went to the Board, there you can see the various
strands come together and things beginning to interrelate - I'm on the
Complaints Committee and the Audit Committee and the Mental Health
Committee and all those things feed in to it'.

Another member noted that the presentation of information in the current format

enabled those reading the report to identify trends:

'We obviously look at quarterly returns; those things you see we didn't
look at before; now we've got all these quarterly returns, bringing all
this different information together so we can spot trends, we didn't do
that before... ... ...we are bringing from all these different areas of the
Trust all this information together in a very simple tabulated form, all
the different groups that meet - prescribing, complaints etc; all coming
together and we can simply look at it every quarter and spot
trends...... ...we've always had these floating around but they are
coming together in a more user-friendly, manageable form so we can
quickly look at them'.

The researcher has been a regular attendee at the Clinical Governance Sub-committee

meetings and has observed the lively discussion that is often a feature of the

proceedings. However, much of the information provided in the consolidated report is

operational data and primarily produced for other audiences. It is generally existing

data rather than information based on an assessment of what the Sub-committee

actually needs to be able to meet its Terms of Reference with particular regard to its

assurance and steering functions. Whilst the report does give an update against aspects

of the Development Plan, the existing reporting framework is yet to capture the full

range of clinical governance activity in detail; for example, the coverage of appraisal

and Personal Development Plans (PDPs); the progress of clinical effectiveness or

clinical audit; the progress of the IM&T strategy. The information is essentially

passively received rather than actively sought. Members of the Clinical Governance

Sub-committee seem to appreciate what they receive but do not generally appear to

recognise gaps.

Whilst some of the more established Sub-committee members seem to think things are

coming together nicely, some of the newer members quoted below are not even clear

why they are on the Sub-committee or the implications of being a member of a formal

Sub-committee of the Trust Board:

'I'm not sure (why I'm here), I've just been told to come'.

'I just got a letter from (my manager) saying that she put my name
forward... ... ...I didn't get a chance to read them (the minutes before the
first meeting)... ...... well I sat there and was thinking - obviously he is
accountable as the chief exec but he was trying to say that the group as
a whole was responsible - and I sat there and thought so how
responsible does that make me and I don't know, I don't know... ... ...I
did actually think if anyone actually questioned it how far would they
take it. If there was an issue would it come down to me, little me, would
it really and I couldn't really believe it'.

The issue of establishing tangible linkages with other structures has also proved a

challenge for the Clinical Governance Sub-committee. From the outset, an explicit

objective was to establish formal links with a number of existing sub-committees and

groups which included: Complaints Sub-Committee, Training and Development Group,

Risk Management Team, Professional Advisory Groups, Clinical Directors, Heads of

Therapy Services, and the Information Strategy Group. Although, as indicated earlier

in this section, individual members of the Clinical Governance Sub-committee may

attend one or more of these groups, the link appears more in the form of the individual

as opposed to a formalised system of information exchange. Although recognised by

some in the Trust, this matter was not resolved during the 18 months of fieldwork.

6.3.3 Divisional Structures

In addition to developing a structure to support clinical governance at the corporate

level, an important objective in the 'Clinical Governance Development Plan' was the

establishment of a clinical governance forum in each of the clinical divisions. It was

envisioned that each forum would provide direction, co-ordination and support to the

division as it took forward the clinical governance agenda. The forum would also

perform a monitoring and reporting function.

The divisions have been encouraged to develop a forum model that would reflect local

circumstances. All but one is chaired by a clinician or a manager with a clinical

background and the professional background of the chair understandably differs from

division to division. The rate at which these fora have been established has varied with

one local forum meeting for the first time as recently as February 2001 - 12 months

after the Development Plan identified this as an objective for all Divisional Managers.

The findings of the rapid appraisal, which took place towards the end of year 2000,

indicated that the divisions had not undertaken a baseline audit of the coverage or

effectiveness of existing quality improvement systems such as clinical effectiveness or

risk management processes; four out of six divisions were still to develop a local action

plan for clinical governance. In fact, there was little evidence that the Trust

Development Plan had diffused into the organisation and become a living document.

By early 2001, each division had established a forum although they appeared to be at

different stages of development and performance. This appears to be a function of a

number of factors which include the length of time the forum had been operating, local

priorities and even the corporate style, which has a tendency towards 'shaping' initially

rather than prescription but seems to become more directive and prescriptive in the

absence of progress.

Whilst there has been a growing emphasis on sending information upwards, there has

been less attention to the mechanisms for disseminating information downwards;

reliance has been on the Divisional Managers although the Management Team does not

routinely receive the minutes of the Clinical Governance Sub-committee meetings. It

was only when the membership of the Clinical Governance Sub-committee was

expanded to include representatives from each division that all local fora had access to

the minutes.

6.3.4 Clinical Governance Development Team

From the outset, the Trust anticipated the need to provide support and facilitation to the

divisions to enable them to take the clinical governance agenda forward. It was

proposed that this role would principally be fulfilled by a new team, the Clinical

Governance Development Team (CGDT).

Initially the role of CGDT was to focus largely, although not exclusively, around the

'Knowledge' component of clinical governance; however, over time, a greater emphasis

on the wider clinical governance agenda appears to have emerged. This is reflected in

the notes to the meetings which have taken place on a fairly regular basis for more than

18 months. Since the inception of CGDT, membership has gradually expanded to

ensure that all clinical divisions have access to a named CGDT member. In mid-2001, a

joint appointment was agreed for a facilitator to work across one of the PCGs and the

corresponding Locality of the Primary Care Division. Recruitment to a part-time post

of Care Pathway Facilitator is also planned but has not yet taken place.

Part of the role of the Clinical Governance Lead is to co-ordinate the work of the Team;

the characteristics of which indicate a project rather than a functional group, given that

several members have substantive, operational posts within the divisions for which they

also act as facilitator. Other Team members have broader, Trust-wide remits; for

example: one manager has specific objectives relating to areas such as clinical audit and

management responsibility for the Library services; another manager has a specific

remit in relation to R&D activity. Still others have functional roles within either the

Library Services or Formulary Pharmacy Audit and, as such, act in a more central-

resource capacity.

So far, the Team's input to the divisions appears to have been variable in terms of both

content and quantity. Activity has included the provision of general support to the local

divisional clinical governance fora and facilitation of the local reporting process in

relation to the Trust annual clinical governance report. The variability observed is due

to a number of issues which will now be considered. There has been a long delay in

getting all of the members into post so the CGDT, as an entity, has been slow to

develop:

'Think that we have been forming and re-forming and forming and re-
forming so we have never really got to storming, haven't got near
norming'.

Related to the delay in establishing the Team, most members expressed a lack of clarity

around this role. Although objectives for the CGDT itself had been developed and

there had been some move to translate these to the level of individual team members, a

work plan setting out priorities in-year had not been developed. For some Team

members, the broad objectives need refocusing:

'My personal opinion is that we could do with vision and focus again.
We had a meeting last week at which that person, that person and that
person were there for the first time in their current roles. So over the
last 12 months the group hasn't stayed the same... ...could do with
looking at the team and roles and whether they are clear enough'.

'My personal feeling is that there is a sense of confusion about specific


detail of roles within the team so I think we still have some work to do
clarifying exactly what our responsibilities are and how they interface
because there is going to be a lot of overlap...... other people have role
extensions... ...we need to revisit our role as a team and
individuals...... need to revisit our objectives and what we have achieved
and where the gaps are'.

"Don't really know what my role is on that (CGDT). I am pretty clear


what my role is in relation to clinical governance, I think; I don't know
- I suppose I am probably a bit thick, but I don't know where the
Clinical Governance Development Team stops and clinical governance
begins or the other way around. I've looked often at the aims of the
Clinical Governance Development Team and they don't give me any
direction really'.

There is also a capacity issue for some individuals whose Team role is performed in

addition to the operational requirements of the division. Although this dual role has

been negotiated with the respective Divisional Manager, the additional Team

responsibilities are not always reflected in the core role objectives and, in some cases,

the part-time Team role has become an 'add-on' to full-time divisional duties:

'(Recent IPR) ...All my ongoing stuff was left out and I said, I'm sorry
but that is a big job, that is going to have to go in as well. The ongoing
stuff we leave out (of the IPR) because there is other stuff that needs to
be tackled... ... You obviously have a good idea of what the Clinical
Governance Development Team is set up to do; I don't have any idea at
all because I just spend my life running between divisional objectives
and the stuff coming out of the clinical governance forum. I inherited
that role at the same time as I linked into the Clinical Governance
Development Team. So if the Clinical Governance Development Team
is very different to what I am doing, I don't know how I would fit it in
anyway'.

6.3.5 Risk Management Team

The Risk Management Team has been meeting regularly within the Trust for around

three years. More recently it has been chaired by the Clinical Governance Lead and

membership includes senior managers of both clinical and support divisions,

representatives from professional groups, staff-side representation and managers from

specialist areas, for example Health & Safety, Complaints. The Risk Management

Team reports to the Clinical Governance Sub-committee, the Trust Board and, in terms

of Controls Assurance, to the Audit Sub-committee.

In January 2000, the Terms of Reference of the Team were reviewed and amended to

reflect the need to incorporate clinical risk management into what had previously been

a non-clinical risk focus; thereby developing a more integrated approach to risk

management within the Trust. It was also intended that the Risk Management Team

would adopt a more proactive approach to the risk management process. The Team's

original dataset was therefore augmented to support a greater degree of both analysis

and also corrective/preventative action based on this analysis. One example of this is in

the area collectively known as 'slips, trips and falls'. Previously, incidents would have

been discussed locally but the practice of reviewing an integrated dataset has meant that

the Risk Management Team was able to identify a shared and recurring problem. A

working group has subsequently been formed to explore the issue in greater depth,

identify root causes and propose action to manage the risk to both service users and, in

some cases, staff.

An intended function of the Risk Management Team is the co-ordination of elements

deemed to sit beneath the risk management umbrella. Therefore, this has become the

forum for taking forward the Clinical Negligence Scheme for Trusts (CNST)

accreditation at level one and, more recently, Controls Assurance.

The Trust has recently reviewed and revised its 'Risk Management Strategy' which was

approved by the Trust Board in September 2001. In addition to providing strategic

direction, the document also sets out a detailed plan designed to take the risk

management agenda forward over the next two years. The strategy not only makes

explicit the corporate and divisional/directorate responsibilities for delivering this

agenda but also defines the specific responsibilities of the Trust Board, divisional

managers, departmental/service managers and also those of staff within the Trust.

Discussions are currently taking place with regard to the mechanism for launching the

strategy and cascading the corporate objectives to and within the divisions/directorates.

This appears to be a very different approach to the implementation of clinical

governance per se in that it appears highly structured and deliberate.

Risk management within the Trust is evolving with the intention that it will become

increasingly systematic and integrated. However, although there are managers leading

on a limited number of individual risk management components, there is a sense that

the only strategic and operational overview rests with the Clinical Governance Lead.

The Lead not only has the overview but also provides strategic leadership for risk

management and is engaged in delivering operational aspects of this agenda. This latter

has included: leading workshops, leading Significant Clinical Incident Reviews,

Significant Event Audits, providing facilitation and support to divisions/directorates.

Within the Trust structure, there is no provision for a risk manager post; the Health &

Safety officer collates and compiles reports on risk data quarterly for the Risk

Management Team but reports in a line management capacity to the Finance Director.

Thus, although the Clinical Governance Lead is heavily involved in the risk

management agenda, she does not exercise a line management role in this area.

It has been interesting to observe the development of this group over a period of time;

specifically the evolving agenda and the growing use of regular information as a basis

for further enquiry and action upon which a number of its members have commented

favourably:

'There has been a change in focus since (the Clinical Governance Lead)
has been chairing the meetings. Previously we just reported the
numbers but didn't go beyond this'.

'We have become much more focused around the issue of risk
management. We've become more aware of the fact that we've got to
learn and take action from the risks that have been identified. In its
early days, it was very much a reporting mechanism with very little
direct action resulting from the risk management committee, that's the
area where we have started to build on. Once we started to identify
common risks we've then tried to commission action to try and address

those risks'.

'We've become a lot more proactive and we're working more as a team
now whereas before we had pockets of different people doing different
things - not saying people doing all bad but we didn't have a forum
where we could discuss best practice... ...people can see not just a paper
chase (now)'.

The developments described above are part of an on-going process but significant gaps

in the present dataset such as clinical audit, complaints, and the Significant Clinical

Incident reports inevitably limit the ability of the Team to integrate the risk

management agenda and discharge its monitoring function. Also, there does not appear

to have been a systematic baseline assessment of the risk management process per se or

an assessment of manager/staff training needs around this agenda.

6.3.6 Training and Development Group

The Trust's 'Clinical Governance Report' highlighted education and development as a

central element in the establishment of clinical governance in the organisation:

'Effective clinical governance needs to be underpinned and supported by


the education and training of clinical staff that is relevant, up to date,
flexible in its delivery and meets the needs of individual practitioners as
well as the needs of the Trust'. (Internal Trust Document, 1999; p9)

The Development Plan identifies a number of specific objectives around the education

and training agenda; these include action around appraisal and Personal Development

Plans (PDPs), best practice in user involvement, training staff to access best evidence.

There is also a reference to the 'newly established Training Strategy Group' whose

brief includes the review of education and training needs across the organisation.

Membership of the Training and Development Group reflects the membership of the

Management Team; essentially, the group is the Management Team meeting to focus on

specific issues relating to the training and development agenda. Although the budget is

administered by an Executive Director of the Trust, training and development appears

to have been devolved to divisions; it is not part of the remit of the HR function.

Eighteen months after the Development Plan was produced, a strategy for education

and training had not been formulated. In the absence of a strategy and given that

employee appraisal is still not universal, it is not clear how priorities for education and

training are determined. Although there is training in appraisal and incident reporting,

there is no evidence to suggest that an integrated corporate approach has been adopted

to equip staff with the range of knowledge and skills to engage effectively in clinical

effectiveness, risk management, quality improvement and other specific elements of the

clinical governance agenda.

There is a sense amongst some interviewees that, apart from budgetary control, current

arrangements concerning accountability and responsibility for the delivery of training

and development within the organisation are less than clear. Some members of the

Training and Development Group did not necessarily feel they had an overview of the

whole agenda and were not entirely sure who amongst them would have:

'We used to have a training manager who reported to (the ED) ... ...we
have come right away from that now and apart from the assessment
centre there is no centralised training function at all. I don't care who
leads the training function but my concern is that there is no overview
...... I don't think there is a feeling that it is being pulled together at all
...... this is not a criticism of (ED) because I don't think this is what her
role is envisaged as being. Think it is an area where you do need an
overview'.

6.3.7 Libraries

Another key component of the Trust's clinical governance framework concerns

'knowledge management' which has, in theory, if not yet established in practice, a

strong connection with the education and training agenda. The geographical dispersion

of Trust services presents a particular challenge when considering how to facilitate staff

access to the knowledge for practice. The Trust provides library services from three

sites. Initiatives to improve these services have focused on a number of areas: an

increase in staffing to raise the level of professional input, the development of library

systems and processes to enhance the service and meet the needs of a multidisciplinary

workforce. As a consequence, there has apparently been an increased investment in

staff, IT equipment, electronic resources, books and a rationalisation of journals.

Access issues due to the geographical dispersal of services have been recognised by the

Trust which has acknowledged the need to provide 'virtual' library services. However,

discussions with a number of front line staff suggest that, whilst physical access can

still pose a problem for some, there may also be a cultural barrier - this clinician

described how she felt guilty about accessing the library 'in work time':

'I'm just saying it is a luxury, and I often feel very guilty if I go to the
library in work time because they are all so far away ... ...you can't just
pop in and pick a book up or drop a book off. It's couple of hours to go
there to look for something and then to follow it up; you tend to do all of
that in your own time. Feel there is a stress factor there - if you're
doing that there's something you're not doing; back to down to the
bones again in terms of time'.

The service has been working towards accreditation under the 'Line Health Panel

Accreditation Scheme' and recently achieved level three; in her report, the assessor paid

tribute to the hard work of the team involved. Although more work is needed, the

content and tone of the accreditation report suggests that, in achieving level three

accreditation, the Trust has established a sound basis for future development.

6.3.8 Related Structures

Given the complex and far-reaching nature of the clinical governance concept, there

are, of course, a large number of groups within the Trust which contribute to the

delivery of the clinical governance agenda. Just some examples would be the Audit

and Complaints Sub-committees, committees and groups taking forward the work in

the areas of Health & Safety, Nursing Practices, Drugs and Therapeutics, Information

Management and Technology (IM&T). Many of these groups have been functioning

for some time and the need to establish formal links between new and existing

structures to achieve a greater degree of integration between systems was identified

early in the clinical governance implementation process. Although the Trust recognises

that progress has been made in some areas, there is an acknowledgement that more

work is needed and discussions have been ongoing in relation to mechanisms for

increasing the level of integration required but, as indicated earlier in this section, this

is an issue which is yet to be resolved.

As highlighted earlier in this chapter, individual managers sit on a number of groups or

committees but sharing information from one structure to another often relies on the

individual rather than an explicit system. As an example, although IM&T systems

underpin much of the work of the Trust, whether as clinical information systems or in

providing internet access to related information, it is difficult to see from the minutes of

the Clinical Governance Sub-committee how the strategy for IM&T explicitly
influences/informs the agenda of this group. The minutes record little in the way of

discussion around this aspect of the Development Plan.

6.4 SYSTEMS AND PROCESSES

As new structures were being introduced and, in some cases, the work of existing

structures re-focused, development of the supporting infrastructure was also taking

place with the introduction of new and the re-focusing of existing systems and

processes. Many of these initiatives are intended to 'capture the learning' either from

within the organisation itself, the users of its services, and/or from external sources.

6.4.1 Dissemination and Implementation of Good Practice Guidelines

In order to strengthen existing mechanisms for capturing the increasing amount of

evidence-based information received by the Trust such as National Institute for Clinical

Excellence (NICE) Guidelines, Technology Appraisals and Effective Health Care

Bulletins, the Trust has developed a system for the Dissemination and Implementation

of Good Practice Guidelines'. This new approach was introduced mid 2001 to ensure

that all relevant documents entering the Trust are either received directly by or sent on

to the Clinical Governance Lead. These documents will then be logged, reviewed by

the Clinical Governance Development Team and allocated a priority rating which

ranges from 'immediate consideration' to 'planned review' and 'for note only'.

Once the initial review has taken place, a course of action for each priority is specified

which may include dissemination to the local clinical governance fora and/or other

Trust groups for consideration. In each case, responsibility for the process within the
target group is accorded to a named individual. High-priority guidance will require a local/Trust

review of existing practice against that which is specified in the guidelines,

recommendations to address gaps identified and a report on findings/action proposed to the

Clinical Governance Sub-committee - all within pre-determined time-scales.

The Trust recognises that, in addition to NICE guidance etc, it will also need to

consider how the reports arising from other external sources will be incorporated into

the system for explicit processing. Potentially, this will include the reports from

national inquiries (e.g. the Kennedy Report on paediatric cardiac surgery at Bristol

Royal Infirmary), Commission for Health Improvement reviews, external clinical

incidents (e.g. problems associated with the administration of Vincristine) and other

guidance/information such as that issued by the Royal Colleges and Department of

Health.

The system provides a clear framework for capturing, disseminating, discussing and

acting upon key information coming into the Trust. A mechanism is in place to track

these processes and a key challenge will be to incorporate this system within the

broader clinical governance agenda by, for example, informing the Trust clinical

effectiveness and audit programme, risk management etc. By the end of the fieldwork

period, the system had been in operation for approximately five months and a review

was planned of the process itself, the nature of outcomes arising from the process, and

the progress of any subsequent action (divisional or Trust-wide).

6.4.2 Clinical Audit

Clinical audit is a component of the clinical governance agenda which has been

undertaken within the Trust for a number of years. Despite this, the Trust

acknowledges that a number of factors have presented a challenge to its efforts to

'maximise the benefits of the programme' (Internal Trust Document, 1999; p11). These

factors have apparently included: a less than multi-disciplinary approach,

fragmentation, issues around the implementation of change where indicated and also

perceptions and attitudes around the importance of clinical audit. A small number of

Trust-wide projects have been undertaken which focused on consent and record keeping; however, clinical audit activity has been devolved to the clinical divisions.

The local clinical governance fora are responsible for developing a programme for

audit as part of the local clinical governance plan but there is no corporate plan for

clinical audit which would guide the fora on the key issues facing the Trust from both a

national and more local perspective. However, as part of the system for the

'Dissemination and Implementation of Good Practice Guidelines', it is intended that

audit against the national guidance will contribute to the Trust programme of clinical

audit where there are Trust-wide implications. Alternatively, where guidance is more

locally relevant, it should inform the divisional programme.

Prior to the establishment of the Clinical Governance Development Team, clinical audit

was co-ordinated and facilitated centrally. Whilst members of the Team have provided

support around aspects of audit to divisions and latterly the local clinical governance

fora, the extent of this support has varied depending on the other operational aspects of

the Team member's remit and also the number of divisions for whom the individual has

been acting as facilitator. It is anticipated that the recruitment of new members to the

Team will mean that support and advice around clinical audit becomes available to all

areas.

The level of clinical audit activity taking place is perceived as low overall although

some areas are doing more than others. Organisational changes are thought to have

created capacity issues but changes in arrangements for support and a lack of data are

also regarded as contributory factors, as this manager commented:

'There probably isn't (a lot happening). This is what disappoints me -


there has been a lot of audit going on and in the last 18 months it has
really rather withered and they really need to regenerate their focus.
The people who have been there are wanting to do it, it's just that they
have been applying for posts, re-organising, setting up new teams. They
are now at the point where they can start to think about quality... ... the
problem with audit work is that it is very time-consuming, just so labour-
intensive because we don't have the data. So one of the reasons why I
think we don't do a lot of re-audits is that people are so exhausted'

'I could weep; if I look at some of the stuff I am doing, it is small beer,
it's not changing practice because it is too small, it is not consistent
enough, we haven't got the data, we can't show things consistently
enough so we might do a bit of tinkering round the edges; but the time
for tinkering round the edges is gone'.

In December 2000, the Trust introduced a new summary report form to capture key

details of the audit work being undertaken in order to build up a profile of clinical audit

activity in the Trust. This not only gives an indication of coverage and methodology

but also seeks to establish the changes which have been made to practice and any

subsequent benefits to patients resulting from this audit activity - i.e. that the audit

cycle has been completed. Information on clinical audit is collated by the Clinical

Governance Support Manager. In addition to the summary sheets which are returned

centrally, the divisions have recently been required to include an overview of their clinical audit activity in their annual clinical governance reports, the data from which has been incorporated into the annual clinical governance report of the Trust. These

centrally collated summary sheets provide valuable information on clinical audit

activity; however, it is not clear from the datasets currently available how this informs

the routine clinical governance monitoring and reporting system. The lack of a

deliberate integration mechanism means that clinical audit results are not considered

with other elements of the clinical governance agenda such as risk management,

complaints, Significant Clinical Incident Reviews and so on; thus there is less opportunity for

each to inform the other.

Whilst training in literature searching has recently been available, training around the

wider clinical effectiveness and the clinical audit agendas has not been provided in-

house for some time due to changes in the arrangements for facilitation. It is important for staff and managers to have the relevant knowledge and skills if they are to undertake this activity; many staff have apparently acquired these through postgraduate study, but this is not the case for all.

6.4.3 Raising Issues of Concern

Continuous improvement in the quality of services for patients and clients is a central

aim of clinical governance and opportunities for improvement may be identified in a

number of ways. In addition to the group and committee structures described earlier,

the Trust has developed a policy and procedure to assist individual staff should they

identify gaps in service quality or wish to share concerns regarding service delivery.

Through its policy 'Raising Issues of Concern', the Trust has made explicit the

responsibility of all staff to bring any such concerns to the attention of their managers.

A procedure for raising issues has been outlined which provides details of the multiple

avenues open to staff together with information on the action to be adopted by the Trust

in response. Both the policy and procedure have been incorporated into the Trust

Policy and Procedures Handbook.

Since the introduction of the policy in October 2000, the procedure has been invoked

on one occasion. Investigation revealed that the matter was already being addressed

locally although the individual who had made the report was apparently not yet aware

of this. It is anticipated that the investigation process will be similar to that established for incidents in general unless an issue meets the criteria for Significant Clinical Incident Review, in which case the latter approach may be considered more appropriate.

The Trust recognises that staff may find it difficult to report concerns, particularly

where the issue involves the actions of colleagues, some of whom may be in a more

senior position. The usage of this mechanism will continue to be monitored, and, if the

low level of reporting persists, an assessment of the underlying reasons for this will be

made.

6.4.4 Incident Reporting, Trigger Events and Significant Case Reviews

A mechanism for reporting incidents associated with the Health & Safety agenda has

been operating within the Trust for a number of years. The need to increase the

reporting around clinical incidents has been recognised and efforts are ongoing to raise

awareness of this; however, training courses have apparently been suspended due to

low numbers. A workshop for senior managers took place in July 2001 and was well

attended, apparently well received and should, therefore, provide a positive starting

point for further efforts to raise the profile of clinical incident reporting and, indeed,

risk management in general.

As part of the development of clinical incident reporting, a pilot project is taking place

to develop the use of 'trigger events' which, should they occur, will indicate to staff that

an incident report is required. A process of 'significant case reviews' is also being

piloted. This process is intended to focus on complex clinical cases or cases where the

boundaries of practice are being extended to address the specific needs of the clients

concerned. Each of these pilots is at a relatively early stage of implementation and the

intention is that an evaluation will take place in due course.

6.4.5 Significant Clinical Incident Review

When a clinical incident occurs, there may be important lessons to be learned which, if

acted upon, may minimise or, in some cases, even eliminate the risk of such events

happening again. The seriousness of such incidents often varies as does the outcome

for the patient/client, carer and the Trust. To address this issue, a system for the

investigation of significant clinical incidents - 'Significant Clinical Incident Review' -

was introduced early in year 2000. The aim of the review is to take a whole-system

approach to investigating the incident, determine the root cause and effect appropriate

remedial action. The procedure is clearly defined and extends from the notification of

the incident, rapid appraisal to determine whether it should be dealt with under the

Significant Clinical Incident Review procedure, through to the conduct of the

investigation, remedial action and the reporting mechanisms intended to support the

monitoring of progress against the corrective action plan arising from the incident.

In an effort to signal a commitment to openness, to make explicit the lessons learned

and the subsequent action to be taken, the Trust has introduced a mechanism for

disseminating the Review report both internally and externally to the wider health

community. Depending on the audience for the report, the detail and format will vary;

full reports of the incident are sent to and discussed at the Clinical Governance Sub-committee and Management Team meetings. The Clinical Governance Lead will visit

the patient and/or family to discuss the findings of the report before sending a copy to

them. Anonymised summaries are also distributed to divisional clinical governance

fora, to the Trust Board, and to external organisations such as the PCGs and the County

Quality Board. A follow up audit is undertaken after an agreed period of time to

confirm that the required remedial action has taken place.

By the end of the fieldwork period, 18 Significant Clinical Incident Reviews had taken

place over a period of approximately 18 months. Although Divisional Managers felt

that such incidents had been taken seriously in the past, there was the sense that this

recent approach had provided those whose division was directly involved in the

incident with an explicit framework for investigation and action.

The Significant Clinical Incident Review is an important initiative and has started to

signal a change in the 'way we do things here'. However, observations in the field

suggest that there are still a number of issues to be addressed; in particular, those

relating to assumptions about the level of openness at the front line of service delivery,

the perceived transferability of lessons learned and the report dissemination process.

As outlined earlier, reports are disseminated widely and the level of detail varies

depending on the intended audience. The effectiveness of dissemination relies on a

number of factors, one of which is the degree of openness within the different levels of

the organisational hierarchy. The openness at the corporate level may not be matched

by local approaches to complaints, significant incidents and so on. A number of front

line staff commented on the tendency to keep complaints and incidents as a local issue

involving only those staff directly affected; they see this as a cultural phenomenon.

Within the context of formal staff meetings, it does not appear that complaints or

incidents are discussed even in broad terms such as the numbers received on a quarterly

basis. Local staff are filling in incident forms but are not apparently receiving feedback

on their efforts, which may lead to cynicism:

'I think it's about culture. We are looking at culture changes, ... ...from
a culture that hid away and didn't discuss it at all (to) a culture that
allows you to discuss critical incidents, near misses, significant events in
an environment where you expect positives as well as negatives; it is this
sharing of information across divisions which could put you in a
negative light... ... and that is very embryonic but it's starting'.

'Feedback is really important but when you don't get any it's
demoralising; we've done this, we've sent in this form and you get
nothing back from it that's perhaps going to improve something - (that)
this, this and this has been done - you get a bit cynical - you think what's
changed?'

Feedback from several of the divisions suggests that, in some cases, where an incident

has occurred in a specialised part of the service, there is a risk that the relevance of the

lessons learned may not be appreciated in other areas. This has led to a situation where

local circumstances are less likely to be reviewed to determine whether similar

corrective or preventive action is indicated. In addition, whilst some managers are

acting on the reports, not all managers know what to do with them once received:

'Significant Clinical Incident - this information cascading down to us;


how many of us in that room this afternoon know what to do with that?
Where do we take it, what do we do with it. Do we think about it or are
we just going through the motions. I knew about the (particular)
incident, I didn't cascade it anywhere - should I have? It's around -
should we? There's one or two who probably think, I know what to do
with this, I know how to learn from it and get my staff to learn from it to
show my staff that clinical governance is working; but not all of us
would'.

Whilst the Significant Clinical Incident Review reports are widely disseminated, they

do not appear on the agenda of certain key committees such as the Risk Management

Team meetings. Members of the Risk Management Team who are also part of the

Management Team will receive the report through the latter group but other members

currently find themselves out of the loop. How far down into the divisions the Review

reports are communicated is variable; some of the junior managers and practitioners

only became aware of the existence of this system at the Risk Management Workshop

held over 12 months after the first of these incidents had been investigated.

6.4.6 User Involvement

User involvement is another core element of the Trust's clinical governance framework.

The Development Plan contains a number of objectives under this banner and makes

specific reference to the work of the Trust Board and to the divisions. The

Development Plan preceded a review and report on user involvement in the Trust. This

report concluded that although there was enthusiasm for developing this important

component of the clinical governance agenda, there was no mechanism for sharing

good practice across divisions. Initiatives often relied on the enthusiasm of individuals

rather than being an integral part of service delivery. Based on this assessment, in
October 2000, the Trust developed an action plan to take user involvement forward.

The focus of this work is on three levels: care planning, service provision and development, and organisational development. Each of the clinical divisions was

required to identify an opportunity for piloting initiatives to address one or more of

these three levels; coverage has apparently been patchy and progress, overall, described

as variable. Although a number of initiatives are regarded as progressing well, one

division is yet to get any under way.

Operational monitoring of these initiatives has been undertaken by the lead manager;

however, there are no mechanisms for regular, formal reporting of the progress of these

pilots to the Management Team. Despite the fact that user involvement has been

devolved to the divisions and Divisional Managers are responsible for driving the

agenda forward, this key component of the clinical governance framework does not

appear as a regular agenda item on the Management Team meetings. Instead, updates

appear to take place on a more informal, one-to-one basis between individual managers.

Thus no mechanism has been established in the forum with explicit responsibility for

taking this agenda forward to enable collective monitoring of progress, discussion of

barriers to effectiveness and the institution of remedial action to address the current

lack of progress in these initiatives. Essentially, there is no corporate structure to drive

user involvement in the Trust other than the efforts of the User Involvement Lead. The

Lead has been trying to work with the staff on the front line but is moving more

towards trying to engage the Divisional Managers in an effort to develop an approach that is both top-down and bottom-up.

Although the User Involvement Lead has presented the topic to the Clinical

Governance Sub-committee and the issue of user input to this particular Sub-committee

has been under consideration for some time, little progress has been achieved in

establishing user representation on the Sub-committee, either in the form of service

users or the lead manager.

6.4.7 Appraisal and Professional Development

Monitoring the performance of staff and promoting life-long learning are key themes

within the national clinical governance agenda. The appraisal process is one of several

mechanisms proposed for highlighting and addressing performance and development

issues (Department of Health, 1999) and the Trust objective in this regard is as follows

(Internal Trust Document, 2000a; p6):

'By April 2000: To ensure all staff have regular appraisal and have
identified personal development plans which are linked to the business
plan'.

In December 2000, the Trust undertook a staff attitude survey; 2927 questionnaires

were sent out with payslips and 1170 were returned - a response rate of 40%. Of those

who responded to the statement 'my manager gives me regular feedback on how I am

doing in my job' - 47% agreed. Of those who responded to the statement 7 have a

Personal Development Plan (or Training and Development Plan) which has been

agreed with my manager" - 35% agreed. Clearly the Trust has some way to go in order

to meet its original objective. To facilitate this process, training sessions have been

provided for managers prior to appraisal of staff. There has apparently been an

increased corporate emphasis on the need to move forward with the appraisal system.

A number of managers interviewed now have this as a personal objective and

consequently are requesting training in this area.

Apparently there have been a number of discussions in a variety of arenas around the

challenges of delivering the targets set for appraisal and Personal Development Plans

(PDPs). Contributory factors are thought to include, in some cases, negative

perceptions around the purpose of appraisal:

'(I) think that managers tend to use it as a whipping tool rather than an
opportunity to sit down and discuss things... ... sounds as if we may have
some staff who think it is a telling off session rather than what it should
be which is completely the opposite'. (Senior manager)

For some managers appraisal is perceived as a time-consuming extra rather than an

integral component of the management function. Current training courses not only

provide information on the appraisal process itself but also recognise the need to

overcome some of the negative perceptions surrounding appraisal and CPD that

currently exist.

Prior to the staff survey, there had been a number of ad hoc initiatives to monitor the percentage of staff receiving appraisal; however, it is unclear how the progress of this

agenda is monitored on an ongoing basis, how a Trust overview is maintained or how

the development agenda informs business planning.

6.4.8 Communicating the Clinical Governance Agenda

During February 1999, the Trust undertook a series of road shows with the aim of

raising awareness around the emerging clinical governance agenda. These were held in

four locations and comprised leaflets, posters and short presentations on clinical

governance and clinical effectiveness. Staff also had an opportunity to try electronic

search facilities. Two hundred and seventy one (271) staff attended which represented

around 9% of the workforce. Feedback on this initiative was, apparently, complimentary in the main, although some staff would have appreciated more time and for the content to have been more specific. Some interviewees suspected that the

majority of those attending the road shows were managers rather than staff:

'The road shows were put on for the directorate - not well attended. It
was well publicised but I think people thought it wasn't important - the
managers came but they were told to come'.

'My hunch is the types of staff who attended were the more senior staff
who thought this is going to be something that my manager's going to
ask me about in my IPR rather than the untrained and the more clinical
staff who cannot as yet see how this is going to affect them when they go
and see Mrs X'.

A small number of articles have since been published in the in-house newsletter and the

Trust Annual Reports have included information on progress against components of the

Development Plan.

Since the early Trust-wide initiative, awareness raising has continued on a more

opportunistic basis with presentations by the Clinical Governance Lead to divisional

meetings and meetings of the local fora. In addition, where members of the Clinical

Governance Development Team have been active within a division, they have engaged

groups and individuals often on a similarly opportunistic basis. In reality, the

implementation of clinical governance has not been supported by a communication plan

and, in this manager's opinion, communication of the clinical governance agenda is

essentially left to the Clinical Governance Lead:

'Everyone will leave it to (the Clinical Governance Lead) as the lead to

think how do I make sure this (clinical governance) issue gets
communicated thoroughly across the Trust; that's how it will be left.
(Clinical Governance Lead) might say I think it will be a good idea if we
have a road show and that will happen. If she said today, I think it will
be a good idea to send out an update to staff in pay packets, that would
happen; but I don't think as a separate entity there is anybody sitting
down and saying this is a very important issue, what should our
communication strategy be on it?'

Although the responsibility for external communication has been assigned to a named

manager, the responsibility for internal communications is unclear; whilst there is a

framework for external communications, a corresponding framework for internal

communications has not been developed.

The Clinical Governance Development Team is currently reviewing the issue of

awareness raising in the Trust and is considering options for taking this forward. One

approach is the development of a standard presentation to provide managers and staff

with consistent, core messages around the clinical governance agenda that can

subsequently be adapted to reflect the needs of local audiences. Also under

consideration is the possibility of developing a resource/training package that will have

a Trust-wide relevance. In the meantime, a workshop is scheduled to take place to look

at the roles and responsibilities of the groups/committees and individuals; the target

audience is members of the Clinical Governance Sub-committee, the local clinical

governance fora and Divisional Managers. Whilst this is a positive initiative, it will

only connect with a small proportion of the organisation and there is currently nothing

planned for the wider audience.

The apparent gap in disseminating clinical governance is also highlighted by the arrangements for distributing the minutes of the Clinical Governance Sub-committee which, as mentioned earlier, tend to go to the Trust Board. The formal

Management Team membership does not routinely receive minutes and neither does the

Risk Management Team; latterly, however, the chairs of the divisional fora have been

included in the circulation. Some members of these groups will receive this

information by virtue of the fact that they are also on the Trust Board but there is a lack

of consistency in the dissemination process. Under the circumstances it was not

surprising to discover that levels of awareness and understanding around the clinical

governance agenda were variable. These trained staff, when asked to say what clinical governance meant to them, had some difficulty in explaining the concept:

'I don't really know anything about clinical governance although I feel I ought to'.

'Clinical governance - what is it? Is it about enlarging on topics and


finding out weaknesses? I don't really know'.

Some staff saw clinical governance as lists of elements such as evidence based practice,

clinical supervision etc; others articulated the agenda as a more integrated approach to

quality improvement:

'Government initiative to maintain, improve and monitor standards of


practice and care in the NHS - response to things that went wrong'.

'Responsibility to clients/patients to provide best practice possible and


research based... ...allowing staff to develop and discuss best practice
via clinical supervision and personal development...... initiation of NICE
and clinical excellence'.

'Providing a better service, value for money, more accessible, meets


health needs'.

'Clinical governance is everybody's concern. It is an umbrella term for


areas of practice which can be continuously reflected upon. For
example, team working, communication, to see if improvement is
required. Can also provide frameworks and strategies for quality

improvement'.

'A framework aiming to deliver a quality, standardised service to


patients; using evidence-based practice is a vital part of that'.

Many of the professionally qualified staff interviewed perceived their knowledge to

have been gained through professional publications, associations or professional

training rather than from Trust-based initiatives. The quotations above suggest quite a

range in trained staff awareness; in contrast, managers specifically commenting on staff

awareness generally perceived this to be on the lower end of the knowledge spectrum:

'Will be some (staff) who have a fair understanding and others who are
completely in the dark'.

'Clinical governance were two words that were talked about a lot but
their understanding of clinical governance was "it isn't going to affect
me; it's not going to have an impact on me, why should it impact on my
practice; I think that I do deliver the best service I can "... ...you have an
individual responsibility but I don't think that has been taken on board
either; so if you like they are actually saying - well clinical governance,
that's (the manager's) problem because she is the manager... or clinical
governance, well that's the PCG clinical governance forums
problem...that's not my problem, not my area because I have no
responsibility'.

'There is no point doing (a questionnaire on understanding),...... they do


not understand clinical governance, it does not, as yet, inform their
practice'.

'We all know about clinical governance (managers) but whether the staff
at ground floor level actually know... ... if you walked up to a member of
staff and say how is clinical governance affecting your work - I don't
know whether they would know exactly what you are talking about'.

Despite the above perceptions of their staff, it seems that few managers initiated any

deliberate awareness raising initiatives within their own sphere of influence. Whilst

staff would likely benefit from more input, it would apparently be welcome to some

managers too:

'There has been no awareness raising around the agenda or training for
clinical governance or CQI. Would welcome some myself - feel as if I
am fumbling in the dark'.

'It (clinical governance) is still woolly. One of the problems about the
adoption of it is there is a lot of work already been taking place but its
not tangible. Can't see it and feel it on a daily basis. Having looked at
the Trust action plan on clinical governance my first response was - how
on earth do you apply this to practice? Now I am sure I am not alone
there...... is it tangible, is it real; it will only make an impact if it's real
and tangible...... '.

6.4.9 Monitoring and Reporting Progress

The establishment of monitoring and reporting systems designed to capture clinical

governance activity within the Trust has been an evolving process. A template for

quarterly reporting has been introduced which collates data from a number of existing

quality systems into a single report format. Thus, data and information from Risk

Management, Control of Infection, Significant Clinical Incident Reviews and progress

against the Development Plan objectives has been presented for some time. More

recently, information on Complaints and Staff Sickness Absence has also been included in

an ongoing effort to present the wider and expanding picture.

The availability of data in this format is relatively recent but has already highlighted

trends within and across services. This has brought managers and staff together to look

at specific issues and, where appropriate, formulate corrective action which may be

appropriate for wider application across the Trust. This has been less evident in the

past when there was a feeling that the geographical dispersal and the diversity of

services often precluded such a joint approach; however, collective analysis of the

collated data has, in some cases, identified common themes indicating that

collaboration might be appropriate.

A schedule has been developed which indicates the timing and sequence of reporting.

Reports by division are to be considered at the Management Team and the Risk Management Team. This data is then consolidated into a Trust report for consideration

at the Clinical Governance Sub-committee prior to being presented to the Trust Board

where it is received in the public part of the meeting.

Another mechanism for determining progress has been the development of an explicit

reporting framework to inform the Trust clinical governance annual report for 2000-

2001. This provides a clear indication of the areas which should be receiving divisional

attention and action as clinical governance is implemented. The framework has been

developed to ensure consistency in the annual returns in order to facilitate collation.

However, it could also provide a valuable mechanism for reporting progress against

objectives in-year rather than at year end, which would considerably augment the information currently available to local groups, to the Clinical Governance Sub-committee and ultimately to the Trust Board.

Although the above constitutes important progress, it has already been highlighted that

there are still gaps in the clinical governance dataset which means that those receiving

the reports are unlikely to have a full picture of clinical governance within the Trust.

Whilst the reporting process is evolving, this must be appreciated so that the Sub-committee understands the extent to which it may discharge its assurance function

given the information at its disposal.

The IM&T implications of supporting the delivery of clinical governance are far

reaching; some of this diversity has been highlighted in the Clinical Governance

Development Plan. The Trust has developed an Information and Communication

Technology (ICT) Strategy to take forward the national and local agendas for IM&T.

The strategy is supported by the ICT Implementation Plan and progress is monitored

through the IM&T Development Board. Updates on progress against the objectives

outlined under the 'Information' component of the 'Clinical Governance Development

Plan' are incorporated into the quarterly clinical governance reports; however, the

objectives themselves are broad and give little specific information around the progress

of the IM&T agenda. Apart from this quarterly report, it is not clear how the IM&T

agenda explicitly informs the work of the Clinical Governance Sub-committee and vice

versa given the nature of the existing reporting framework and the absence of a

specialist as part of the Sub-committee membership. At times, verbal updates are

provided by the Clinical Governance Lead, who is a member of the IM&T Board; however, it seems that this input is not systematic but tends to represent an individual

transfer of information rather than a formal monitoring and reporting mechanism.

6.5 PEOPLE

6.5.1 The Human Resource

At an aspirational level, the 'people' element within the clinical governance agenda,

both nationally and locally, is an important key to the achievement of quality in health

care. As indicated in Chapter 3, the nature of services, whether in health care or

commercial settings, is such that the quality of the service offering is highly influenced

by the individuals engaged in the delivery process. The guidance document issued in

1999 (Department of Health, p6) explicitly reinforces the connection between quality

and the workforce:

'Closing the gap between the present service and the desired new level
of quality will often not be possible without addressing workforce
issues'.

6.5.2 Linking HR and Clinical Governance

The document (ibid) continues by stressing the importance of a local human resource

(HR) strategy to ensure that the connections between the numerous strands are made to

facilitate an integrated approach to HR's contribution to quality improvement in general

and the delivery of the clinical governance agenda in particular. Despite the explicit

reference to HR in the Trust clinical governance framework and the identification of

key objectives in relation to this, the realisation that HR underpins much of this agenda

has been rather slow to dawn on some managers and this important element has not

received as much attention as it might have:

'Essentially what you 're talking about is quality where, clearly because
you need to do that through the staff, then there will be some HR
implications but not all the HR initiatives that have come from the
Centre (are about clinical governance)... ...I suppose you could say that
staff involvement is about clinical governance because there is a
requirement to try and involve staff in the delivery of health
care...... now you've said that there might be more overlap than I had
even thought about before... ...I suppose at the end of the day, if you are
trying to do something through your staff then everything you do (in HR)
is going to have an impact on the quality of health care that they
actually deliver - but whether or not you could lump it all under the
umbrella of clinical governance or that would make it too big, I don't
know. You might even talk of harassment I suppose, policies on that; if
someone is feeling harassed they are not delivering a good service are
they, they are not improving their work - but would you put that under
the clinical governance umbrella?... ...I suppose clinical governance is
something that affects everybody, it's the whole organisation so in some

ways possibly it would'.

'There hasn't been as much connection with HR as one would want -
because we haven't particularly focused on that'.

At the close of the fieldwork period, the HR strategy was still in draft; the explicit and

perhaps only reference to clinical governance is made in the section 'Quality

Workforce' (Internal Trust Document, 2000b). In this section there is a brief paragraph

documenting that a clinical governance lead has been appointed, a Clinical Governance

Sub-committee established and that performance review has been introduced.

However, there is little to link clinical governance with the rest of the HR strategy and

it is not clear from the document how other elements, such as clinical supervision, are

specifically related to the clinical governance agenda. The implementation of the HR

strategy will be devolved to the divisions; however, it is unclear how this process will

be monitored.

The issues discussed above reflect the devolved nature of the HR function, the role of

which appears primarily to be one of providing operational advice and support to the

divisions. Accountability for delivering the overall HR agenda is unclear. Of particular

significance in terms of clinical governance is the lack of HR input into the Clinical

Governance Sub-committee - there is no mechanism for regular updates to the Sub-committee, although some aspects of the HR agenda are included in the quarterly

reports. Sub-committee membership has not included a representative from the HR

function either at the outset or later on when the membership was expanded, which

seems rather a surprising omission given the centrality of the human resource in the

delivery of the clinical governance agenda.

6.6 ORGANISATIONAL CULTURE

6.6.1 The Need for Culture Change

A fundamental objective of the national clinical governance agenda is the introduction

of a new culture into the NHS; one that is open and participative, demonstrates a

commitment to quality, works with users and carers, supports multi-disciplinary team

working and so on (Department of Health, 1999). To reflect its key role in the national

picture, the Trust has incorporated 'culture' into its clinical governance framework. In

fact, culture takes prime position as the first element in both the 'Clinical Governance

Report' and the 'Clinical Governance Development Plan', two key Trust documents

referred to throughout this chapter.

In relation to culture, the 'Clinical Governance Report' highlights some of the barriers

to change within professional organisations and alerts the reader to the need for culture

change but stops short of stating explicitly what this change might entail relative to the

current culture of the organisation. From the excerpt below, culture appears to have

been conceptualised as a variable which must be addressed so that the rest of the

proposed change may follow on:

'Change implementation in the NHS is often seen as threatening to


health professionals. The concept of clinical autonomy and clinical
freedom, fear of failure and blame and often a lack of comparative
information and the non-identification of effective levers for change all
mitigate against the establishment of a culture of change within health
service organisations. Changing that culture is difficult but it is a pre-
requisite to the delivery of clinical governance. Once the culture is
changed, then challenging, reviewing and altering practice becomes
second nature'. (Internal Trust Document, 1999; p8)

6.6.2 Culture Conceptualised

The concept of culture has been interpreted within the Trust clinical governance

framework in terms of: a Trust-wide Commitment, Board-level Focus and Clinically

Focused Management Strategies. These have been translated into more tangible

objectives such as the development of an implementation plan for clinical governance;

the establishment of structures; and the development of a number of the systems

described earlier in this chapter. A great deal of the work undertaken by the Trust and

described throughout this section could be related to the cultural variable in some way.

For example, in setting up a system of reporting for clinical governance activity, the

Trust is indicating how one aspect of a 'reporting culture' will look in very practical

terms. Similarly, with the introduction of Significant Clinical Incident Reviews, a

powerful signal is being sent to the organisation around 'the way we do things here' -

indicating the sort of cultural shift that may need to take place in some areas in order to

integrate this new approach.

Despite the central role given to the cultural element, the data does not suggest that the

initial baselining work undertaken at the outset included any systematic assessment of

the existing culture per se or that of its various subcultures; for example: professional

cultures, non-professional cultures, an improvement culture, an incident reporting

culture, a blame culture, a fair culture, an open culture, a culture of trust and so on. As

a result, in documentation and conversation, culture is generally referred to in abstract

rather than specific terms but there are exceptions to this as indicated by the initiatives

described above and the change in focus expected of the Trust Board; this latter will

now be considered.

6.6.3 A Culture of Trust

One of the policy objectives surrounding the introduction of clinical governance is to

ensure greater clarity around corporate accountability for quality. The intention is that

accountability will ultimately rest with the Trust Board, with the chief executive identified as

accountable officer on its behalf. Although the local documentation does not state the

accountability issues as explicitly as this, the Report suggests that, in order to deliver

the agenda, the Board will have to change the way it works and:

'... ...move away from (a culture) that is financially driven and focuses
predominantly on administration and contractual requirements to one
that has a clinical focus'. (Internal Trust Document, 1999; p8)

The Development Plan outlines specific objectives which identify the Trust Board as

having a key role in their achievement; these address such fundamentals as the

development of a clinical governance strategy and implementation plan. In addition,

one of the centrally mandated objectives for year one of implementation was the

clarification of reporting arrangements to the Trust Board. This led to the development

of the consolidated reports referred to in an earlier section which are submitted to the

Board on a quarterly basis after consideration by the Clinical Governance Sub­

committee. As a result of these reports, some Non-executive Directors believe that

Trust Board meetings now devote more time to the quality agenda than previously:

'There has been a shift in our whole approach - when I first came, half
the Board meeting was taken up with finance but I think now it (clinical
governance element) is still not big enough but clinical governance is
becoming much more important for us (the Board) '.

These perceptions are particularly interesting as the minutes of the Trust Board

demonstrate that finance and activity are reported monthly whilst clinical governance

reports are quarterly; also the minutes do not reflect a high level of discussion/debate

around the clinical governance submissions. Apart from initial briefings, there has

been no real development work with the Trust Board so that, as a body, it may clarify

its role and identify the information which will enable it to fulfil its new

responsibilities. Instead, as with the Clinical Governance Sub-committee, information

seems to be passively received and the gaps in the dataset appear to go unchallenged.

Nevertheless, there is a belief that systems are in place, albeit with a recognition by some of the Non-executive Directors that determining the effectiveness of these is another

matter:

'Optimistic that the systems are rigorous but to be frank I wouldn't
know'.

'Believe that it happens because there is good management'.

The lack of detail in relation to system effectiveness seems to be compensated for by

the sense of trust surrounding the Executive Team as individuals and also as a

collective as the following quotes demonstrate:

'(Lack of detail) I suppose I have implicit trust in the directors here;


that is a value judgement of mine. I suppose if I was (a non-exec) in a
Trust where I didn't trust the equivalent of the Clinical Governance
Lead and Chief Executive then that would be a problem wouldn't
it?...... (the agenda is) devolved down and fed back up and the board
sees the headlines, believes it happens because of good
management... ...question (how robust are the systems in place) shows
how much I rely on my faith in top management; in a way I suppose that
shows up a weakness in a sense because one of my jobs should be
making sure that the top management work through the system, because
I have such a good feeling about this Trust I would feel that those
systems are in place - my feeling would be that they are, I trust that they
are because we have been looking at incidents and picking things up'.

'(The executive team) it's an established team, people had been working
together for a long time, trusted one another... ...working with this
mature team, there are other things you don't have to worry about so
much about because systems are up and running and they run well... ...if
I have any concerns or worries I would ring up (Clinical Governance
Lead) and ask,...... a lot of respect for her abilities'.

The high level of trust suggested by the sentiments above is not confined to the

Executive Team but also extends to other Non-executive colleagues who play a more

direct role in the clinical governance agenda:

'I've got to be honest, I am not on the sub-committee and I've got


enough on so I haven't pushed myself...... why stick your nose in where
you are not required yet'.

'I would get much more involved if I wasn't confident of those two
people (Trust Chair and Chief Executive)... ...we get regular feedback
anyway and we get feedback on clinical incidents for example - so I am
comfortable'.

'I know that (x) has got really involved in it and have the highest regard
for (x) abilities and consequently there doesn't seem to be a great deal
of point us poking our nose in - you are right to point out that we have
collective responsibility for it and the reports we get from (x) and
(Clinical Governance Lead) to the committee keep us sufficiently
informed so we don't have to get too involved'.

'We all trust each other too much'.

Thus the culture of trust appears strong at the corporate level at least; but trust can also

be a heavy burden particularly for those further down the hierarchy who are committed

but do not necessarily have the capacity to deliver; as one junior manager explained:

'He (line manager) knows it doesn't matter if it's in my job description,


if he suddenly had something to be done by the end of the week,
something that he could pass it on to me, then I would pick it up and run
with it. That's how we work. If we didn't we'd never get the
improvements to practice that we want to see. He has said - "I could
employ several people full-time for 12 months and give them one of your
objectives each and it would be a full-time job but there are things we
need to do and we haven't got the bodies to do them"... ...you see we do
it because we are committed to the service as well. That's how they
have got people isn't it'.

6.7 CHAPTER SUMMARY

The aim of this chapter has been to present the clinical governance initiatives

undertaken by the Emerald Trust that correspond to the 'what' or the content element of

implementation. The whole system framework adapted from that of Miles (1997) has

proved to be a useful mechanism for organising the results coherently. The use of the

framework also highlights the fact that each aspect of the whole system has been

addressed in some way by the Trust. Whilst the initiatives described here are an

important step forward for the organisation, a number of gaps have been identified in its

conceptualisation of clinical governance. Although there has been an element of

interpretation within this chapter, the main discussion of the significance of the Trust's

approach will be deferred until Chapter 9.

CHAPTER 7

RESULTS - CLINICAL GOVERNANCE IMPLEMENTATION -


CORPORATE ACTIVITY: PROCESS

7.1 INTRODUCTION
Given that the previous chapter has addressed issues of design and content, the aim of

this current chapter is to describe the process elements of clinical governance

implementation within the Emerald NHS Trust. This is not intended as a chronological

account of process initiatives; instead clusters of activity are presented which have

either been inductively generated from the data or are reflective of elements contained

within the Miles change management framework (1997) (Appendix 2).

7.2 LEADERSHIP AND MANAGEMENT

As stated earlier in the introduction to this thesis, one of the key steps to be undertaken

by all NHS Trusts by April 2000 was the establishment of leadership arrangements for

clinical governance. The advent of clinical governance means that the Trust Board now

has an explicit responsibility for the quality of clinical services and the national policy

requires the chief executive to assume the role of accountable officer (Department of

Health, 1998). The Emerald Trust was ahead of the national deadline in appointing its

Clinical Governance Lead, which provided her with an opportunity to shape and lead

this agenda from the outset. Whilst the Chief Executive is a member of the Clinical

Governance Sub-committee, much of the development work around this initiative has

been taken forward by the Clinical Governance Lead.

The Trust philosophy surrounding implementation suggests that some elements of the

clinical governance agenda will need to be driven from the top whilst others will be

devolved for local development; therefore, it will be important for people with the

responsibility for taking forward clinical governance to know and understand what they

are accountable for. This transactional approach is set within what appears to be a

predominantly transformational style of leadership; a feature of which is the constant

articulation of Trust values such as a commitment to service. This style seems quite

pervasive and is generally espoused throughout the management hierarchy.

Traditionally, within the Trust, there has been less of an emphasis on systems and

processes and more on organisational culture; summarised by one Non-executive

Director as 'strong on culture, weak on systems or perhaps weaker on systems'. One

outcome of this is that, being trusted to deliver, Divisional Managers do not appear to be directly performance managed and, until recently, have not been appraised on a regular

basis.

The Clinical Governance Lead also articulates these transformational aspirations and

demonstrates transformational attributes; however, there is also a strong sense of the

transactional in her style which seems more apparent when it is perhaps culturally

acceptable to act in this way. The transactional element is very apparent in some of the

new systems described in the previous chapter. Although this is often couched in terms

of 'capturing the learning', the reality is that formal feedback and control mechanisms

are being incorporated to determine whether the required change is, in fact, being

implemented. In the areas over which she has direct control, the Clinical Governance

Lead appears to drive the clinical governance agenda forward; in other areas she must

rely on influence and persuasion but seems willing to move to a more prescriptive and

directive style if required in order to achieve the objective.

7.3 CONFRONTING REALITY

Part of the early work in the Trust consisted of mapping the existing system. This does

not appear to have been documented explicitly in a public format (at least, a copy was not seen by the researcher) and the process and scope of this remain rather unclear.

There does not appear to be any explicit evidence of benchmarking against other Trusts

although a routine external review of the Trust clinical governance arrangements was

undertaken. The findings from this review and from the internal mapping have

apparently informed the development of the 'Clinical Governance Report' outlined in

the previous chapter. Although the Report alludes to a number of issues relating to

existing systems within the Trust, the document does not provide a thorough overview

of the current state of clinical governance or its component parts. Given the experience

and the extensive local knowledge of the Clinical Governance Lead, it is likely that,

with or without a written document, she has an overview of the performance of the key

building blocks such as clinical audit, risk management and so on. Nevertheless, it is

not evident from the minutes of either the Trust Board or Clinical Governance Sub-committee how this has been shared in any detail with a wider audience, or, in fact,

how this assessment has explicitly informed the development of the objectives outlined

in the Development Plan.

7.4 CREATING A VISION OF CLINICAL GOVERNANCE

Another early activity was a visioning exercise undertaken with the fledgling Clinical

Governance Sub-committee at its second meeting in April 1999. Members of the group

were asked to consider what clinical governance might look like in the organisation two

years hence. It is not clear from available documentation how this visioning initiative

was replicated in other arenas but the process preceded the publication of the 'Clinical

Governance Report' and, as highlighted previously, the draft publication of this

document was apparently followed up by discussions with a variety of divisional

groups to obtain their input.

7.5 PLANNING FOR IMPLEMENTATION

Although the process and scope of the baseline assessment do not appear to have

been documented in detail, the culmination of this early activity was the 'Clinical

Governance Development Plan'. The objectives outlined in the Development Plan

were a mixture of end state and change process; for example: the achievement of CNST

level 2, the development of a system for managing significant clinical incidents, the

creation of the Clinical Governance Development Team, and the introduction of

structures and systems to support the implementation of clinical governance. Thus, the

Development Plan appears as a mixture of the 'what' and the 'how'. It is interesting to

note that the first objective assigned to the Trust Board, Clinical Governance Lead and

Divisional Managers was the development of an implementation strategy by February

2000 - one month after the publication of the Development Plan itself. This had not

been achieved either before or during the fieldwork period; thus, the change process

which needed to take place to deliver the clinical governance agenda was not made

explicit, nor were the key objectives embedded in a comprehensive, documented

implementation plan. Despite the fact that the implementation plan was an explicit

objective, this particular approach was apparently not the norm within the Trust as this

senior manager indicates:

'We don't go big on implementation plans and documentation. That's


not the way we do things here and you (the researcher) won't change
us'.

Certainly the approach to the draft HR strategy and clinical audit seem to bear this out:

'(Implementation plan to accompany HR strategy) - not something I


would dignify with the word implementation plan; more a sort of lets get
on and do this bit although the business plan will pick up aspects of it
and will have time-scales and responsibilities...... (picked up by
divisions) - I suppose in a very opportunistic, incremental way rather
than any sort of plan saying we must incorporate that bit into what we
are doing... ...I don't think there is a very clearly defined way of getting
them (Divisional Managers) to implement it, it is rather opportunistic'.

'We said that all departments must do clinical audit but still left the
topics to them. Then moved on to say after some time, you can do
anything as long as it fits in with the Trust's objectives... ...there is more
emphasis now on the corporate plan'.

Other than a date for the completion of individual objectives, the Development Plan

does not give a sense of the priorities for action apart from the fact that some objectives

are scheduled for completion in 2000 and others in 2001. Within these parameters

there is no indication of the sequence in which the objectives need to be delivered or

the mechanism for monitoring progress.

7.6 CREATING AND REALLOCATING RESOURCES

A reallocation of resources has taken place to support the appointment of an Executive

Director into the Clinical Governance Lead post. The fact that the Lead has an

executive role and clinical governance forms the main, although not only, part of her

remit was seen by some Non-executive Directors as a significant investment in the

clinical governance agenda:

'A lot of the resource has gone into (the Clinical Governance Lead)'.

'Having a director almost working full time (on clinical governance) has
given clinical governance more kudos'.

Additional investment has also been made available for other new appointments, for

instance, an R&D manager, and library staff. Resources have also been allocated for

equipment; training has been provided in evidence-based literature searching.

Developments are also taking place, in parallel, around IM&T although it is not clear

how specific objectives in this area have been integrated with the clinical governance

agenda.

Whilst the Development Plan outlined the objectives to be delivered, these have not

been explicitly costed. This not only has implications for the operationalisation of the

clinical governance agenda generally but also for decision-making with regard to the

resourcing of this activity. Without a costed plan, it is difficult to see how priorities for

action are decided or how they will be funded given that some elements will not be cost

neutral; for example the provision of education and training and the setting up of the

Clinical Governance Development Team.

7.7 FROM VISION TO OPERATIONS

The majority of the objectives outlined in the Development Plan identified the

Divisional Managers as, or amongst, the key stakeholders and, in the absence of a

clinical governance group at the operational level, the Management Team was expected

to pick up the Plan and take it forward. In addition to the general objectives for the

Trust, there are a number of specific ones aimed at the divisions, all of which have year

2000 time-scales or are classified as on-going. However, despite the fact that the rapid

appraisal took place 9-10 months after the Development Plan had been approved by the

Trust Board, the progress in the divisions was found to be highly variable. Although

the Clinical Governance Lead spent time discussing the implications of the

Development Plan with individuals, no development work was undertaken with the

Management Team as a collective. The rationale for this is described below:

'There has been no work done with them (Divisional Managers), we


suppose they know what to do.. ... send them the document and let them
get on with it...... there was an expectation that they would go out and
find out more what clinical governance is about'.

Given that the Divisional Managers had the Development Plan outlining their clinical

governance objectives, the Clinical Governance Lead focused on a number of

organisational design components - the 'what' elements outlined in the previous chapter.

Early efforts were made to bring existing systems such as risk management into greater

alignment and also to introduce new systems such as Significant Clinical Incident

Review; this latter serving, perhaps, as the most significant lever in the attempt to

re-shape the corporate culture. The rationale for much of this alignment has been couched

in terms of 'capturing/sharing the learning' - in particular trying to identify the root

causes of problems before finding solutions and making changes to the system as

appropriate.

Generally, progress in relation to alignment appears to have been incremental.

Between-system linkages are often dependent on the multi-meeting attendance of the

individual rather than routine information flows. Whilst the Trust is endeavouring to

bring a greater emphasis to the identification and correction of the root causes of

problems, a whole system approach is yet to develop. An example of this relates to the

Risk Management Team. There has been a considerable amount of work undertaken to

bring together the risk agenda; and yet, no thorough assessment of the risk management

process per se has taken place. Also, there has been little attention given to the

development of an appropriate risk management structure in support of the growing

risk management agenda.

7.8 ENERGY FOR CHANGE

Areas which have received the direct attention of the Clinical Governance Lead, such as

the Risk Management Team and Significant Clinical Incident Review, have been perceived

within the Trust as having made progress. Other areas such as Human Resources,

Training and Development have received less direct attention from the Clinical

Governance Lead and there appears to be less in the way of explicit integration with the

clinical governance agenda; the Lead's attention and energy having been focused on

different areas. There is a sense that, in some cases, the progress of clinical governance

is relative to the capacity of the Clinical Governance Lead to drive this agenda forward.

Although progress has undoubtedly been made with specific initiatives, there is little

evidence that this is the result of a total system approach. Miles (1997) argues that in

order to achieve the latter, a process architecture consisting of a set of mutually

reinforcing mechanisms must be deliberately created to support large-scale

organisational transformation (Table 7.1). The Trust activity which reflects each of

these elements will now be presented although ordered differently from the list below.

Table 7.1: Process architecture

• Education;
• Involvement;
• Co-ordination;
• Feedback;
• Communication;
• Consulting support.

Miles (1997)

7.8.1 Education and Involvement

Education

Early in the development process, a workshop had been planned for the Trust Board

members; unfortunately, this was postponed and, although rescheduled, the new date

was effectively two years after the publication of the Trust Development Plan. The

Trust Chair and two of the Non-executive members of the Clinical Governance Sub­

committee have attended the occasional regional workshop/seminar on clinical

governance. There have been no specific initiatives for the senior teams to ensure that

members have a similar knowledge base in areas such as: the concept of clinical

governance, the implementation process within the Trust, the roles and responsibilities

of managers and their staff, monitoring and reporting arrangements.

A number of more widely targeted initiatives did take place and these included the

early road shows, training sessions focusing on searching the clinical evidence base, a

workshop on clinical risk management and appraisal. Since appraisal has been

included in middle/junior managers' objectives, demand for training in this area appears

to be outstripping supply. In sharp contrast are the training sessions around incident

reporting; some of which have had to be cancelled because of the lack of uptake.

Although staff are starting to be appraised and complete PDPs, there has been no large-

scale assessment of training needs in relation to the clinical governance/change agendas

per se.

Knowledge for implementation is not just about dealing with the specifics of clinical

governance. An important component of the National Clinical Governance Support

Unit programme deals with the management of change; in contrast, this area does not

appear to have been addressed by the Trust. The level of manager knowledge and

know-how in terms of change management was not assessed as part of this research;

however, it is interesting to note that there has been no demand from the field for

education and training in this discipline. On a related issue, neither does it appear that

the formulation of the Development Plan was informed by an explicit change

management framework.

During the research process, it was apparent that a wide variation in the awareness and

understanding of clinical governance existed and this could be observed at all levels of

the organisation. Trust Board Non-executive Directors who were not members of the

Clinical Governance Sub-committee cited briefings from the Clinical Governance Lead

and also the NHS Confederation as the main sources of their clinical governance

knowledge. Many of the professionally qualified staff interviewed perceived their

knowledge to have been gained through professional publications, associations or

professional training rather than from Trust-based initiatives.

Involvement

The process leading up to the development of the Trust Clinical Governance Plan

provided senior people with a number of opportunities to shape this emerging agenda.

The Clinical Governance Sub-committee members were engaged in an early visioning

exercise. Discussions were held with the Trust Board and Management Team; in

addition, a variety of unspecified groups within the Trust also took part in the shaping

process.

The phrase 'clinical governance is everybody's business' is an oft-expressed cliche;

however, perhaps less often heard is the fact that the extent of a person's knowledge base is an

important determinant (albeit one of a number) of their capacity to become

involved, whether this is on an individual basis or as part of a group. In relation to the

clinical governance agenda, there have been a number of opportunities for manager

involvement: shaping the Trust approach, delivering the agenda and modelling

behaviour. Involvement may take place on a number of levels. Whilst the Clinical

Governance Sub-committee had the opportunity to shape the content at the outset, for

others within the Trust involvement has been a matter of commenting on what has

already been drafted. This latter situation constitutes a very different level of

involvement, the extent of which depends on how far into the process these discussions

have taken place.

A lack of understanding of what is expected might result in non-engagement; an

example of this arises in relation to the Significant Clinical Incident Review process. A

manager at a separate site from that at which the incident occurred took no action on

receipt of a copy of the report as she was unsure about what to do with it. Thus, the

findings were not shared with the manager's own staff and no assessment was made of

the local situation in light of the report's findings. Whether this was an isolated

occurrence was not assessed but the individual referred to here manages a significant

number of front line staff; they might have heard of the incident through the 'grapevine'

but not, apparently, through any formal process.

One observable aspect of involvement is the modelling of desired behaviour, an

important role for managers in particular. Whilst two of the Divisional Managers

demonstrated a comprehensive grasp of the agenda and appeared to be taking it forward

proactively, another colleague had apparently initiated little in the way of deliberate

activity with regard to clinical governance implementation. In addition, although some

of the key components of clinical governance appeared on the agenda of Management

Team meetings, the progress of the implementation process itself was not a

standing/regular item for update/discussion. It was not surprising therefore to find this

non-action replicated at the front line and the quotes from clinical staff cited in the

previous chapter seem to confirm this.

Whilst understanding might be adequate at an abstract level, it appears that translating

this into individual behaviour is more problematic. If the desired behaviour is not

modelled at the senior level, the chance of others moving forward regardless appears

limited. One manager, when asked why she had not taken the agenda

forward herself given the lack of action from her own manager, responded 'it hadn't

been high on my agenda' and that clinical governance didn't feature in her objectives.

Interestingly, the Trust has apparently experienced a huge demand for training in

appraisal since it has been incorporated into individual IPRs. This suggests that one

mechanism for securing involvement is to translate corporate objectives into personal

objectives and assess performance against these as part of the subsequent appraisal

process; in a sense it seems that 'what gets measured gets done'. In reality, neither the

Divisional Managers nor the middle/junior managers have been performance managed

on delivery of the clinical governance agenda per se. In fact, systematic appraisal

seems to have been as much a new experience for the Divisional Managers as it has

been for their staff. Other than the requirement to appraise their staff, a number of

middle and junior managers reported that they had no clinical governance objectives in

their personal objectives.

The Clinical Governance Lead appeared to be well aware of the spectrum of Divisional

Manager involvement in the clinical governance agenda and tended to spend time on a

one to one basis with those who were making less progress. In this way, she tried to

clarify what was required and even facilitate the process; however, her approach

appeared to become increasingly directive if action was not taken. This pattern is

apparently not unique to the implementation of clinical governance but, as indicated

earlier, there are similarities between this and the approach adopted in the

implementation of clinical audit - devolved responsibility and the expectation of local

interpretation in the first instance but becoming increasingly prescriptive and directive

in the event of sub/non-delivery.

Generally, involvement does not just happen but requires some sort of vehicle for its

achievement. One of the key mechanisms is structural and the efforts of the Clinical

Governance Lead have ensured that there are a number of new groups; some existing

groups have an expanded membership and taken on a new role whilst others, such as

the Trust Board, have new responsibilities. There is something new in each of these

groupings whether it is membership or remit. However, prior to or during the

fieldwork period, there did not appear to be any explicit investment in developing and

increasing the effectiveness of these groups through attention to team building, training

around the role and responsibility of the chair and of group members, exploring ways in

which the group will function or how individual members will make a contribution. In

fact, apart from the early 'brainstorming' in the Clinical Governance Sub-committee, there

does not appear to have been much in the way of 'time out' for these very significant

groups (including the Trust Board) despite their varied and sometimes large

membership. These are mostly high level groups, but if clinical governance is to be

everyone's business, it is equally important to develop structures lower

down the hierarchy to enable front line staff to become involved other than through

their individual clinical practice. Despite this, as we have already seen in one particular

division, the main group forum for staff is the staff meeting and clinical governance has

not featured consistently on these agendas.

The degree of involvement for some managers also appears to be a function of time and

feelings of job security; both of which seem to be somewhat stretched:

'These people (managers) have also been very busy and are now not
confident about what their jobs are going to be in a year's time.'

In reality, in areas where clinical audit, risk management activity, appraisal and so on are

not already taking place, undertaking them anew will naturally represent an addition to

the existing workload. The issue of 'time' was recognised in the development of the

Trust approach and one of the objectives in the Development Plan concerned an

assessment of organisational capacity. Whilst initial scoping work was apparently

undertaken, there does not appear to be evidence of a formal, systematic assessment of

capacity having taken place across the Trust and, in the absence of an implementation

plan, it is perhaps a question of 'time for what?' For instance, although there is a sense

that people believe more time is needed to deliver clinical governance, this does not

seem to take into consideration the time spent on activity which may be inappropriate

as this manager observed:

'There (may) be practices going on which are ineffective and if staff


spent some time looking at their practice then they wouldn't be spending
their time doing ineffective things'.

The lack of an explicit assessment of capacity has implications for the allocation of

both human and financial resources for the delivery of the change agenda; however,

there does not appear to have been any deliberate phasing of objectives in line with

existing resources. Any phasing seems more in accordance with the energy of

individual managers which is naturally susceptible to highs and lows.

Scheduling time for clinical governance-related activity appears problematic for some

managers at all levels. Day to day activity with shorter deadlines appears to be

displacing the development of clinical governance; with the pressure to deliver the

routine work, clinical governance seems to be slowly sinking lower down the 'to-do'

pile. When asked directly about this, one manager answered:

'Oh gosh, yes. Clinical governance was not performance managed... '.

Low involvement does not appear to be a 'hearts and minds thing'; it is difficult to argue

against spending time on quality per se but it seems a function, to some degree, of

blockages around knowledge, uncertainty around future employment and the time

factor.

7.8.2 Co-ordination

The previous discussion has touched on the facilitation role played by the Clinical

Governance Lead but she has also served as the main mechanism of co-ordination of

clinical governance activity across the Trust. As such, it is likely that she has the most

comprehensive overview of the progress of clinical governance implementation within

the organisation. The Terms of Reference of the Clinical Governance Sub-committee

suggest, in part, a steering role for this group. However, without a comprehensive

implementation plan against which progress of the process can be measured, and, given

the lack of collective development in preparation for this remit, this steering function

appears to present something of a challenge, as one Non-executive Director

commented:

'We went to our first meeting (of the Clinical Governance Sub-


committee) and none of us really knew what it was (clinical
governance). I mean, in a sense, we are only just coming to the point
where we can come and say something meaningful to you (Trust Board)
- we are only just sorting it out ourselves'.

Although some divisions received input from a member of the Clinical Governance

Development Team early on, this has only recently been extended to all. In reality,

much of the co-ordinating function seems to have relied on the energy of the Clinical

Governance Lead. Although an Executive Director of the Trust, the Lead only has line

management responsibility for some members of the Clinical Governance Development

Team and the library staff. Consequently, to move the initiative forward, she has relied

on influencing skills rather than the line management process and, at times, this

approach seems to have been rather circuitous.

The Clinical Governance Lead recognises the need to integrate the relevant systems,

both new and existing, and is trying to move away from a reliance on making the

connections through individual attendance at numerous meetings towards a more

systematic process of information exchange. A particular area in which progress has

been made is risk management. The membership of the Risk Management Team now

consists of the right people operating at the right managerial level to ensure that the

information received is acted upon. A number of interviewees remarked that receiving

risk management data which is both consolidated and disaggregated by unit has

enabled members to recognise the trends and appreciate that other areas are

experiencing similar difficulties. Consequently, there is a greater willingness to work

together to find joint solutions and, indeed, an expectation that this should happen:

'We've noticed a few things at Risk Management - (x division) has the


same sort of things (problems/incidents) as (y division) so again, it has
been suggested that the two Divisional Managers get together to have a
look at the incidents to compare and learn from each other'.

However, this is not the case with all components of clinical governance and it is

recognised that more needs to be done in order to integrate the work of areas such as

the Complaints Committee into the mainstream clinical governance activity. There is

also a need to address the way in which the reporting of clinical audit activity informs

the clinical governance agenda as this does not appear to feed into the quarterly

reporting framework:

'Nobody takes a blind bit of notice (of the audit reports) - that's what
(the Clinical Governance Lead) is trying to do - integrate it with all the
other stuff that is going on and say we are not going to do a separate
audit report - we are going to have a report on clinical governance
which will include clinical audit activity from each of the divisions'.

Although many individuals are contributing to clinical governance activity, the key

co-ordination mechanism is in the person of the Clinical Governance Lead; unless and

until the Clinical Governance Development Team becomes fully operational, this is

likely to continue but, in the meantime, increasing operational involvement risks

displacement of work on the 'bigger picture'.

7.8.3 Feedback

When the Clinical Governance Sub-committee did the early 'visioning' work which led

to the 'Clinical Governance Report' and subsequently the 'Clinical Governance

Development Plan', some of the Sub-committee members were by no means certain

that what they had come up with was 'it'. This sort of uncertainty is not uncommon

where policy is new and potentially so far-reaching but dealing with it is a key

challenge for the management of change. One way of decreasing uncertainty is the

establishment of multiple feedback mechanisms to gain as much information as

possible from within the system.

One route to such feedback is through the formal reporting structure. A key,

nationally-set objective was the establishment of a mechanism for regular clinical

governance reporting to the Trust Board. The initial dataset constructed by the Trust

consisted largely of data readily available from existing groups. The dataset has

gradually been expanded and is now considered by a number of internal audiences

(Clinical Governance Sub-committee, clinical governance divisional fora) in addition to

being received quarterly in the public part of the Trust Board meeting. The report

consists of operational performance data and data from clinical governance-related groups,

and provides an update on progress against the Development Plan objectives; the report

does not, however, reflect the implementation process per se. Thus it is perhaps not too

surprising to find this Trust Board member apparently unaware of the fact that the

Clinical Governance Development Team was not fully operational:

'I understand that the Trust has a clinical governance team who are
responsible for proselytising to the rest of their team and they each have
reps which then link in to your committee (Clinical Governance Sub-
committee) ... ...but their job is also to take the message out to the
troops'.

The dataset described above remained the basis of routine clinical governance reporting

during the research period. Although an additional framework was subsequently

developed to specifically address the components of clinical governance, this was for

the purpose of the Trust annual clinical governance report. In addition to acting as a

vehicle for the reporting of information upwards, this framework also gave a clear steer

to Divisional Managers regarding the corporate conceptualisation of the components

for clinical governance and the areas on which they should be focusing local attention.

Whilst this did provide a useful end of year summary, it could also have served as an

explicit tool for monitoring clinical governance activity on a regular basis in-year.

As in the case of the work around user involvement, some initiatives have not always

included explicit arrangements for the provision of feedback on progress; yet, there is

now a growing tendency within the Trust to incorporate this into the system and a good

example is the Significant Clinical Incident Review process. All divisions are expected

to receive the final report, assess their area of responsibility against the learning points

and confirm to the Clinical Governance Lead whether local action will be required or is

not considered appropriate. A similar feedback mechanism has also been built into the

new system for the dissemination of guidelines. Nevertheless, as earlier comments

around incident reporting suggest, there is still a need to ensure feedback to staff on

specifics such as incidents and so on but also more generally on the progress of clinical

governance in the Trust.

Setting up feedback mechanisms is important but the ultimate test is how the

information is used once received. Early feedback to the Trust after a routine review by

the Regional Office was used to inform the Development Plan. Written and verbal

feedback was provided by the researcher to the Chief Executive and Management Team

after the rapid appraisal undertaken in the autumn of 2000. The recommendations of

this first report centred largely on the need to develop an implementation plan and the

need to clarify whether the Trust concept of clinical governance was based on an

assurance or improvement model or both. This recommendation was based on an

observation that systems were being implemented to address the assurance aspects of

clinical governance but the notion of continuous improvement did not feature

explicitly. An example of the latter was the lack of a mechanism at the local level to

facilitate the involvement of front line staff in CQI. Neither of these issues was

addressed directly by the Trust during the fieldwork period. The notion of models did

not appear to sit comfortably with the culture of the organisation. Whilst

acknowledging that the research report was much as the Trust expected, the minutes of

the Clinical Governance Sub-committee dated 19 January 2001 record the following

comments:

'(The Clinical Governance Lead and the Chief Executive) would argue
with respect to the 'quality models' and pointed out that they would be
difficult to convince about adopting quality models'.

This apparent conflict with the prevailing organisational culture is similar to the Trust

response to the recommendation that it develop a comprehensive implementation plan.

The stated reason for not developing this latter was that, with the Trust moving towards

the creation of a PCT, the focus would be on consolidating what had already been

achieved. However, the dissolution of the Trust was to take a further 15 months during

which time the organisation was without a comprehensive implementation plan to guide

the change process. The final research feedback was submitted in December 2001 and

the Clinical Governance Lead confirmed that a number of the recommendations from

the second research report would directly inform the plan for clinical governance within

the new PCT.

In addition to the two formal written feedback points, verbal feedback was an ongoing

feature of the action research process and was offered at corporate and divisional

meetings. At times, the researcher was able to provide observations from the front line

that challenged the corporate assumptions about the extent to which the clinical

governance agenda had cascaded down into the organisation. Verbal feedback was also

offered to interviewees during the fieldwork and will be discussed later in the section

on 'support'.

7.8.4 Communication

The association between change and uncertainty has already been commented upon.

The Emerald Trust is not only coping with the changes required by clinical governance

and the wider modernisation agenda but is also facing the prospect of large-scale

organisational change in the move towards PCT status. The role of feedback and

education in reducing the level of uncertainty has been discussed earlier in this section;

however, for greater impact, these elements should be incorporated into the wider

system of communication operating within the Trust generally. The need for extensive

communication in a period of change is appreciated by this interviewee:

'...... it's a pretty important issue because it's lack of communication or


poor communication that makes people make up their own minds on
what's happening. And all sorts of rumours start to fling around
then... ...with the changes taking place within the service, if you had
someone with a communications lead ...you would say - right come up
with views on how we are going to make staff appraised about what is
going on and get day to day questions answered - then someone would
(need to) come forward with a communication plan... ... no-one has been
identified with the role (internal communications), ...so it is not seen as
someone's responsibility'.

Whilst the Trust has undertaken clinical governance road shows, several articles have

appeared in the newsletter, and the Annual Clinical Governance Report has been

produced, these constitute discrete initiatives. In contrast, there is no evidence of an

on-going, multi-method communication campaign taking place to raise and maintain

the profile of the clinical governance agenda within the Trust. In fact, there is no

communication strategy to accompany the implementation process per se; indeed,

neither is there a strategy for internal communications and there is a lack of clarity

around where the responsibility for this particular activity lies. Thus, there was no

formal launch of the Development Plan prior to the road shows which reached less than

10% of the workforce; of these, as indicated earlier, the majority of attendees were

thought to be managers. Communication of the Development Plan appears to have

relied on the cascade of information from corporate through divisional levels and on to

those in the front line of service delivery. However, it is clear from research in the

Primary Care Division that this approach has not been particularly effective in practice.

Although elements of the clinical governance agenda appear in the minutes of

Management Team meetings, there does not appear to have been a regular discussion

around clinical governance per se or its implementation in this arena. In some

divisions, this situation has been replicated further down the hierarchy; for instance,

some of the junior managers in the Primary Care Division had not seen the Significant

Clinical Incident Reports or, in fact, the Development Plan until a meeting with the

Clinical Governance Lead early in 2001. In addition, the feedback from the rapid

appraisal in 2000 confirmed that the Development Plan was not a living document

within the divisions generally although the Divisional Managers all knew of its

existence. This senior manager expressed strong opinions on the communication issues

within the Trust:

'People don't cascade information. The Management Team probably do


but what happens after that...... Could be a power theory; could be that
people do not have structures in place like team briefing to allow
cascade. Another of this - oh it's something to do with staff therefore I
haven't got time to do it - more important (things) to do like delivering
services to clients - (they are) not making the connection between those
two things. No, we don't have any structured way of cascading
information'.

And this was another senior manager's experience:

'My clinical governance forum - nothing cascades down. I'm giving


them all this stuff but when I go out to the staff - they haven't heard

about the information... ...I don't know where the blockages are or why
it's not being disseminated'.

The value of effective communication systems in establishing a common language and

a common understanding around clinical governance did not appear to have been

appreciated, nor these outcomes actively sought. The manager cited below was echoing

the views of a number of senior colleagues with regard to the junior staff:

'(I) don't expect people to tell me what clinical governance is but I


expect them to tell me how they use the evidence, how we can plan care,
how we monitor (it)'.

Given the sentiment above, it was something of a surprise to read that the

issue of a manager briefing pack had been raised at Management Team two months

after the approval of the Development Plan; however, this was not produced within the

fieldwork period.

7.8.5 Support

The need to provide support for the divisions was recognised early on in the Trust's

developing approach to clinical governance implementation. The Clinical Governance

Development Team was established in principle in May 2000 but did not achieve its

full complement of members for almost 18 months. In the meantime, one or two team

members were able to form links with specific divisions; other than this, the main

source of internal support was the Clinical Governance Lead who provided advice to

the Divisional Managers on a one to one basis. Whilst some of the Trust Board

members thought the Team was up and running, it seems that not all managers were

clear about its remit some 18 months after the publication of the Development Plan:

'It's like the Clinical Governance Development Team - I've heard of it
but what are they actually doing, who are they and it's only when you
start to sit down and think I don't know who these are, what are they
actually doing that you actually get an answer; it is automatically
assumed that everybody would actually know about it'.

External support was obtained through several routes. A small number of natural work

teams (two for certain) have taken part in the national programme offered by the

Clinical Governance Support Unit. The catalyst to the decision to put forward these

teams was not clear - i.e. the team's own request/nominated by the Trust; however, this

initiative does not appear to have been part of a deliberate corporate approach and there

has been no move to send a large number of teams for this training.

One opportunity for obtaining peer support was through the formation of a county-wide

network of clinical governance leads which was co-ordinated by the health authority.

The Trust Clinical Governance Lead also became a member of the Clinical Governance

Sub-committees of the local Primary Care Groups (PCGs) and, latterly, a representative

from each of these groups has been invited to join the Trust Clinical Governance

Sub-committee. In contrast, the Lead does not appear to have established direct links with

other NHS Trusts providing the same/similar services as Emerald.

The action research process was also a source of external support for managers in

particular. In addition to providing real-time feedback, the contact offered an

opportunity for interviewees to explore some of the general issues around the

implementation of this agenda and obtain an outsider perspective on action they might

be considering.

Interviews generally followed a semi-structured format based on a clear inquiry

framework. Interviewees commented that the questions would often serve as prompts

to action; firstly causing them to think - 'why has she asked me this' and secondly to

consider whether they might need to take action on the issues being discussed. Many

of these questions related to process; issues such as awareness raising with staff,

whether clinical governance was a regular/standing agenda item at team meetings, how

incidents and complaints were considered collectively, and how clinical audit happened

locally (Appendix 8). Some found the interview process offered protected time and

within this an opportunity to talk through the agenda. This helped some managers gain

a greater clarity about the concept of clinical governance and their own role

with regard to this policy and its implementation.

As a result of the ongoing data collection, the researcher was also able to challenge

some of the assumptions being made; examples of this include corporate-level

perceptions around the level of actual knowledge amongst some of the managers and a

belief that having a local PCG with an active Clinical Governance Sub-committee

could act as a substitute for the establishment of a forum for clinical governance in the

Division.

Although the Clinical Governance Lead has advised divisions on practical aspects of

clinical governance, there appears to have been little provision of formalised support

from an organisation development perspective. This was touched upon in the

'education and involvement' section with reference to building teams and investing in

time-out for development. This type of deliberate intervention in team/group

development does not seem to be a Trust norm. In practice, there appears to have been

a distinct lack of attention to the development needs of new and existing groups in

relation to implementation of the Trust's clinical governance agenda. This does not

appear to be confined to the clinical governance agenda as suggested by the following

comments from these Non-executive Directors:

'I'm unhappy at the training that I had as a non-exec - or lack of


training'.

'When I first started here I was unhappy with the way I was inducted and
felt that I didn't really have much info to work on and it took me a long
time to work out for myself what my role was... ...even now I probably
haven't got it right but that's how I felt at the time'.

'That is very typical of this Trust that it is presumed that we are going to
pick it (clinical governance) up; fortunately we are pretty able I think'.

In terms of the Non-executive Directors as a group, they do not meet separately from

the whole Trust Board although some felt that would be beneficial:

'One of the things we never do as non-execs is come together to discuss


something alone without the other directors there... ...but even if it was
once a year I feel the need for us to meet'.

'We (the non-execs) don't have the equivalent of a group meeting'.

'Never even discussed having a (separate) meeting'.

Earlier in this chapter, there was a comment on the lack of questions at Trust Board on

the quarterly clinical governance report; however, discussion with several

Non-executive Directors revealed that papers for Trust Board did not generally arrive to give

them the five days' notice that was intended. At times, papers even arrived on the

morning of the Trust Board meeting; thus the Non-executives had little time to read,

digest or make some investigation around the information with which they had been

presented.

The lack of group development at the corporate level is reflected in the Primary Care

Division and Locality under study and this will be discussed in the next chapter;

however, the expectation that the collective will pick up what it needs to know along

the way also seems to relate to the individual. The Trust does not appear to have made

any explicit arrangements to address the variation that exists in the experience,

knowledge and know-how of individual managers and staff, not only in terms of

clinical governance but also in relation to the process of change management.

Individuals within the Trust appear to be at different starting points regarding the above

and there has been little to address this although a workshop is planned for January

2002 - two years after the release of the Development Plan. This lack of deliberate

organisation development (OD) intervention is surprising given the centrality of

learning in the Trust's conceptualisation of clinical governance.

7.9 CHAPTER SUMMARY

As in the previous chapter, it is clear that the Trust has made a positive start in terms of

the implementation process. Each of the activity clusters has been addressed in some

way although gaps are apparent which may, ultimately, have a negative impact on the

likelihood of overall success of the clinical governance initiative. As with the Trust

approach in terms of content, the significance of these process initiatives will be

discussed in Chapter 9.

This chapter concludes the description of corporately-led clinical governance

initiatives; the following chapter will now present an overview of clinical governance in

one of the Emerald Trust divisions and a Locality within this.

CHAPTER 8

RESULTS - CLINICAL GOVERNANCE IMPLEMENTATION - A


DIVISIONAL VIEW

8.1 INTRODUCTION

The previous chapter has described the work that has been taking place at the corporate

level to take forward clinical governance within the Trust. One of the key aims of the

Clinical Governance Lead was to ensure that the strategic objectives were translated

into the operational reality of the clinical divisions. Thus, one of the research

objectives has been to follow the translation process from the corporate level down

towards the front line of service delivery.

The Primary Care Division and the Northern Locality have been selected as targets for

further research and the rationale for this choice has been outlined in the earlier

methodology chapter. The aim of this chapter is to present firstly an overview of the

Division and Locality and then to describe the clinical governance initiatives that have

taken place within these areas. As far as possible the Miles framework (1997) will be

utilised to shape the presentation of these results.

8.2 PRIMARY CARE DIVISION - AN OVERVIEW

The Primary Care Division consists of three localities. Over a six month period, the

researcher followed the progress of clinical governance arrangements at divisional and

locality level - the latter focusing on the Northern Locality.

The Division provides Health Visiting, District Nursing and Child Health Services;

health care for the older adult is provided in a Community Hospital. The Divisional

management structure comprises the Divisional Manager; three Community Health

Care Managers - each responsible for a Locality, and a Child Health Nursing Manager.

Within each of the localities there is a Clinical Leader for Health Visiting and one for

District Nursing; in Child Health, there is a Clinical Leader for School Health, the

Paediatric Nursing Team, Hospital at Home and Child Development Centres.

Since early 2001, the Divisional Manager post has been an acting position and, in

addition to her responsibilities as Acting Divisional Manager (ADM), the post holder

has retained her role as Board member on the Northern PCG. The Community Health

Care Manager post for the Northern Locality is also an acting post (the previous

manager is acting up as Divisional Manager) which comprises 50% of the post holder's

time. In the other 50%, the manager retains her responsibilities as Clinical Leader for

Health Visiting in one of the other localities which is geographically distant from the

Northern. The Clinical Leader posts were introduced in year 2000 to strengthen line

management arrangements; apparently 80% of those in post are new to a formal

operational management role.

Prior to the establishment of the Divisional Clinical Governance Forum, the key

meeting for managers within the Division was 'Clinical Leaders'; this takes place on a

monthly basis and is attended by all Clinical Leaders and Community Health Care

Managers. It is chaired by a different Community Health Care Manager each month;

access to minutes was highly problematic as there is no central generation of or

repository for the record of these meetings; a situation attributed to the monthly change

of chair. This is meant to be an opportunity for top-down and bottom-up communication

but in practice there is apparently little contribution to the agenda by the junior

managers.

8.3 CLINICAL GOVERNANCE IMPLEMENTATION IN THE DIVISION -

CONTENT

During the rapid appraisal of clinical governance within the Trust undertaken in year

2000, it was apparent that clinical governance, as an integrated system for continuous

quality improvement (CQI), was yet to become established within the Division. This is

a rather euphemistic way of highlighting the difficulty in identifying deliberate

initiatives aimed at the implementation of the Trust 'Clinical Governance Development

Plan'; although it should be noted that individual initiatives to develop and improve

Divisional services were taking place. The researcher's next contact with the Division

was in April 2001, seven months after the rapid appraisal. In the intervening time, little

progress had been made with this agenda other than the establishment of a Clinical

Governance Forum - 13 months after the publication of the Trust Development Plan.

When questioned on progress, the response from one of the managers summed it up as

follows:

'This will be short and sweet because the brutal truth is - very little (has
happened)'.

The first meeting of the Forum took place in February 2001. At this initial meeting it

was decided that the chair of the group should be nominated on a rotational basis to

provide a development opportunity for Clinical Leaders; selection would take place by

drawing names out of a hat. This process duly took place and the first chair was appointed

for three months accordingly; however, this person was not present during the selection

process and, therefore, did not have the opportunity to highlight the fact that she only

worked part-time in the Clinical Leader role.

Membership of the Forum included the ADM, Community Health Care Managers and

Clinical Leaders; latterly the newly appointed Clinical Governance Facilitator also

joined. The overall aim of the group was the development of Divisional strategies for

clinical governance; its Terms of Reference were to reflect the requirements of the

Trust 'Clinical Governance Development Plan'. The Terms of Reference suggest a

variety of roles for the group: assurance, steering and 'doing'; however, the integration

of all three is likely to present a challenge to this newly formed group.

Meetings of the Forum were scheduled at monthly intervals and have taken place

regularly. In general, agendas seem to have consisted of items cascaded by the ADM

or those which the Chair considered to be of interest. Once the Chair of the Forum

became the Divisional representative on the Trust Clinical Governance Sub-committee,

there appeared to be a clearer frame of reference for the local clinical governance

agenda.

The minutes of the meetings demonstrate that issues around specific components of

clinical governance such as clinical audit, clinical supervision and incident reporting are

discussed. The group has also been reviewing the progress of an initiative to re-

introduce Clinical Rounds and Individual Performance Review (IPR). The need for

more work to raise staff awareness of clinical governance was recently recognised and

discussion has taken place around mechanisms for achieving this. However, although a

number of key actions were identified at the outset, these have not subsequently been

translated into an action plan and there was no sense of a systematic approach to the

implementation of clinical governance being considered. According to this member of

the Forum, there was a lack of direction in these early days:

'I've not got any direction. I haven't been given any direction. I don't
know what's expected. How can you drive anything forward if you
haven't got the time and you don't know what it is you are supposed to
be doing anyway'.

From direct observation of two meetings, 'discussion' was the predominant albeit not

exclusive activity with certain issues referred to the Clinical Leaders Group (despite the

similarity in membership) for further discussion. Although issues were being

highlighted, it was not always clear how, and by whom, these would be taken forward and there

appeared to be a lack of clarity around the authority vested in the roles of the Chair and

in the group as a whole. The arrival of the Clinical Governance Facilitator appears to

have coincided with a change in the group approach - action points and the responsible

individual(s) being clearly identified in the minutes and separate groups being set up

with a remit to look specifically at the issue in question.

Latterly, the group has started to receive a copy of the same consolidated dataset as that

submitted to the Trust Clinical Governance Sub-committee and also copies of the

Significant Clinical Incident Review report summaries; both of which could provide an

important focus for future discussion and action.

8.4 CLINICAL GOVERNANCE IMPLEMENTATION IN THE LOCALITY -

CONTENT

The Northern Locality is a rural area with a population of approximately 60,000. There

are seven GP practices which are based in the main towns and the larger villages.

Health Visitors and District Nurse teams are attached to each of the seven practices;

services are provided within the Locality and, where necessary, across county

boundaries. The staffing profile as of September 2000 reported around 75 whole time

equivalent District Nursing and Health Visiting staff employed within the Locality on

Nursing Scales ranging from A to I grades. A Community Health Care Manager is

responsible for each Locality and District Nursing and Health Visiting staff are line

managed by a Clinical Leader.

Some evidence of clinical governance activity such as incident reporting, appraisal and

so on can be identified in the Locality; however, as highlighted earlier in this chapter,

clinical governance as an integrated system for quality improvement is yet to be

established within the Division and this situation is also reflected in the Locality. At

this point, it is important to recognise that, despite the fact that clinical governance is

apparently in the very early stages of development, improvement work does take place

locally. These initiatives may be enhancements to an existing service, may represent a

new service for clients or may improve clinical practice in some way; examples include

the Integrated Nursing Team pilot, the development of a wound care formulary, joint

working around smoking cessation and the introduction of client-specific support

groups such as 'Cradle Clubs' for first time mothers and Breast Feeding Support

Groups.

8.4.1 Structures for Quality Improvement

Up until mid 1999, active standard setting groups existed for both Health Visiting and

District Nursing but these seem to have disbanded over time apparently as group

members moved on. This gradual decline also seems to have coincided with the change

in corporate arrangements for the facilitation of clinical effectiveness and audit; that is

the movement away from a central clinical audit function:

'Basically the standards group just fell apart; the person who was
leading it left and it's never really got back together again because of
pressure of time'.

In the absence of the Standard Setting Groups, the main forum for discussing

improvement opportunities appears to be the staff meetings. These take place on a

regular basis, usually monthly, and tend to be uni-disciplinary although speakers from

different specialties/professional groups/services are invited to attend and present on

issues of interest. Minutes of these meetings suggest a very varied agenda which is

largely around information exchange; this may relate to operational issues, details of

training, national policy - the NSF for Older People, 'Improving Working Lives' and

the NHS Plan all appear to have been discussed. In some areas, feedback is provided

from staff development such as courses, conferences etc, and there is evidence of

practical outcomes arising from efforts to bring the learning back into the organisation

such as the development of a leaflet on eczema. Feedback is also provided from joint

working groups on topics such as wound care.

Whilst 'clinical governance' does not appear as a regular, explicit item on the agenda of

these meetings, components of this agenda appear to have been discussed such as

clinical supervision, clinical audits for which the Locality has provided data,

information around the introduction of staff appraisal and the re-introduction of Clinical

Rounds. However, there is a view that a more explicit focus on clinical governance per

se is required:

'When we meet for unit meetings, I think as many people as possible


should be informed about what clinical governance should mean in
practice because I don't think people realise it enough. We have never
had a unit meeting of all this area and said clinical governance is here
and this is what it means'.

Although one area has recently started to disseminate the summary reports of

Significant Clinical Incident Reviews, there does not appear to be a mechanism, either

as part of the staff meeting or through an alternative structure, to support regular,

collective review and discussion of clinical/non-clinical incidents or complaints. This

is despite the fact that routinely-collected data on these issues is available, albeit

centrally collated:

'Feedback is really important but when you don't get any it's
demoralising; we've done this, we've sent in this form and you get
nothing back from it that's perhaps going to improve something - (that)
this, this and this has been done - you get a bit cynical - you think what's
changed?'

'Things are pushed from one side - you've got to improve quality, you've
got to do this, you've got to do that; when you do all that, nothing much
comes back from it'.

8.4.2 Clinical Supervision

There are groups within Health Visiting and District Nursing which meet regularly for

clinical supervision. The groups were established at different times and therefore

are at different stages of development. Some groups are perceived as working well

whilst others are apparently experiencing some difficulties. Some of this difficulty

appears to relate to the opportunity to meet as a group given operational demands

which can be unpredictable particularly during periods of staff sickness:

'It all comes down to time; the idea is very good but it's time. If you are
busy and you've allocated an hour to do it, what goes is that because
your visits or (something else will get in the way)'.

'It's time...... when you have got a day when you put it down and think
right we'll do that, you get staff taken off you to go to help somewhere
else'.

'The minute you get reasonably well staffed and you think - we can do
this - they take them (staff) to another group'.

The Clinical Leader for Health Visiting has recently proposed a review of the current

arrangements for clinical supervision.

8.4.3 Appraisal and Personal Development

Appraisal and the formulation of Personal Development Plans are starting to take place

and in 2001 there has apparently been a particular corporate emphasis on getting this

process established. Clinical Leaders are appraising their senior staff and, as they in

turn receive training, the senior staff will appraise their teams or, in the case of Health

Visiting, the Health Visiting Co-ordinator and the Clinical Leader will share this

responsibility. Apparently there has been an increase in demand for training and the

delay in getting places on courses is thought to be holding up the cascade of appraisal.

8.4.4 Clinical Audit

The Standard Setting Groups were perceived as the main locus for clinical audit work

and, although staff may be involved in data collection for audit projects led by other

disciplines, in the absence of the previous structures, little in the way of clinical audit

has been initiated recently by the Locality itself:

'It (clinical audit) doesn't happen very often. Since I have been in post I
haven't done any audits. There is a real issue about getting meaningful
data... ...we do need to look at audit but sadly we lack the training and
skills'.

8.4.5 User Involvement

There was a perception amongst staff that user involvement tended to take the form of

client/patient satisfaction surveys; these would more likely be undertaken with clearly

defined groups which meet over a period of time. Involving individual clients/carers is

more likely to be seen as part of the care planning process.

8.4.6 Clinical Governance - Knowledge and Skills

As highlighted earlier in this report, the Trust led a number of road shows in 1999 to

raise awareness of clinical governance within the organisation. Most staff who took

part in this research project, either in interviews or focus groups, were aware of the

early initiatives but few had actually attended. Levels of awareness around the clinical

governance agenda vary as the earlier quotes have demonstrated. Staff interviewed

generally reported that their knowledge had been gained via external sources; for

example: post graduate study, attendance at conferences or through professional

publications or bodies. The level of knowledge around specific elements of clinical

governance such as clinical audit and risk management was also variable and whilst

front line staff might have taken part in audit or completed incident forms, there was

less understanding of how these practices formed part of clinical effectiveness/risk

management as quality improvement systems. Although a number of staff had attended

the risk management workshop held in July 2001, few reported they had received any

specific training around clinical audit or risk management.

In terms of acquiring the detail around clinical governance, there is a sense that

delivering the service to clients is taking up all the energy of some and clinical

governance as a high level policy has not been effectively translated for those who are

at the front line. The following comments from clinicians illustrate a number of these

points:

'It's workload...... Most of us are working to capacity in an


environment that is constantly changing. Clinical governance is one of
many names that is just hanging there and there is this mad panic - good
God where's this one come from. It hasn't translated into action and
it's one of many. We are fully occupied, we haven't got free time to
make connections... ... too busy doing the job'.

'It would be very nice if they (management) sent out or gave us the
information as to what parts of the (clinical governance) umbrella have
been addressed or completed'.

'Staff are bogged down with delivering the service... just no time to think
or to read... just hope that someone higher up will send directives'.

'I feel lost in the vision of the NHS, never mind clinical governance;
somebody must have sat around the table in Whitehall and said right
this is the vision for the new NHS and this is how it is going to be. As
the visionary has stayed at the top and they haven't found other
visionaries to carry the baton on down here, I don't know my place in
the vision; I don't even see a vision; I have nobody re-affirming that
vision to me as a practitioner any of the time ...... whereas when I was a
newly trained nurse... you knew what she wanted (the matron) because
she came down to the wards and told you... ... but it has become unseen,
unseen down here - they have not got visionaries on the ground. There's
nobody sitting around the table (at Trust HQ) who has caught the vision
and can inspire people to carry it'.

'It just seems to be if you appear to be getting on with the job then
nobody has any involvement with us; as long as the boat is not
rocked... you just get on with it to what you consider to be the best of
your ability; giving the best service time allows to your clients; if
nothing happens to tip the scales, that's just how it rolls on. No-one
comes and tells you what's happening, what's changing, what's going to

be expected of you in the future - just rolls on. If the wheel doesn't
come off- then fine'.

The clinical governance agenda is also at risk of being displaced by the forthcoming

organisational changes:

'So much change isn't there. We know come October they are going to
PCT status, so everything else almost goes on the back burner'.

Or as one manager commented:

'There is a huge change and transitional agenda around at the moment


in the NHS and staff at the moment may or may not have clinical
governance and some of the components within it high on their personal
agendas or on their professional agendas. But it is trying to ensure that
they do understand that some of the most important things they do are
part of the clinical governance agenda. But at the moment there will be
a very strong focus on the dissolution of the Trust which affects
everyone's terms and conditions of employment; transition to PCT
status, am I going to have a job at the end of this, what will it mean for
me. I am aware that staff at the moment have a degree of vulnerability
and uncertainty that may or may not affect the way they are prioritising
other elements of the role they have to do'.

Others identify a clear priority for staff when there are competing demands on time:

'Clinical governance, clinical supervision, sharing good practice,


research, audit, they will all have to take a side step if a patient has to
be seen and that is difficult to marry up... ...the bottom line is, the
patients will always come first'.

The manager's assessment of clinical governance activity to date that was cited earlier

in this chapter ('Very little') was supported by research in the field. There appears to

have been little in the way of deliberate attention to the implementation of this agenda

until the senior management arrangements changed. The following section will

highlight the process elements of implementation within both the Division and the

Locality.

8.5 CLINICAL GOVERNANCE: THE IMPLEMENTATION PROCESS IN

THE DIVISION AND LOCALITY

8.5.1 Clinical Governance Implementation - A Late Start

Although there is evidence to suggest that service improvement does take place within

the Division, little in the way of deliberate activity to implement clinical governance

was apparent at the start of the current research. This situation had been allowed to

continue for 12 months after the publication of the Development Plan but, by the time

the ADM came into post, it was clear that this agenda needed urgent attention:

'...... this has been outstanding for some time, will someone please get a
handle on it for primary care.'

This urgency would have been difficult to ignore especially as it was emphasised in a

meeting of senior managers, facilitated by the Clinical Governance Lead, to agree a

local way forward. Shortly after this meeting, the Clinical Governance Forum was

convened, the Terms of Reference of the group were made explicit and a number of

action points identified.

8.5.2 An Action Plan for the Division

Around the same time as this initial work was taking place in the Forum, the ADM was

required, for the purpose of the Trust Annual Clinical Governance Report, to provide an

update on local progress around the clinical governance agenda using the framework

promulgated by the Clinical Governance Lead. The resulting document gave examples

of certain clinical governance related activity within the Division and highlighted a

number of gaps. At an 'Away Day' in September 2001, the ADM, Community Health

Care Managers and Clinical Leaders took 'time out' to map the key areas for action across

the range of drivers which impact on the Division. The aim of this activity was to

develop an action plan to address the issues identified above over the period 2001-

2002.

Within the resulting action plan, a number of areas for attention have been identified

under the banner of clinical governance, such as appraisal, personal development plans,

risk management and complaints. However, certain key elements such as clinical audit and

National Service Frameworks (NSF), although highlighted for action, appear to be

separate from clinical governance activity. Thus, amongst the Divisional objectives,

clinical governance appears as a discrete element and, in this way, clinical governance

seems to have been conceptualised as a separate entity to that of the modernisation

agenda. Consequently, clinical governance is something which requires prioritisation

along with everything else as this quote from a senior manager suggests:

'In order to do it (clinical governance) properly, you would need to have


one clear day a week dedicated purely and simply to driving forward the
clinical governance agenda - and that is not realistic in today's health
agenda because it's just not clinical governance - you have drivers from
the modernisation agenda, from the national plan, from recruitment and
development, from improving working lives etc etc etc. - not to mention
the new National Service Frameworks; all of which are coming out with
clearly defined governmental time targets on them and so however high
a priority clinical governance may be both as an organisation and as a
personal view, the reality is that it will have to be prioritised along with
all the other drivers and if I was a full-time clinical governance project
manager then and only then would you see massive changes within each
area, each Division, each discipline'.

This fragmentation is surprising given the divisional framework for the Trust Annual

Report is inclusive and integrates a range of components, including the elements

identified above, under the clinical governance umbrella.

Despite the Divisional activity described above which has seemingly culminated in

three 'action' documents, there was no evidence during the six months of fieldwork that

these had been brought together either to form an integrated clinical governance

strategy or an implementation plan that would facilitate operationalisation within the

Division. Also, although these different action sets seem to have been generated

through discussion with managers, there has been no comprehensive assessment of the

current coverage or effectiveness of existing systems for quality improvement such as

clinical audit and risk management or, in fact, an evaluation of the clinical governance

implementation process to date. Therefore, although the Division has action points to

guide it towards a destination, these do not appear to have been formulated with a clear

sense of the starting point. This approach may do little to either challenge the status

quo or provide a solid foundation upon which to build new systems.

8.5.3 Organisation Development - A Missing Component

The lack of a deliberate OD programme at the corporate level is also reflected within

the Division. At the collective level, the impression of the early Forum is of a group

struggling to deliver an agenda but without any initial investment in its own

development needs. Consequently, a number of gaps are apparent which are likely to

have a fundamental impact on the group's ultimate effectiveness. There is a lack of

clarity around the authority, role and responsibility of the group as both a collective and

as individual members. There was also a lack of clarity around what is meant to be in

place in clinical governance terms and how this should/could be achieved.

The minutes of these early meetings reflect the newness of the group. Although issues

were being discussed, closure which led to action points was less evident. Items were

sometimes being passed to the Clinical Leaders' meeting for further discussion; same

people but in a different forum. In this way, issues are essentially being passed 'over

the wall' to another group rather than being taken forward in the present arena. From

August 2001, there is a clear change in the style and tone of the minutes which

coincides with the arrival of the Clinical Governance Facilitator; action points are

clearly identified and responsibility assigned to a named individual; separate working

groups are formed to address specific issues which need further action instead of these

being passed on for further discussion elsewhere.

Although the Forum holds the specific clinical governance remit for the Division, there

does not appear to have been a systematic process of translating the corporate clinical

governance objectives into a comprehensive, local strategy. Although there is now a

Divisional Development Plan which includes clinical governance objectives, the group

does not have an integrated work plan that will enable it to fulfil its Terms of Reference

in either the short or long term. As a group it had not defined its information needs or

made explicit how it would relate to the Division as a whole or to the wider

Trust/corporate clinical governance function. Neither has time been devoted to

developing a clear sense of how the group will function in terms of internal dynamics

or to making a considered assessment of the development/training needs either of the

collective or of individual members.

Organisation development is not just about ensuring collective effectiveness but also

that of individuals; this is important for all employees but particularly so where junior

staff are required to assume new responsibilities. The first Chair of the Forum was an

H grade, part-time Clinical Leader who was not even present at the meeting when her

name was drawn out of the hat. In adopting this approach, selection for this important

role was based on a random act rather than careful consideration of the knowledge,

skills and experience that would be required to discharge the responsibilities of the role,

the time that would be needed and the administrative support required.

The initial plan to rotate the role of Chair at three monthly intervals was soon revised; it

was decided that the present incumbent would remain but with the active support of the

Division's newly appointed Clinical Governance Facilitator. Prior to this, there had

been no provision for in-division support; the Chair did not meet on a one to one basis

with the ADM and the routine meetings with her own line manager appeared to deal

with the general business of the Locality rather than clinical governance per se.

Although the development needs of the Forum and its individual members are an

important matter that needs to be addressed, this gap is consistent with the approach to

a wider but connected issue of management development in general. Many of the

Clinical Leaders were new to line management and yet there was no systematic

assessment of their development needs which meant that, in some cases, junior staff

were going on leadership courses before their managers. This was, for some,

compounded by a sense of urgency to deliver which seemed to be reflected down the

line. One manager reported that, on appointment, she was told she would need to 'hit

the ground running'. Another manager felt that her own inexperience was holding

back progress in terms of delivering the service.

Within the Locality, the lack of deliberate action to implement clinical governance from

the top of the Division is reflected closer to the front line services. Although the

researcher did not come across anyone who had not heard the words 'clinical

governance', it was difficult to determine how the corporate agenda was being

operationalised at the professional-client interfaces. Given that there are no vehicles to

deal explicitly with the translation of clinical governance at the front line, there is a

risk that any impact clinical governance may be having at the professional-client level

is likely to be the result of individual rather than corporate or divisional initiatives.

8.6 CHAPTER SUMMARY

The purpose of this chapter has been to present a picture of clinical governance at the

Divisional and local levels. It would appear that deliberate activity in relation to the

implementation of this initiative has been a relatively recent occurrence despite the fact

that corporately defined objectives for the Division have been in existence for some

time. There appears to be considerable scope for further work around this agenda in the

areas highlighted here and this has been recognised by key managers at the corporate

level and also in the Division and Locality. It has proved rather difficult to use the

Miles framework (1997) due to the lack of related activity in terms of both the 'what'

and the 'how'. Instead, this chapter has tried to focus on the work around clinical

governance that has actually taken place whilst, at the same time, identify some of the

gaps. A full discussion of the results presented in all three chapters will now take

place.

CHAPTER 9

DISCUSSION OF RESULTS

'The past is never dead. It's not even past'


(Faulkner, 1951)

9.1 INTRODUCTION

The aim of this case study was to describe in detail the experience of one NHS Trust

as it implemented clinical governance. At the outset of fieldwork, clinical

governance was still a relatively new concept; its emergence as a policy has been

tracked in some detail in Chapter 1 of this thesis. Given this newness, the research

design sought to capture what the Trust was doing to implement the concept rather

than test this against some sort of blueprint. As a qualitative study, rich description is

an essential component in the presentation of the findings and Chapters 6-8 have

incorporated, as far as possible, direct quotations which bring the voice of the

interviewee to the reader. Also, rather than merely relaying a sequence of actions,

some degree of interpretation has been incorporated within the results to improve

readability, an approach recommended by some authors on this subject (Patton,

2002).

In the absence of a blueprint for clinical governance, Trusts have been allowed to

interpret this locally which is in sharp contrast to the NHS Plan (Department of

Health, 2000) with its centrally-specified action sets and corresponding milestones.

The Department of Health's approach to clinical governance seems more like the one

adopted in an earlier experiment with Total Quality Management which took place at

the end of the 1980s - early 1990s. Although on that occasion a technical note was

issued, during the course of the fieldwork it was apparent that the organisations had

either not accessed this or chosen to disregard it because there was little evidence that

it was being operationalised (Joss and Kogan, 1995).

Given the lack of clarity around the clinical governance concept, the Emerald Trust

should be applauded for its courage in agreeing to take part in this research. The

author was allowed unrestricted access and the Trust has not sought to edit or censor

the write up. Consequently, the trial and error of clinical governance predicted by

Lugon and Secker-Walker (1999) has been played out under the stark spotlight of an

intensive and long-standing research process.

In the case of the previous NHS TQM experiment referred to above, just as with

clinical governance, the Department of Health encouraged an eclectic approach to

development in the hope that it would provide a rich source of initiatives for

evaluation. This posed a particular challenge for those evaluating TQM as there was

no definitive model depicting what needed to be implemented or how this should be

achieved. Then, the researchers based their evaluation on a traditional model of TQM

and also on the Trusts' own objectives. Faced with a similar situation, a similar

approach has been adopted by this researcher. Although clinical governance is not

explicitly depicted as Total Quality Management, the language and practice, where

specified in the policy documentation, is redolent with that of TQM and implies total

quality management or a whole system approach to quality if not explicitly named as

such.

Owing to the lack of theoretical underpinning to the emerging clinical governance

agenda, and despite the lack of a universally accepted theory of TQM, the TQM

literature has proved a useful guide to what clinical governance might look like in

terms of content or the 'what'. In light of the centrality of change to the notion of

improvement, a variety of change management models have also been utilised to

inform the 'how'; in particular Miles (1997), but also Kotter (1996), and Pendlebury

and colleagues (Pendlebury, Grouard and Meston, 1998).

Given the rich description and the initial interpretation which have been included in

the earlier results chapters, it has proved somewhat of a challenge to address this

chapter and draw out significant issues without repeating large sections of that which

has gone before. In addition, it has been important to remember that, due to the

emergent understanding of the clinical governance concept, this study set out to be of

a formative rather than summative nature and, in line with this, the researcher has

played a helping role rather than that of arbiter of success or failure. Besides, the

payback to the Trust for taking part was researcher input to the implementation

process; the deal did not include judgement per se.

9.2 FACTORS WHICH PREDICT SIGNIFICANT MOVEMENT

TOWARDS TOTAL QUALITY

In shaping this chapter, the author was strongly influenced by a sense of deja vu given

that clinical governance was not the first attempt to introduce a whole system

approach to quality improvement into the NHS. Thus, in an effort to avoid

succumbing to the 'collective amnesia' described by Klein (1998), it seemed highly

appropriate to draw on the lessons learned from this past experience. As a result of

this earlier experiment, Joss and Kogan (1995) offered 11 factors which they

considered to be predictive of significant TQM movement (Table 9.1). These factors

have been adapted and will be used as an umbrella framework to guide a discussion

of some of the significant findings of this most recent case study concerning the

implementation of clinical governance.

Table 9.1: Factors which predict significant movement towards total quality

1. Demonstrated senior management commitment and understanding;
2. A well-developed and well-documented implementation strategy;
3. Strong/persevering co-ordinator - board level appointment;
4. Structure overseeing implementation;
5. Comprehensive baseline assessment of service quality;
6. Early efforts to involve clinicians;
7. Sufficient funding for facilitators;
8. Standard setting only as part of strongly monitored CQI;
9. Comprehensive training;
10. Explicit strategy/resources for recognising and rewarding progress;
11. Organisational changes after evaluation.

Adapted from Joss and Kogan (1995; p151)

9.2.1 Demonstrated Senior Management Commitment and Understanding

Joss and Kogan (1995, p152) were of the view that demonstrated senior management

commitment was 'of central importance' to the successful implementation of TQM.

This is echoed by others writing in relation to TQM implementation specifically

(Saraph, Benson and Schroeder, 1989; Porter and Parker, 1993; Ahire, Golbar and

Waller, 1996) and change in general (Kotter, 1996; Miles, 1997; Pendlebury, Grouard

and Meston, 1998). Brown (1993) highlights the other side of this coin and

comments that hardly any TQM failure is reported without the executive being

blamed in some way. He goes on to comment that although most are committed in

their hearts, in practice, their employees will judge them by their behaviour and

actions.

It is for the reasons stated above that Joss and Kogan (1995) emphasise 'demonstrated'

commitment particularly as they found that management commitment to TQM at the

pilot sites was rather less than impressive. The authors (ibid) speculated that the

Department of Health's lack of any mandate to specify the leadership arrangements

required for this experiment contributed to the situation regarding management

commitment. In this matter, the Department's approach with the TQM pilot sites was

in stark contrast to its current approach to clinical governance. Clinical governance is

a statutory duty and the Department of Health has been explicit in the leadership and

accountability arrangements surrounding this at the corporate level (1998). The

consultation document (ibid) states that the Trust Board is accountable for the quality

of services and that the chief executive is the accountable officer; the lead for clinical

governance is to be a senior clinician. In this way there is no doubt where the

ultimate accountability for clinical governance lies and, in addition, by stressing that

clinical governance is the responsibility of all, the explicitness of ultimate

accountability does not displace individual accountability within the clinical

governance agenda.

The Emerald Trust clearly complied with the Department of Health requirements in

terms of making the accountability arrangements explicit at the corporate level;

however, it still faced the same challenge as all organisations currently

operationalising this agenda; namely how to demonstrate its commitment to clinical

governance. Irrespective of individual commitment, it has been stated in a previous

chapter that the minutes of the Trust Board meetings do not give a sense of any

detailed debate around clinical governance generally or interrogation of the clinical

governance reports in particular. Also, members not on the Clinical Governance

Sub-committee seemed to take a pragmatic view that clinical governance was in the safe

hands of their colleagues who served on this particular committee. Whilst this may

indeed be so, it is surely the duty of the Trust Board to model the spirit of the

consultation document in what has almost become a mantra - 'quality is everybody's

business' - and consider explicit and tangible mechanisms by which the Trust Board

as both a collective and as individual members may demonstrate its commitment.

Otherwise, it risks sending a signal to the rest of the organisation that quality is only

the business of some and can therefore be delegated.

The way in which clinical governance is operationalised locally will provide staff at

the front line of service delivery with a view of how seriously the organisation is

taking clinical governance generally and implementation in particular. At the time of

the initial rapid appraisal in November 2000, five out of six divisions had established

a local clinical governance forum; however, only two out of six had developed an

action plan to take forward the Trust 'Clinical Governance Development Plan'.

These early interviews with managers also suggested that little in the way of

awareness raising (other than the Trust initiative) or specific clinical governance

training had taken place in the divisions.

In effect, the Development Plan was not a living document within the organisation at

that time. In the division which had not established a forum, it was acknowledged by

senior managers that there had been little if any formal clinical governance activity in

between the development of the Trust Plan and the formation of the local forum, a

period of around 15 months. This supports the earlier findings of Joss and Kogan

(1995) in that, where there was no local structure in support of TQM, little local

progress with implementation was observed. In such circumstances, any perceived

rhetoric-reality gap could erode confidence in the commitment of senior managers

and lead swiftly to cynicism amongst front line staff; this and a sense of 'business as

usual' was certainly evident from the focus groups undertaken as part of this latest

research.

Joss and Kogan (1995) also argue that, before commitment can be demonstrated,

there needs to be awareness and understanding. In many of the accounts of TQM

implementation, this notion is clearly appreciated (Glover, 1993; Rand, 1994;

Ghobadian and Gallear, 1994) and the most common approach to address this tends to

be through workshops. The preferred sequence for education and training (Porter and

Parker, 1993; Dale and Cooper, 1994) is to start with the senior executives of the

organisation and then cascade this down towards the front line or what Mintzberg

(1979) terms the operating core. The purpose of these workshops is generally to

ensure that everyone has the same basic understanding of the initiative and also to

provide an initial outline of the implications of clinical governance for participants.

There is some support in the literature for training to be carried out by managers from

one reporting level to the next and so on throughout the hierarchy in order to

demonstrate commitment and translate understanding into action (Kanji and Barker,

1990). This is particularly important for those at the corporate level because

executive behaviour sets the tone for the rest of the organisation and is even more

desirable where there is a lack of clarity around the initiative which has been the case

in both TQM and more recently clinical governance.

A workshop was scheduled for the Trust Board in the early days of implementation

but this was subsequently cancelled. Unfortunately, it was to be a full two years after

the publication of the Development Plan before this would be re-scheduled.

Such workshops as described above would have provided the Trust with a valuable

opportunity to test the existing understanding of the Trust Board and provide the

basic overview needed to bring all members to a similar level in terms of knowledge

and understanding. This shared knowledge base should ideally have given the Board

a clearer idea of what the new statutory duty of quality meant for them both as a

collective and as individuals and, in doing so, provided an insight into the very

significant and transformational nature of the change process that lay ahead. This in

turn could have laid the foundations for the work to come namely the development of

the strategic direction. It might have also allowed the Trust Board members to play a

more active role in the shaping of the agenda, the commitment of resources in support

of implementation, the identification of the information required to enable the

effective execution of the statutory duty for clinical governance and ultimately

ensured that the corporate body was in a position to challenge, if necessary, the nature

and rate of progress of this important initiative.

In reality, the role the Trust Board plays in relation to clinical governance is likely to

reflect its role in terms of the Trust business generally and, in the case of the latter,

some Trust Boards appear to operate as little more than 'rubber stamps' for the

executive team (Carver, 1990). If Boards are to operate effectively, they need the

knowledge and skills to do this and this requires deliberate attention to their

development needs - induction in the first instance and ongoing development

subsequently. The Non-executive Directors interviewed in this Trust were not

satisfied by the arrangements for their induction although they had subsequently been

on a number of training courses relating to their specific responsibilities. Also, as

demonstrated above, as a corporate entity there had been no collective attention to the

Trust Board's development needs in relation to clinical governance; this may have

contributed to the fact that, as a collective, it did not challenge the implementation

drift that was apparent during the research project.

An early assessment of the clinical governance knowledge base of the Management

Team would have been eminently appropriate given the relative newness of this

particular agenda and the complexity of implementing total quality per se (Kanji and

Barker, 1990). It would appear that to assume a level of knowledge around quality

management is risky; according to some (Dale and Cooper, 1994; Taylor, 1996; Yong

and Wilkinson, 1999), many senior managers have merely a superficial understanding

of TQM and that is often confined to the 'buzzwords'. According to Yong and

Wilkinson (ibid), making such assumptions without a proper assessment of

development needs is unwise as they regard many of the 'roadblocks' to the successful

implementation of total quality as having emanated from senior/middle managers -

often because they feel threatened in some way by the initiative (Walsh, 1995).

The lack of a shadow quality structure to bridge the gap between the corporate level

Clinical Governance Sub-committee and the divisional clinical governance fora

inevitably meant that the Management Team needed to play a central role in the

implementation of clinical governance for several reasons. Firstly, as Divisional

Managers, they were essentially responsible for turning the corporate objectives into

local objectives and subsequently ensuring the operationalisation of these within the

divisions. Secondly, most of the Divisional Managers had a relatively high level of

visibility within their area of responsibility and thus their response to the clinical

governance agenda would ultimately set the tone for the staff. Given the value of

behaviour over words in demonstrating commitment to the clinical governance

agenda, it was, therefore, important that they model the appropriate response through

local action (Brown, 1993; Katz, 1993; Shea and Howell, 1998). However, the

degree to which Divisional Managers took forward the clinical governance agenda

was extremely variable and, as indicated earlier, lack of deliberate action in one

particular division seems to have signalled a 'business as usual' approach which

cascaded down through the hierarchy to the front line. In reality, whilst some of the

senior managers did indeed have a good understanding of the agenda, others were less

fortunate but, despite this, the Management Team was expected to 'get on and do it'.

The experience of this Trust appears to add weight to Joss and Kogan's earlier

assertion (1995) that an important precursor to commitment is an understanding of the

principles of quality management and what is expected of the individuals involved.

In addition, in order to avoid the cynicism which may arise from any perceived

rhetoric-reality gap, the words need to be reflected in action. One vehicle for making

explicit both the words and the action required for implementation is the strategy and

this will now be considered.

9.2.2 A Well-Developed and Well-Documented Implementation Strategy

Corporate strategy is the main vehicle for making explicit the vision of the

organisation, the objectives that will deliver this vision, the people and other

resources required in its delivery and the time-scales by which the objectives are to be

achieved. The way in which clinical governance is conceptualised within this vision

will determine the scope and scale of the change involved in its implementation; for

many organisations, this will/should represent a transformational, frame-breaking

change rather than more of the same. Strategy in transformational change is vision-

led (Miles, 1997). This is largely because the future state is either not clear or,

perhaps because of the nature of the change, it is unknowable. In such circumstances,

it is easier to understand why the implementation of total quality is often referred to

as a journey rather than a destination (Dale, 1994). In this uncertain environment, the

vision acts as a compass outlining the general direction whilst the strategy becomes a

map of the route to be taken.

The Trust's vision does not suggest that clinical governance has been conceptualised

as TQM; albeit TQM by another name. In fact, although the term 'quality' is

mentioned frequently, there is no explicit statement within the two key Trust

documents of what constitutes 'quality'. As the literature review in Chapter 3 has

demonstrated, the notion of quality is often contested; thus an insight into the Trust's

conceptualisation of this core element of clinical governance might have served as a

useful starting point. The recurring theme within the Trust approach is 'learning' -

learning from problems that have arisen and learning from new knowledge that is

generated from within or outside of the organisation. This is subsequently reflected

in many of the 'what' elements of the Trust clinical governance implementation

outlined in Chapter 6; examples include the introduction of the Significant Clinical

Incident Review process and the development of the library and knowledge

resources. The aim of the Trust is to generate and capture the learning with the

intention that this will deliver improvement. However, the national experience of

existing quality improvement systems such as clinical audit and risk management is

that this knowledge does not always lead to a closure of the loop and the

implementation of the change and improvement required (Berger, 1998; Walshe and

Dineen, 1998). The Trust has recognised the potential for the lack of action

described above and built into many of the new systems a review process to

determine whether change is actually taking place.

The Significant Clinical Incident Review process is a particularly important example

of the above and it also highlights an attempt by the organisation to signal a change

in 'how we do things here'. The Review process aims to get to the root causes of

problems and take remedial action which addresses the whole system. The Trust

regards this as a very positive advance as this did not tend to happen before, at least

not in such a systematic way; however, the Significant Clinical Incident Review

process is yet to be incorporated within the wider risk management system.

Valuable as such initiatives are in themselves, they are yet to be encompassed in an

integrated framework for quality improvement; also a common problem for both the

NHS TQM pilot sites (Joss, Kogan and Henkel, 1994; Joss and Kogan,1995) and the

Norwegian sites (0vretveit, 1999; 0vretveit and Aslaksen, 1999).

Whilst 'learning' is important, when trying to implement improvement it is essential

that the learning process is regarded as a means to an end rather than the end in

itself. For this reason, perhaps it would be more appropriate to focus on the goal of

CQI and develop a vision which describes what this might look like in practical

terms. In this, the guidance from the Department of Health (1999, pll) is

informative, at least in relation to front line staff:

'It must be recognised, however, that the practice of clinical governance


at service level - clinical teams analysing and assessing the quality of
their services and seeking ways to improve them - will be a multi-
disciplinary and often also a multi-agency activity'.

The picture presented above captures a flavour of CQI at the operating core; groups

focused on processes in an effort to improve them. What is also needed is a vision of

CQI at the corporate and middle levels which brings the whole organisation at each

level of the hierarchy into alignment and thus creates an environment so that the front

line may, indeed, operate in the way described above. This notion of alignment is

emphasised throughout the TQM and the change management literature and plays an

important role in ensuring the consistency needed for a successful change process

(Miles, 1997; Pendlebury, Grouard and Meston, 1998). For instance, in one division

of the Trust, the vehicle for quality improvement had historically been the standard

setting process and there was a strong desire locally to re-establish the standard

setting groups. However, this may not be an appropriate way forward in the absence

of any other quality improvement structure at locality level. Joss and Kogan (1995)

recommended that standard setting should only be part of a broader CQI approach.

Their rationale is based on the observation that standards tend to be set but neither

audited in terms of compliance nor revisited/revised in a timely manner - hardly

consistent with the dynamic nature of CQI.

The lack of a central blueprint for clinical governance has meant that Trusts have

been left to interpret the concept for local application and the action set outlined in

the Trust Development Plan reflects a vision of clinical governance derived in this

manner. In its present form, the Development Plan is not explicitly geared to deliver

a whole system approach to CQI and is therefore unlikely to achieve this with the

present focus. What is evident from the case study is that some of the quality

assurance elements are being put in place (reporting mechanisms etc); nevertheless,

whilst a CQI model may yet emerge, this was not apparent during the fieldwork

period.

Total Quality Management requires a strategic approach to quality management

(Walsh, 1995); for TQM to be considered a priority and receive appropriate funding,

CQI must be part of the corporate plan (Davis, 1997). Both of these statements might

be considered equally applicable to the implementation of clinical governance. In

addition, there needs to be a focus on the core processes which constitute the business

of health care and an ability to address both positive and negative quality. Positive

quality is regarded as proactive and is about adding value in the absence of any

identified problem. Negative quality is seen as reactive. It is usually a response to

complaints, incidents etc, and in this way focuses on dealing with the cost of poor

quality - that is putting right what should have been right first time (Zairi, 1994).

The strategic approach to total quality must recognise and deliberately address both of

these elements; unless there is a commitment to strive towards positive quality and

CQI, an organisation may find itself trapped in a reactive spiral of fire-fighting and

problem orientation.

Whilst the design and development of strategy is important, implementation is

everything. Without the implementation process, plans stay as words on paper and it

is often the process of getting them off the paper which is the most problematic

(Glover, 1993). As Sproull and Hofmeister (1986) commented, implementation is not

sexy, it is about the nuts and bolts of getting something in place. Although the Trust

Development Plan outlined an ambitious action set of 39 objectives (36 of which

were either ongoing or to be achieved in year one), it did not appear to be a living

document within the divisions even 12 months after its publication. Indeed, only two

out of six divisions had any form of local plan of action for clinical governance

despite there being some very explicit objectives specifically for these areas of the

Trust. Rather than a sense of dynamic goal deployment, there was evidence of a

general implementation drift having occurred despite a number of important

initiatives having been put in place (Chapters 6-8).

In reality, although milestones indicated a timescale for the achievement of the

objectives, there was little in the way of an explicit emphasis on the main priorities.

Although progress against the Development Plan was reported to the Clinical

Governance Sub-committee, the implementation drift continued throughout the first

12 months at least. Also, despite the fact that many of the objectives had resource

implications, the Plan was not explicitly costed.

In essence, despite the scale and scope of the implementation process, a project

management approach was not adopted; in fact there was even a sense that this would

be contrary to the prevailing culture. Although clinical governance is not a discrete

project, there are elements of the implementation process that clearly are. Several

TQM practitioners strongly advocate the adoption of a project management

methodology (Dale and Cooper, 1994; Stamatis, 1994). Given the stress placed upon

the active management of change and total quality (Oakland, 1995; Kotter, 1996), it

could be argued that project management could also provide a valuable underlying

structure for clinical governance implementation in that each aspect of the

management process is emphasised.

A project management approach may have helped this Trust in a number of ways. In

addition to identifying objectives, costing these would have likely forced explicit

prioritisation; although, ultimately quality might be free (Crosby, 1979), it is

recognised that the introduction of CQI involves set up costs (Porter and Parker,

1993; Joss and Kogan, 1995) which need to be resourced appropriately. The

allocation or re-allocation of resources is considered to be a powerful signal of what

the organisation considers important (Miles, 1997). Whilst the Trust did invest in

terms of the executive lead post, it failed to allocate specific resources at the outset

for the Clinical Governance Development Team. This delayed its formation for

almost two years despite the fact that it is generally advocated that the facilitation of

large-scale change generally needs to commence at the outset and remain ongoing

(Joss and Kogan, 1995; Miles, 1997; Pendlebury, Grouard and Meston, 1998).

The need to allocate resources could also have eased the process of identifying the

'critical few' in terms of objectives and thus facilitated a move to phasing

implementation rather than having a large number of objectives to be

achieved/commenced in year one - as Miles (1997, p48) comments 'the new internal

context of the organisation does not emerge in a moment of cosmic creation'. The

Department of Health has also recognised that phased implementation would be

required in accordance with the resources available (Department of Health, 1999); an

approach which receives support from the wider TQM literature (Yusof and

Aspinwall, 2000a). In this way, the Trust could have produced some tangible, early

'wins' which included positive quality (Miles, 1997; Pendlebury, Grouard and Meston,

1998); thereby emphasising the proactive aspects and reinforcing the belief that

clinical governance is achievable rather than something so big as to almost induce the

sort of management paralysis that can be associated with TQM (Dale, 1994). Also,

the focusing of resources for delivery could also serve to emphasise the commitment

of senior management to the clinical governance objectives. Contrast this approach

with the reality of clinical supervision in the Primary Care Division where some staff

reported problems attending because of clinical duties. Where situations such as

these remain unresolved, there is a risk that the organisation inadvertently sends a

signal to staff that clinical supervision is not a high priority for the Trust despite its

continued encouragement of staff to take part.

According to Yusof and Aspinwall (2000a), one of the most influential factors in the

successful implementation of total quality is the development of a sound

implementation plan before embarking on the change process. The potential benefits

of utilising a model or framework to guide implementation have been discussed in the

earlier chapters of this thesis. Motwani and colleagues (Motwani, Sower and Bashier,

1996) highlight the fact that much of the implementation of total quality in health care

is not based on any specific implementation guidelines 'except for those in directives'.

The authors (ibid) consider the use of a model/framework to be a necessity if the

effectiveness of the implementation effort is to be improved. The use of an

overarching implementation framework such as the one developed by Miles (1997)

could serve to make explicit the elements of both 'what' and 'how' and in this way

force managers to address issues that might be avoided either deliberately or through

oversight (Yusof and Aspinwall, 2000a). Certainly, utilisation of the Miles

framework (1997) would have highlighted to the Trust the gaps in what Miles has

termed the 'process architecture' (Appendix 2).

The elements which constitute the process architecture are apparently quite

commonly overlooked; both the TQM and the change management literatures abound

with examples of this. Unfortunately, these elements are considered to be the ones

that will 'orchestrate' the transition from the present to the future state. A consistent

message amongst practitioners of change management is the need to take a holistic

approach and omitting elements will have negative results (Kotter, 1996; Miles, 1997;

Pendlebury, Grouard and Meston, 1998). This tends to support the argument for a

well-formulated implementation framework albeit one that serves as a guide rather

than a prescription.

9.2.3 Comprehensive Baseline Assessment of Service Quality

Joss and Kogan (1995) found that few of the TQM pilot sites had undertaken what

they described as pre-implementation diagnostics; this led to a lack of clarity about

the exact starting position of the organisations in relation to TQM. The lack of any

clear view of the quality systems already in place obviously had implications for

planning and, in particular, decisions about what needed to be reinforced/changed in

order to implement the initiative. The subsequent measurement of progress was also

impeded by this general failure to obtain an accurate picture of the pre-change

position.

The omissions described above are apparently not uncommon and a number of

authors have commented on the propensity for managers to either by-pass or only

address superficially this important activity; alluding to a preference for action rather

than careful diagnostics and planning (Dale and Cooper, 1994; Anderson and

Ackerman-Anderson, 2001). Interestingly, the guidance on clinical governance

issued by the Department of Health (1999) included the need for a baseline

assessment in its 'must do' list. The guidance also provided a clear indication of the

areas that Trusts need to address as part of this assessment: the effectiveness and

integration of existing systems for quality improvement (such as clinical audit), the

quality of existing data for monitoring quality activity, the identification of

problematic services, the degree to which existing strategies (HR, IM&T etc) support

the clinical governance agenda. In addition, it was also made clear (Department of

Health, 1999, p17) that the findings of the assessment should be shared within the

organisation:

'The baseline assessment should let the whole organisation see what it is
good at, what it is less good at, and the areas needing to be developed'.

So, it seems from the above, not only is the baseline assessment required for planning

purposes but the findings should also be communicated to a wider audience than

those involved directly in the development process. Although it was reported that the

Trust had undertaken a baseline assessment, it proved difficult to determine what had

been included in this process, how it had been conducted and the findings in detail.

Although the 'Clinical Governance Report' made a broad reference to a number of

areas that needed attention, the findings were not presented in any detail in either of

the two key Trust documents; neither, according to the minutes of the meetings

obtained by the researcher, does this appear to have been discussed in depth at the

Trust Board. Thus, although the Trust Development Plan presents the objectives to

be achieved, it is not possible to link these directly to the current state of the areas

prescribed within the guidance (Department of Health, 1999).

This lack of detail does not just have implications for the planning and monitoring

processes but is also likely to have implications for other aspects of implementation.

Kotter (1996) describes the need to create a sense of urgency whilst Miles (1997)

talks instead about confronting reality. Each approach is intended to increase the

ability of the organisation to create the energy needed to make the transition from the

present to the desired future state. However, the absence of a detailed baseline

assessment or any formal external benchmarking did not seem to affect the sense of

urgency of the Clinical Governance Sub-committee, at least at the outset. Although

the risk of paralysis when faced with the notion of implementing a whole system

approach to quality improvement is well documented (Dale, 1994), the Sub­

committee was very keen to make a start. Whether the level of baseline information

available was sufficient for the Sub-committee to fully appreciate the reality of its

starting position is perhaps an issue for the Trust to consider. Given a clearer picture

of the effectiveness and coverage of existing systems such as clinical audit, risk

management, appraisal, and perhaps knowledge of the work taking place in other

Trusts, the strategy may have looked rather different. The Sub-committee may have

decided to focus initial efforts on addressing gaps in the existing systems in an effort

to provide a solid foundation for further development rather than setting up new

processes within these systems as was generally the case.

In the earlier review of the clinical governance literature (Chapter 2), details of one

Trust's devolved approach to baseline assessment was briefly outlined (Holland and

Fennell, 2000). In this case, directorates across the Trust undertook a self-assessment

against a selection of pre-determined criteria. This informed local clinical

governance action plans and also the corporate plan. Locally, it focused attention on

the issues to be addressed and, in the authors' view (ibid), this process fostered local

ownership of the objectives. It is unclear how the divisions within the Emerald Trust

were involved in the baseline assessment but, almost 12 months after the appearance

of the Development Plan, the rapid appraisal found that none had undertaken an

assessment of the effectiveness of their existing systems. Although, when eventually

produced, the Primary Care Division Plan included objectives relating to existing

systems, these were still not based upon a systematic review and this omission may

have contributed to the rather fragmented nature of the Divisional Plan.

Another important reason for undertaking the baselining work referred to here is the

need to make a careful assessment of the existing organisational capacity to pursue

the intended change. Miles (1997) argues that organisations that find themselves with

a high level of resource and a high capacity for change are in the minority and those

with low resources and low capacity should not embark on the change unless this

situation can be successfully remedied. Spurgeon (1999) makes a similar point and

observes that many health care organisations are not in a position to undertake

transformational change and asks whether commercial organisations facing similar

circumstances would embark on this; the author (ibid) concludes that they would not.

However, in the case of clinical governance, its status as statutory duty essentially

means that implementation is not optional and the establishment of the Commission

for Health Improvement to review progress reinforces this message. Where capacity

and resources are not high, Trusts will need to consider such issues carefully and take

a deliberate decision regarding prioritisation and phasing in order to increase the

likelihood of successful implementation. Related to this theme of 'first things first',

one of the areas which normally receives early attention during organisational change

in general (Kotter, 1996; Miles, 1997) and clinical governance in particular is

structure (Latham, Freeman, Walshe et al, 2000) and this will now be considered.

9.2.4 A Structure to Oversee Implementation

Structure is defined by Miles (1997, p36) as: 'the formal structural arrangements of

the organisation that delineate its basic units of authority and accountability'.

Expressed in another way, the structure provides a framework of order and command

through which the process of management in terms of planning, organising, directing

and controlling may be applied (Mullins, 1999). Thus, structure is regarded as an

important design feature in the implementation of transformational change generally

(Miles, 1997) and quality improvement specifically (Oakland, 1995). The TQM

experiment demonstrated that, in Trusts with no structure below corporate level,

implementation was observed to be poor (Joss and Kogan, 1995).

An important consideration in the implementation of total quality is whether the

structural design should reflect a 'shadow' or 'line' structure. Neither choice is

apparently without its problems. A shadow structure risks creating/reinforcing the

perception of quality as the responsibility of the quality function; however, where the

responsibility for implementation rests within the existing line management

arrangements, there is a risk that the quality agenda will be displaced by day to day

activities (Dale and Cooper, 1994). Joss and Kogan (1995) found that where

implementation was left to the line managers, there tended to be less progress

achieved. Consequently, the authors (ibid) recommend that a shadow structure

should be established initially but with a view to integration within the line as the

initiative becomes more established operationally. Irrespective of whether the

initiative starts with a shadow or line arrangement, there is support for the proposition

that responsibility for the management of quality should rest within the line

management remit (Brown, 1993; Davis, 1997).

Both of the issues raised above were apparent in the Emerald Trust. There was a

perception that the Clinical Governance Lead was responsible for driving the agenda.

The corollary of this seems to have been that several of the areas which did not

receive her personal attention did not necessarily move forward at the same pace as

others. Also, the lack of a forum in one division corresponded to a lack of specific

clinical governance activity in this area. Although managers locally were aware of the

clinical governance agenda, until the Forum was established, there did not appear to

be a vehicle to take it forward. In the absence of a specific structure for clinical

governance, there was no evidence that the existing line management arrangements

naturally incorporated the requirement to implement this initiative.

One important gap that needs to be addressed by the Trust is the lack of a quality

structure below the divisional fora. This may take the form of process improvement

teams, quality improvement teams or quality circles (Oakland, 1995) and, given their

absence at the case site, one must question how CQI is to be operationalised by

clinical teams. Although, as discussed earlier, this is not evident in the Trust vision

for clinical governance, it is an explicit feature of the Department of Health thinking

and highlighted in the guidance (Department of Health, 1999). Without a vehicle for

improvement, initiatives at the front line risk being piecemeal (Walsh, 1995) and/or

failing to reflect corporate objectives, which may ultimately limit the chances of

resources being released to ensure success.

In reality, the clinical governance structure at the Emerald Trust appears to be a mixture

of shadow and line. There are explicit forums for clinical governance at the corporate

level (the Clinical Governance Sub-committee) and at division level (the divisional

fora); between these two levels, there is the line function in the form of the

Management Team. Although it was implied in the Development Plan that

Divisional Managers have a key role to play in the implementation of clinical

governance, it was not explicitly stated that Management Team would be the main

vehicle through which this agenda would be operationalised - it was merely assumed

that this would take place. In reality, the extent to which the Management Team as

individuals have taken the Development Plan forward has been variable. Although

clinical governance issues are taken to Management Team by the Clinical

Governance Lead, the minutes of these meetings do not reflect a collective

operational responsibility for managing the change process associated with clinical

governance. Thus, under the current arrangements, a lack of clarity around the lines

of accountability and authority at divisional management level and below appears to

have created something of an operational vacuum.

The structural gaps described above have implications for all aspects of the

management process. The minutes of the Clinical Governance Sub-committee are not

routinely circulated to the Management Team and, until the chairs of the divisional

fora became members of the Sub-committee, the minutes were not routinely received

by the divisional groups either. This gap has important implications for the

effectiveness of quality policy deployment and thus the likelihood of goal congruence

and alignment (Krishnan, Shani, Grant et al, 1993; Zairi, 1994).

In addition to the above, there is also the question of accountability and control, both

of which are important core elements of clinical governance. Although the Clinical

Governance Lead has a clear overview of the progress of clinical governance

implementation throughout the Trust, this is not actively monitored by Management

Team and consequently this function rests with the Clinical Governance Sub-

committee and ultimately the Trust Board. However, both the Sub-committee and the

Trust Board are presented with a mixture of detailed operational data and information

relating to performance against the Development Plan which tends to be stated in

broad terms reflecting the way in which the initial objectives were formulated.

To enable the Clinical Governance Sub-committee to perform this monitoring and

control function and for the Trust Board to be assured that this is being carried out, it

would seem that the form and content of the quarterly reports may benefit from

further consideration in terms of appropriateness and usefulness. Information is an

important determinant of the effectiveness of Trust Boards and finding the right

balance of providing enough detail to enable members to take a proactive approach

without 'drowning' them in data is recognised as an on-going challenge (Audit

Commission, 1995).

Data as opposed to information can obscure rather than enhance the picture presented.

The reporting mechanism currently in place does not seem to have led either the Trust

Board or the Clinical Governance Sub-committee to challenge the fact that one of the

divisions had not established a forum until 15 months after the publication of the

Development Plan or to question the presence of a general implementation drift. This

may be partly accounted for by the quality of the information but there is also an issue

of the nature of the control system in place to deal with any sub/non-implementation.

Where there was a lack of action, the Clinical Governance Lead approached this on a

one to one basis and, in this way, corrective action appeared to rely on her influencing

skills rather than any formal control mechanism. Given the centrality of

accountability to the notion of clinical governance, this is an area which the Trust

perhaps needs to address. Why this lack of a formal control function is allowed to

continue is unclear. This is not just a feature of the executive level but relates to

management at all levels within the hierarchy. As was the case in the earlier

TQM experiment (Joss, Kogan and Henkel, 1994; Joss and Kogan, 1995), the

individual objectives for managers within the Emerald Trust did not contain specific

objectives relating to the clinical governance agenda other than perhaps the need to

undertake appraisal of subordinate staff.

Perhaps the above is a manifestation of the high trust culture operating within the

senior management tiers. Trust is an important element in all organisations but it is

especially so where there is a high level of professional judgement required which is

often the case in health care. Davies and Mannion (1999) argue that the

establishment of trust and control is not an either-or decision. Trust without a control

function may lead to the creation/maintenance of a dependent relationship between

the organisation and those who will deliver the services. Instead, the authors (ibid)

suggest that the challenge lies in 'finding the balance between checking and trusting'.

The advent of clinical governance and corporate accountability for quality means that

assuming certain action is taking place is no longer enough; systems need to be in

place that will not only demonstrate this is so but also that the appropriate remedial

action has been taken where necessary.

Given the scale and scope of change which has accompanied the 1997 White Paper

(Department of Health) and the NHS Plan (Department of Health, 2000), it is

reassuring to see the emphasis that is being placed on effective leadership and the

action being taken to make this available to the NHS. Whilst acknowledging the

importance of leadership, there are those who are also cautioning that management

must not be forgotten in the process (Spurgeon and Latham, 2003). Neither total

quality nor change will 'just happen'; both must be actively managed (Oakland, 1995;

Kotter, 1996). Kotter (ibid) suggests that the leadership/management notion is a

both/and situation rather than either/or, having argued in an earlier publication that

leadership and management are different conceptually and in practice (1990).

Structure plays an important role in the effectiveness or otherwise of both leadership

and management. Drucker (1989, p223) argues that a good structure does not

guarantee a positive performance but a poor structure 'makes good performance

impossible' irrespective of how good individual managers may be.

Having considered the issues surrounding structure and clinical governance in the

Emerald Trust, the next section will continue with a similar theme - namely the

establishment of a vehicle for co-ordination and facilitation.

9.2.5 Strong/Persevering Co-ordinator - Board-level Appointment

Joss and Kogan (1995) found that most of the Trusts in their study had appointed a

manager or co-ordinator to take the total quality initiative forward but generally these

posts were set too low within the hierarchy. In view of this, the authors (ibid)

recommended that this post should be a board-level appointment - a recommendation

which is clearly reflected in the Department of Health requirements for clinical

governance (1998).

Earlier research (Latham, Freeman, Walshe et al, 2000) found that the post of clinical

governance lead tended to be a jointly held appointment of the medical/nurse

directors and that the majority had little or no dedicated time to carry out this role. In

contrast to the above, the Emerald Trust created the executive post of Director of

Clinical Governance which has meant that most of the Lead's responsibilities are

concerned with the discharge of this wide-ranging remit. This particular director, in

addition to a clinical background, has also played a strong operational management

role within the Trust and seemed to be perceived as a credible lead throughout the

organisation.

The appointment of an executive clinical governance lead was regarded at the

corporate level as a powerful signal of its commitment to the clinical governance

agenda; however, as in the case of TQM, this has not eased the process of translating

corporate commitment into commitment and ownership locally. Although there was

no evidence of managers breathing 'a sigh of relief because there was no longer a

need to worry about this quality stuff' (Brown, 1993), there was certainly a variability

in the proactiveness of their approach - and some evidence that this was due to the

fact that the Clinical Governance Lead seemed to be on top of the agenda. Whilst having

provided a strategic lead in terms of clinical governance, the Lead was nevertheless

becoming increasingly embroiled in operational detail. The Lead took forward all of

the Significant Clinical Incident Reviews and, as Chair of the Risk Management

Team, was increasingly leading on risk management issues. A number of elements

relating to the implementation process could be contributing to this increasing

entanglement with the operational side of implementation and these will now be

considered.

Firstly, against each of the objectives outlined in the Development Plan there is a

schedule of responsibility; however, most of these are set against multiple 'key

players'. Identifying each person/group with a responsibility is important but seems

to have left the Clinical Governance Lead with the task of co-ordinating the activity

involved in reaching each of these goals across all of the six divisions. Alternatively,

in addition to each individual manager being assigned responsibility for the local

implementation of each objective, a Divisional Manager could have been identified as

co-ordinator for several objectives with the remit to take these forward across the

divisions. The Clinical Governance Lead would then have been free to retain an

overview and influence progress. At the same time, the Lead would be able to

devolve some responsibility, directly engage the Divisional Managers and streamline

the implementation and monitoring of the process.

Secondly, in the absence of an operational group to support the clinical governance

implementation process, there was little opportunity for the Clinical Governance Lead

to delegate. The issue of delegation related not only to the objectives discussed above

but also to those activities that would normally be the remit of more junior staff such

as the preparation of agendas, responsibility for minutes and the general servicing of

the various committees. In addition, the lack of a comprehensive risk management

function led to a situation where the Clinical Governance Lead increasingly assumed

responsibility for operational aspects of risk management almost by default.

Thirdly, a related but rather different issue is that of line management. In the absence

of a direct line of management to those tasked with the implementation of the

Development Plan, the role of the Clinical Governance Lead was inevitably one of

influencing rather than directly managing. This is consistent with the sort of 'shadow'

arrangements that are evident in certain models of total quality (Oakland, 1995) and

change management (Pendlebury, Grouard and Meston, 1998) initiatives; however, in

the absence of strong monitoring and control mechanisms, this could, as perhaps in

this case, contribute to a degree of implementation drift.

Finally, the provision of facilitation and support are important factors in the effective

implementation of total quality (Glover, 1993; Rand, 1994; Oakland, 1995) and large-

scale change generally (Miles, 1997; Pendlebury, Grouard and Meston, 1998). It is

important that this is available throughout the initiative and particularly in the early

stages when there is the risk of managerial paralysis (Whalen and Rhamin, 1994;

Dale, 1994). This may be due to a limited understanding of the requirements of

quality management (Dale and Cooper, 1994; Yong and Wilkinson, 1999), or simply

because of what some consider to be 'a fact of life' - that employees will give more

attention to the activities for which they are actively called to account (Dale and

Cooper, 1994). Whilst some managers were able to pick up the clinical governance

agenda and start taking it forward, others found this more problematic and did not

move forward until they had received assistance.

Pendlebury and colleagues (Pendlebury, Grouard and Meston, 1998) suggest that

facilitators may play a valuable role as catalysts and in the Emerald Trust they could

probably have done much to clarify issues of local responsibility and accountability.

Through the provision of content and process expertise, the Clinical Governance

Development Team might also have helped in the translation and implementation of

the corporate objectives into the divisional setting. Accompanied by stronger

collective monitoring and control at the corporate level, the spotlight would thus have

focused very clearly on the responsibility of the divisions for delivering the clinical

governance agenda.

The Trust recognised, early on, that the divisions would need assistance - some more

than others. However, it was to be almost two years before the Clinical Governance

Development Team was fully established. During this time, some of the facilitators

made positive contributions to the divisions; other part-time members found their role

rather more difficult to discharge. In this, their experience was similar to that of

others elsewhere (Joss and Kogan, 1995); trying to work for the Team part-time,

members found that this activity was generally displaced by the demands of their

other role. This situation may have been avoided if, as suggested earlier, explicit

priorities had been set at the outset and funding allocated to ensure that the facilitation

had been available.

It is worth noting that the appointment of a facilitator to the Primary Care Division

brought immediate, observable added value to the implementation process. Amongst

other things, the Clinical Governance Facilitator ensured that the discussions within

the local Forum resulted in clearly minuted action points with deadlines for

completion and the identification of a member of the group as lead/manager

responsible. In addition, there was the creation of working groups to explore issues

outside of the meeting and a requirement to report back to the Forum at a later date.

This approach meant that issues that could not be addressed immediately would

remain within the remit of the Forum rather than 'going over the wall' to another

group such as Clinical Leaders and risk falling into the 'white space' between the

various groups.

The Trust's appointment of a Clinical Governance Lead to such a senior post is to be

applauded; however, the Trust will need to address the current operational gap so that

the Executive Director may continue to take a strategic view and play a leadership

role. This will be essential in order to avoid the initiative degenerating into a 'list of

confusing projects' (Yong and Wilkinson, 1999) and/or the Executive Lead becoming

submerged by operational and administrative issues that essentially could and should

be delegated elsewhere.

9.2.6 Early Involvement of Clinicians

Over a decade ago, Berwick (1989) warned:

'Quality improvement has little chance of success in health
organisations without the understanding, the participation, and in many
cases the leadership of individual doctors'.

Joss and Kogan (1995) remarked on what was considered to be a significant lack of

involvement of consultant medical staff in the pilots which they described as a

'serious blow to the credibility of TQM' (ibid, p107). In the Emerald Trust, the

largest group of medics were employed in mental health and learning disabilities and

both specialties were represented by a consultant on the Clinical Governance Sub-

committee.

Although some might see the medical consultant as the key stakeholder in terms of

the clinical governance agenda in an acute unit (Hackett and Spurgeon, 1999), the

sheer diversity of the clinical workforce in a combined trust such as Emerald perhaps

challenges this notion in this context. It is interesting that, apart from the work on

consultant appraisal that was being taken forward by the Medical Director, the

initiatives highlighted in Chapters 6 and 7 do not reflect any significant corporate

engagement with the medics as a specific target group. Whether this has had any

impact on the progress of the mental health and learning disability divisions was not

explored explicitly as the focus shifted to the Primary Care Division which employs

few medics.

A review of the implementation activity outlined in earlier chapters suggests that

apart from the early awareness raising sessions and the risk management workshop

much later in the process, there was little in the way of formal initiatives undertaken

to engage clinicians in the clinical governance agenda. Whilst new approaches such

as Significant Clinical Incident Review had been introduced, a number of managers

felt that staff did not see this as part of the Trust clinical governance approach per se.

In fact, some of the managers did not see that the recommendations arising from these

reviews could have any relevance in areas other than where the incident actually

occurred; this was not necessarily the case.

Interestingly, the conflict between business and clinical approaches to quality

described by Pollit (1996) was not apparent; perhaps this is not surprising because, as

suggested earlier, the Trust vision of clinical governance had not been conceptualised

as TQM and CQI. Instead, the main tools for its operationalisation related to clinical

audit, risk management and so on rather than cross-departmental/divisional quality

improvement teams using specific process improvement methodologies. Certainly, in

the Primary Care Division, there did not appear to be any vehicle to take forward CQI

in a systematic manner and, unfortunately, although improvements to services were

taking place, this did not appear to be the result of a systematic approach to quality

improvement but often the outcome of individual interest and effort - a situation also

found at many of the TQM pilot sites (Joss, Kogan and Henkel, 1994; Joss and

Kogan, 1995). In fact, there was a sense of 'business as usual' amongst interviewees

at the front line of service delivery.

Yong and Wilkinson (1999) found that confusion over what TQM meant in practical

terms was an important barrier to implementation. Any assumption that the Trust's

clinical governance Development Plan was diffusing naturally towards the front line

staff seems to have been unfounded. In reality, although some of the clinical staff

interviewed had a good grasp of clinical governance in principle, others demonstrated

the sort of confusion found by the authors above (ibid). In the absence of an

organisation-wide training initiative, it is difficult to see how significant commitment

and involvement will be secured from front line staff.

9.2.7 Comprehensive Training

According to Oakland (1995, p26), the message for implementation is 'train, train,

train, train and train again' and the importance of education and training to the

effectiveness of implementation is echoed by others. Although there appears to be

some consensus that wholesale awareness raising is an important first step (Davis,

1997), more specific training in terms of both knowledge and skills needs to be

targeted at teams that will work together to solve real quality problems (Mann and

Kehoe, 1995). In the absence of this 'just in time' approach, there is a risk that training

and education will take place in a vacuum; the knowledge gained quickly dissipating

unless employees are able to utilise this directly and become involved in quality

improvement activity (Dale and Cooper, 1994).

Joss and Kogan (1995) recommend a mixture of classroom education and workplace

application and this is now the approach adopted by the recently formed national

Clinical Governance Support Unit. The TQM experiment (ibid) highlighted a

significant difference between the amount of training provided at NHS and

commercial sites. The experience of the latter demonstrated that the provision of

adequate training required a serious commitment in terms of funding so that

appropriate trainers could be engaged and also so that staff may be released from

daily responsibilities. In practice, few of the pilot Trusts progressed beyond

awareness raising; apart from the opportunistic, localised contacts of members of the

Clinical Governance Development Team and the work around appraisal, this was also

the case at the Emerald Trust.

The provision of the necessary knowledge and skills to deliver the clinical

governance agenda is essential. The complexity of the change means that a wide

range of issues need to be addressed from the principles of clinical governance

through to the tools to deliver this agenda. The latter includes both the theoretical

and technical aspects of existing systems such as clinical audit and risk management

through to change management, team dynamics and interpersonal skills. Any

programme should be based on a thorough assessment of the education and training

needs of staff. As Firth-Cozens (1999) discovered, these requirements will inevitably

vary; it is, therefore, not prudent to assume that even senior clinicians in management

will have the same skills as senior managers without a clinical background - they

have inevitably followed a different route to the top.

Neither should it be assumed that, because systems such as risk management and

clinical audit are not new to the NHS, clinicians have the skills to undertake this

activity. This current study has highlighted this as a gap in certain areas of the Trust;

however, it has been interesting to note how the increased attention being given to

clinical audit by the clinical governance agenda has led to calls for training from front

line clinicians in the Trust. Similarly the increased corporate focus on appraisal has

meant that managers must now appraise all staff and this requirement has been

incorporated into their individual objectives. However, the managers must

demonstrate that they have attended training before undertaking this activity which

has led to a considerable increase in demand for training in this area - so much so that

it is proving difficult for the Trust to meet the demand in a timely manner. In

contrast, courses in clinical incident reporting, not part of the IPR, are being cancelled

because of lack of uptake. This seems to add some weight to the adage 'what gets

measured gets attention' and highlights the need for training and development to be

linked to a robust system of individual performance review; this is yet to become

established on a Trust-wide basis.

The earlier discussion around structural issues touched on the importance of

information flows to the effectiveness of teams/groups but, as alluded to above, issues

around knowledge and skills, team dynamics and so on also play an important role.

In reality, effective teams/groups do not 'just happen' and deliberate effort needs to be

directed at facilitating this process (Dale and Cooper, 1994). Members of committees

do not automatically have the experience and/or skills to add value in these arenas

(Katz, 1993). Despite this, there is a perception amongst some interviewees that they

have been left 'to get on with it' and this seems borne out by the fact that no formal

development work has taken place with new or existing groups either as collectives or

with individual members. This is something that the Trust needs to reconsider given

the number of new groups associated with clinical governance, the existing groups

such as the Trust Board and Management Team which have, in theory, taken on new

responsibilities, and the fact that many of the junior managers in the Primary Care

Division are new to management per se.

From Trust Board through to divisional fora there appears to be a lack of clarity

around a variety of key issues such as roles, responsibilities, accountability and

authority in relation to the clinical governance agenda. This may have serious

implications for the effectiveness of these groups. For instance, it has been suggested

earlier in this thesis that the Trust Board minutes do not provide evidence of a robust

discussion of the reports. In addition, the reports received are largely based on

routinely available data rather than the Trust Board having taken time out to decide

what information is needed to discharge its statutory duty. Until the appointment of

the facilitator to the Primary Care Division, its Forum appeared to lack direction.

Although the group had developed Terms of Reference, these did not address the sort

of issues just described. Broad based objectives had been outlined and yet the group

did not go on to develop a work programme to ensure delivery. Neither did the

Forum identify the current information systems that should report into the group or

clarify arrangements for reporting to others (upwards, downwards or laterally within

the organisational hierarchy) on a regular basis.

Essentially, it did not appear that any of the groups discussed here took time out to

work through the elements that need to be addressed deliberately to ensure

effectiveness - whether this occurred in the other five divisional groups has not been

explored. This is an important activity for all levels of the clinical governance

hierarchy: the Trust Board is ultimately accountable for quality; the Clinical

Governance Sub-committee is responsible for steering the initiative and also for

fulfilling an assurance function; the Management Team is meant to be translating the

corporate objectives into local objectives and the purpose of the divisional fora is to

operationalise the clinical governance agenda at the front line of service delivery.

When one considers the education and training needed to lay the foundations of

effectiveness for each of these groups, this provides some indication of the likely

commitment that is needed from the organisation and challenges the notion that

clinical governance is likely to be cost neutral. It also highlights the fact that any

education and training programme needs to be carefully developed and planned,

based on needs assessment, phased and adequately resourced both financially and in

terms of appropriately experienced personnel.

The need for training was recognised in the Development Plan as was the need for the

Training and Development Group to develop a corporate training strategy in support

of the clinical governance agenda. Unfortunately, neither objective was achieved

during the research period although the Trust's experience in this respect appears to

echo that of the TQM pilot sites (Joss, Kogan and Henkel, 1994; Joss and Kogan,

1995) and also the findings of others (Katz, 1993; Davis, 1997). Awareness and

understanding are important foundations for building commitment and involvement

within the organisation. Unless the Trust takes a proactive approach to the education

and training of employees, it may find itself having to go 'back to the drawing board'

(Kanji and Barker, 1990) at a future point in time just to ensure people have even the

basic knowledge and skills. If these elements are essential ingredients for successful

implementation, it would seem appropriate for the necessary investment in staff to be

made sooner rather than later.

9.2.8 Explicit Strategy/Resources for Recognising and Rewarding Progress

The development of mechanisms to ensure the recognition and reward of initiatives

that demonstrate both the application of total quality principles and the improvements

achieved through this process is a central feature of the TQM literature, albeit one to

which Deming (1986) does not appear to subscribe. On one level, recognition and

reward may serve as an important mechanism for rewarding achievement; at another

level, rewarding certain behaviour over others is a powerful signal of what the

organisation regards as valuable (Miles, 1997).

In practice, this is a rather complex area if one differentiates between negative and

positive quality. The former is improvement through the correction of problems,

errors and service failures; the latter is improvement which constitutes added value

(Zairi, 1994). Initiatives such as the Significant Clinical Incident Reviews are aimed

at addressing negative quality which has very different connotations and how this is

recognised needs to be handled with care. As interviews with front line staff have

suggested, it should not be assumed that the openness at the corporate level which

surrounds Significant Clinical Incidents or complaints is as evident as one goes

further down the hierarchy and closer to those individuals and groups directly

associated with the incidents themselves. Some report that there is still a desire to

keep such occurrences in-house and even within the group as opposed to any wider

dissemination within the division. The Trust is approaching this issue by trying to

emphasise the learning that has arisen from investigation of the incident but it

inevitably remains a process that puts poor quality and sometimes individuals under

the spotlight which challenges somewhat the notions of recognition and reward.

9.2.9 Organisational Changes after Evaluation

Although certain sites developed metrics in terms of TQM activity, Joss and Kogan

(1995) found that few of the TQM pilot sites had undertaken any formal evaluation of

the implementation process itself. The Emerald Trust received a written quarterly

report of progress against specific Development Plan objectives. This seemed to

provide a comforting sense of progress which, one could argue, was partly the result

of some senior managers lacking a clear appreciation of the starting point; this, in

turn, seems to have almost obscured the true scale and scope of the task that

lay before them. Apart from the two formal feedback points which had been

deliberately incorporated into the research process, there were no other formal

collective evaluation points scheduled to enable a review of the implementation

process as a whole. Inclusion of an internal formal evaluation might have allowed

organisational members to identify for themselves gaps in vital areas such as

organisation and individual development, the lack of a project management approach

and so on. Alternatively, without the benefit of external facilitation, it is possible that

the prevailing culture might have supported the status quo rather than promoted the

adoption of approaches which may have seemed different to the way implementation

is traditionally addressed by this particular organisation.

Written feedback from the research process was first provided in December 2000; the

second and final feedback was delivered in December 2001. The first report was

short and succinct and a large proportion of the recommendations focused on the need

for an implementation plan to guide the process. The second report echoed this but

also highlighted specific issues relating to the whole system that might benefit from

further attention. The recommendations of each report are included in this thesis as

Appendix 6 and 7 respectively.

Although the first report was presented to the Clinical Governance Sub-committee

and the Management Team, there did not appear to be any subsequent in-depth

discussion of the findings and neither was an action plan developed to address its

recommendations. According to Hart and Bond (1995), this is not uncommon; but, at

least in this case, the research was allowed to continue. In contrast to the first report,

the second was almost a formality in that the issues raised had already been fed back

to key individuals and groups over the preceding period. In this way, there was little,

if anything, in the final document that had not previously been brought to the

attention of the Trust prior to publication. Most of the recommendations reflected the

gaps highlighted in this and the previous three results chapters; many were

subsequently addressed in the clinical governance plan which was a central feature of

the Trust application for PCT status.

It is difficult to say whether what appeared to be a greater acceptance of the feedback

second time round was because the message was presented in a manner more in

keeping with the Trust culture, whether the content was more acceptable or whether

the Trust had just got used to the messenger - perhaps it was a combination of all

three elements. However, irrespective of notions of the message and messenger etc,

one message to the Trust was clear - change needs to be actively managed - even

more so in the face of the massive structural change that was about to take place as

the organisation moved towards PCT status.

9.3 KEY MESSAGES FROM THE EXPERIENCE OF EMERALD NHS

TRUST

Patton (2002) warns qualitative researchers against the temptation of making

excessive claims as to the generalisability of their results arguing that, in the past, this

has fuelled the arguments of those intent on criticising qualitative methodologies. I

have also taken note of Spurgeon (1999) who emphasised the fact that

transformational change is highly context-specific; Lane (1987) who warns that

recipes for successful implementation can seem deceptively simple and thus enticing

to the unwary; and Øvretveit (1999) who suggests that it is the principles that are more

likely to be transferable rather than specific programmes of quality improvement.

Given the insights cited above, it is with extreme caution that I approach the task of

identifying 'lessons' from this case study. My aim has been to provide enough 'rich

description' of one Trust's clinical governance journey to enable the reader to

compare Emerald with their own experience or the experience of other authors and/or

to make their own evaluation if minded to do so. Nevertheless, I would like to offer a

number of observations arising from this case study that will certainly inform my own

future work in the area of change management in general and clinical governance in

particular.

9.3.1 Learning from the Past

The research completed almost a decade ago into the implementation of TQM in the

NHS (Joss, Kogan and Henkel, 1994; Joss and Kogan, 1995) and in Norway

(Øvretveit, 1999; Øvretveit and Aslaksen, 1999) has provided valuable information

about the challenges faced by complex health care organisations as they attempt to

introduce a whole systems approach to quality management and improvement. It is

positive to see that certain key recommendations arising out of the earlier work in the

UK are reflected in the more recent thinking around clinical governance (Department

of Health, 1998; 1999). In addition, insights from these earlier research projects

have also informed this most recent case study into the implementation of clinical

governance.

The factors outlined in Joss and Kogan's framework (1995) (Table 9.1) have served

as a useful heuristic for shaping the discussion contained within this chapter. This

framework has been used as an organising mechanism which has enabled me to

highlight and discuss not only the positive work undertaken by the Trust but also to

present what I consider to have been gaps in the implementation effort, both end state

and process. Deciding whether the Trust approach has been a success or failure is

beyond the scope of this research as the design was essentially formative rather than

summative. However, there is evidence of sub-implementation and even non-

implementation; the former in that the Trust did not deliver against some of its own

objectives for clinical governance implementation (training and development as an

example). An example of non-implementation was the prolonged lack of deliberate

activity around the implementation of clinical governance in one of the divisions.

Perhaps a fair assessment of the Trust approach would be to say that it has made an

important start but still has a long way to go; in that respect, the organisation probably

has much in common with other NHS Trusts.

The action research process provided the Emerald Trust with a series of

recommendations for addressing the gaps highlighted in this and previous chapters

(Appendix 6 and Appendix 7). What should perhaps be of a more wider concern is

that many of the issues that seemed to get in the way of this Trust's progress have

much in common with the numerous pitfalls identified in the general literature review

on TQM (Chapter 3) and also in the UK and Norwegian experiments (Joss and

Kogan, 1995; Øvretveit, 1999). Thus, although clinical governance might be a newer

concept, many of the challenges to successful implementation are not. With regard to

TQM, these unresolved challenges have apparently led to partial implementation

(Kolesar, 1995; Yong and Wilkinson, 1999). Consequently, the concern is that

the manner of introduction may ultimately detract from what could be achieved

through full implementation of the concept and philosophy (ibid, 1999). These

concerns are expressed in relation to TQM but it is easy to see how they may also be

applicable to the implementation of clinical governance.

Thus, it seems the NHS has much to learn from the past if it is not to succumb to the

'collective amnesia' described by Klein (1998) and risk repeating the same mistakes

as others; behaviour which some aspects of the Emerald Trust experience appear to

demonstrate only too well.

9.3.2 A Study of Implementation - A Study Of Change

Jenkins (1978, p203) states unequivocally that 'a study of implementation is a study of

change' and it would seem important that this notion is kept to the fore of any

implementation effort. Whole system change is extremely complex and there is a risk

that the central theme of change may become lost in the subsequent activity. In the

case of clinical governance, the nature of the change required for implementation is

essentially transformational in nature as, for most organisations, it is a frame-breaking

concept; nevertheless, this does not exclude incremental improvement within this

transformational framework. This realisation is important so that the scale and scope

of the change effort likely to be involved may be appreciated and the energy,

resources and the timescale needed to move from the present to the future state

accurately assessed.

The effective leadership of transformational change is undoubtedly important but

arguably not enough in itself. The creation of the vision, the shaping of values and

beliefs are essential for mobilising and sustaining the workforce but change and

quality also need to be managed, neither will just happen (Oakland, 1995; Kotter,

1996). If the objectives within the clinical governance agenda are to be achieved, the

organisation must pay attention to each element of the management process and

managers at all levels need to have the knowledge and skills (as well as the other

resources) to deliver their objectives. The demands of delivering total quality and

transformational change quickly highlight the development needs of organisations,

teams and individuals. Given the scale of the intervention required to implement

clinical governance, it would also seem that the discipline of a project management

methodology could serve as a considerable source of added value.

9.3.3 Clarifying the 'What' of Implementation

It would seem, from the early policy documentation (Department of Health, 1997,

1998, 1999), that clinical governance was not fully formed as a concept when it was

first presented to the NHS. As indicated previously, this is not uncommon and the

detail often emerges during the implementation process (Klein, 1998). However, as

Wolman (1981) has indicated, the likelihood of successful implementation is not

merely a function of the implementation process but is also influenced by factors that

originate further 'upstream' during the policy formulation process itself. Whilst it is

important to recognise the role of local context (Spurgeon, 1999), it is perhaps

worthwhile questioning whether, in the case of clinical governance, conceptual

development upstream could have been taken further before its presentation to the

NHS.

Although clinical governance resonates with the language of TQM and the goal of the

initiative is CQI, the policy documentation somehow stops short of making the link

totally explicit. Although TQM might be implied, the lack of clarity around clinical

governance as a concept leaves space for local interpretation of the concept itself

rather than limiting the scope of this interpretation to decisions relating to the

customisation of the principles. As is evident from TQM, interpretations may range

from a philosophy for running the business to a particular quality programme

(Witcher, 1995); these constitute very different ends of a spectrum which would

require very different approaches to implementation.

The Emerald Trust has conceptualised clinical governance as a vehicle for learning;

however, this is not automatically synonymous with either quality improvement per

se or the delivery of a comprehensive, integrated framework for CQI. The efforts of

the Trust during the period of the research seem to reinforce this; although the

initiatives introduced were positive steps, attempts to integrate the individual

approaches were far more problematic. This seems largely due to the Trust's

conceptualisation of clinical governance, which led to a design aimed at delivering

a vision of learning rather than CQI. In addition, designing a whole system approach

to CQI is challenging; as the earlier literature review in Chapter 3 has demonstrated,

TQM and CQI are not underpinned by a uniform set of principles nor are they

accompanied by tried and tested recipes for implementation. Thus it is unwise to

assume that managers have an in depth knowledge of TQM or CQI by virtue of their

position in management. I am able to confirm this through my own experience - it is

one thing talking about TQM and CQI at a conceptual level, it is quite another to get

past the "buzzwords' and translate the concept into something tangible that can be

operationalised in the real world.

9.3.4 Implementation Frameworks to Deliver the 'What' and the 'How' of the

Change Process

Motwani and colleagues (Motwani, Sower and Bashier, 1996) have drawn attention

to the general lack of adoption of formal frameworks to guide the implementation of

total quality in health care. Thus it appears that the Emerald Trust is not unusual as

there was no evidence that such an approach was adopted by this organisation either.

Each of the three results chapters and the previous sections of this current chapter

have highlighted a number of significant gaps in both the 'what' or end state

objectives and also the 'how' or process objectives in the Trust approach to

implementation. Thus, in keeping with Yusof and Aspinwall's (2000a) argument that

implementation frameworks force managers to take a comprehensive rather than

selective approach to all elements, it is the contention of this writer that the

implementation efforts of the Trust could have been augmented significantly if

frameworks to guide both the 'what' and the 'how' of clinical governance had been

utilised. It is also acknowledged here that a single framework is unlikely to provide

all of the answers (Elmore, 1978; Lane, 1987) which points towards the use of more

than one as the following discussion will demonstrate.

Although it is appreciated that clinical governance has not been explicitly defined as

TQM in the Department of Health documentation (1997; 1998; 1999), the author

found sufficient similarity to use Oakland's work on total quality (1995) as a tentative

guide to possible design elements of a whole system approach to quality improvement

under a clinical governance umbrella. Even recognising that some of the tools and

techniques may vary if the Trust had utilised such a framework, the principles of

TQM and CQI might have been more apparent and the implementation design more

appropriate for the delivery of an integrated whole system approach to continuous

quality improvement. In order to realise this goal there needed to be a greater

emphasis on the improvement of key processes, clear initiatives to engage the users of

the service, structures and a supporting infrastructure from the corporate level through

to the operating core to provide the vehicles for quality and clinical governance to

take place in practice. Instead, there is the situation where, although important

quality improvement initiatives have been introduced, there is a risk that they will

remain as individual initiatives rather than elements within the integrated whole.

A framework for the 'how' of implementation was based largely although not

exclusively on the Miles framework (1997). This framework served as a very useful

guide to the management of change; part of its attraction lay in its conceptual clarity

and also in the successful integration yet differentiation of the dual aspects of

implementation - end state and process. It has been seen from the discussions in

Chapters 2 and 3 that differentiation between the two is not always made explicit in

the literature and from the evidence of this case study, neither does this necessarily

happen in practice.

Just as the 'what' elements need to be made explicit, so do the 'how' elements. A lack

of clarity around the implementation process, particularly in terms of the scope and

scale of change required to establish clinical governance, may lead to insufficient

emphasis being given to significant elements or even to them being overlooked

completely as happened in some areas of the case study. An important theme within

each of the change management frameworks that have informed this inquiry has been

the notion of wholeness and its relationship with the success or otherwise of the

change initiative; omissions are likely to lead to failure (Kotter, 1996; Miles, 1997;

Pendlebury, Grouard and Meston, 1998). Adoption of a framework such as the one

proposed by Miles (1997) could have alerted the Trust to a number of important gaps:

the lack of a communication strategy to support the implementation process, an over-

reliance on the Clinical Governance Lead to provide support, facilitation and co-

ordination of the entire process across the whole organisation, the lack of an

organisation development programme and so on.

The notion of wholeness referred to above does not mean that everything should

happen at once but emphasises the need for deliberate phasing of activity. Phasing in

turn does not imply a piecemeal approach but instead a deliberate and dynamic

orchestration of all components to achieve internal alignment; the scope of this will

vary depending on the resources available (Miles, 1997). A lack of deliberate

intervention in the system such as that just described is one of the factors that leads to

the sort of piecemeal introduction of initiatives observed by Kolesar (1993) and Yong

and Wilkinson (1999) and there is also evidence of this in the approach of this case

study site.

Finally, frameworks should not be regarded as a recipe for success; as Lane (1987)

has warned, they can seem deceptively simple to the unwary. Successful application

requires knowledge and skills in relation to change and change management.

According to the literature and from the findings of this case study, it seems unwise to

assume that either will be found automatically in every level of the management

hierarchy. Frameworks should not be seen as a way of providing the answer to

implementation but instead as a mechanism for organising the questions. Careful

utilisation may ensure that any omissions are the result of a considered phasing

decision rather than a design element that has been overlooked.

9.3.5 Culture Matters

The reader may, at this point, be wondering why, given this was an action research

project, these frameworks were not proposed by the researcher. Essentially, the key

recommendations of the first report drew attention to the need for a CQI model and

an implementation plan but the response from the Trust was clear:

'(The Clinical Governance Lead and the Chief Executive) would argue
with respect to the 'quality models' and pointed out that they would be
difficult to convince about adopting quality models'.

'We don't go big on implementation plans and documentation. That's
not how we do things here and you (the researcher) won't change us'.

The above was interpreted as 'that's not the way we do things here'. Instead of trying

to impose a preferred approach, the researcher worked around this issue and used the

frameworks to guide her own work which then led to recommendations for future

action based on the areas illuminated by the frameworks selected. So, although the

frameworks were not used explicitly by the Trust, they have indirectly contributed to

its implementation approach.

This reluctance to use frameworks and models highlights an interesting aspect of

culture. The need for culture change in relation to clinical governance is expressed so

often it has almost become a mantra; however, this seems to be expressed most often

with reference to clinicians, for example, the need to adopt evidence-based practice.

Yet the notion of culture change also applies to managers who, it seems, need to be as

wary as their clinical colleagues of the neutralising force that the prevailing culture

may exert on new initiatives (Bate, 1994). This may be particularly evident in

successful organisations which continue to apply tried and tested formulae to

innovations such as clinical governance which may actually require very different

ways of managing the business.

The clinical governance agenda brings a more explicit emphasis on openness, probity

and accountability not only to the clinical quality of services but also, in a broader

sense which is partly the governance element, to the quality of business

management. Thus, in this sense, frameworks that offer what Anderson and

Ackerman-Anderson (2001) describe as an organising vehicle for decision-making

rather than a prescription for action could allow an organisation to demonstrate not

only that a systematic approach to implementation is being adopted but also the

rationale for the design of both the 'what' and the 'how' of the initiative. With the

advent of corporate and clinical governance, managers may also need to look at their

practice and make changes accordingly in pursuit of increased effectiveness -

irrespective of whether this represents a departure from previous ways of managing.

9.4 LIMITATIONS

This was an ambitious study for a new researcher in a number of ways. It focuses

initially on the corporate and then divisional level and therefore includes breadth as

well as depth. An action research approach posed additional challenges given the

lack of clarity around the policy at the centre of the research process. Also, from past

experience (Latham, 1996), I was well aware that the use of qualitative methods may

precipitate the 'real research' debate. Despite these challenges, it is proposed that this

study makes a significant empirical contribution to the slowly emerging research into

clinical governance and its implementation. However, there are a number of

limitations which will now be considered.

As highlighted in Chapter 5, action research may be regarded as something of a

continuum with a consultancy-type model at one end and a democratic, co-researcher-

type model at the other; this work sits nearer the consultancy end of the continuum.

Consequently, there is a flavour of the research having been 'done to' rather than

'done with' the organisation. This was a conscious design decision in light of an

important constraint - the time available to the researcher; hence, intervention in the

organisation was generally confined to the provision of feedback. Whilst the Trust

considered this to have been beneficial, a more facilitative approach might have made

a greater impact in terms of moving the implementation process forward. A co-

research approach in which the host played a more active part in the research process

might have secured greater ownership of the feedback which appeared to be

something of an issue at the outset.

The focus of the study initially rested on the corporate level and, whilst still keeping

the corporate level in view, attention was extended to one of the divisions. The

intention was to gain an overview of clinical governance as it was being

operationalised further down the hierarchy and nearer to the front line of service

delivery. Whilst this was achieved, it threw a spotlight on non-implementation which

was of value to the research process but, as the other divisions were not also studied

as closely, there is a risk that this is perceived as indicative of the state of clinical

governance implementation at divisional level across the Trust as a whole. The initial

rapid appraisal suggests that this was not the case although it was true that the other

five divisions were at different stages of implementation. Unfortunately the research

design did not permit any subsequent work of any detail with divisions other than

Primary Care.

Joss and Kogan (1995) recommend that studies of this kind should be conducted over

as long a period as possible given the nature of a total quality initiative itself and the

scale of change required for implementation to take hold. Although this study took

place over a period of 18 months, the focus only specifically incorporated the

Division for the last six months. This was rather unfortunate as, during this period,

the local Forum was really still in the forming stage and rather lacking in direction

until the arrival of the facilitator whose early interventions appeared to promise

progress. Unfortunately, it had not been anticipated that there would be such a delay

in the setting up of this Forum so the research process ran out of time before it could

really capture movement at the local level.

Finally, a single case design may be regarded by some as a limitation in itself;

however, that surely must depend on how it is used subsequently. It has not been the

intention here to provide a universal prescription for others to follow given that the

implementation of clinical governance is likely to be of such a highly context-specific

nature. Instead a rich description is offered; this highlights the issues facing a real

organisation as it attempts to implement a concept whose inherent complexity is

increased by what might be perceived as a lack of conceptual development on the part

of the Department of Health. The rich picture in terms of the 'what' and the 'how' will

allow practitioners to uncover similarities with their own efforts and the

recommendations from the reports to the Trust indicate how gaps may be addressed

(Appendix 6, Appendix 7). This study also provides a comprehensive account of

method and results which may help others intending to undertake similar research

within this arena.

9.5 FURTHER RESEARCH

The earlier review of the clinical governance literature presented in Chapter 2

highlights the fact that empirical work in this field is emerging slowly. Thus there

seems to be immense scope for further research into what is essentially 'new territory'.

In considering a possible focus for further work, it is worth noting the comments of

those who have been involved in researching TQM. Joss and Kogan (1995) and

Yong and Wilkinson (1999) make similar points in that, because of the different ways

in which TQM is interpreted, it is difficult to know what is actually being

implemented even though organisations may say (or think) they are implementing

TQM. In light of this, Black and Porter (1996) suggest that instead of trying to

capture convenient taxonomies, what is required is more case studies which focus on

the practical experiences of organisations.

The observations above could also be applied to further research into clinical

governance. Clinical governance is a complex construct; there is no universal

blueprint either in terms of content or implementation and the Department of Health

approach has encouraged local interpretation. Dewar (1999) has described clinical

governance as being 'under construction', but it is perhaps more accurate to think of

this as being 'under local construction'. For this reason, there is a real need to

understand how clinical governance is being conceptualised by NHS Trusts (the

'what') and also to identify the processes by which the policy is being implemented

(the 'how'). This would seem to suggest that additional descriptive case studies such

as that of the Emerald Trust would be desirable. Given the current absence of an

evidence base to support the notion that clinical governance will actually deliver

quality improvement (Goodman, 2002; Thomas, 2002), in depth case studies might

also be seen as an essential precursor to future attempts to establish the effectiveness

of clinical governance as policy.

The apparent emphasis on the whole system which the above discussion implies does

not preclude research into the various component parts which make up this complex

agenda. There is likely to be value in following research themes across organisations

such as the strategies, structures, systems and processes introduced; the roles of key

actors and in particular clinical governance leads, Trust boards and so on. However,

it should be remembered that at the core of clinical governance is the language and

rhetoric of a whole system approach to continuous quality improvement which is a

goal that requires the strategic alignment of all components and not merely the

summation of its parts. To capture this, one surely needs the kind of rich picture

provided by the case study.

9.6 CHAPTER SUMMARY

The focus of this chapter has centred on providing a review of the results arising from

the research process. Once again, faced with a rich picture from the field, a framework

has been utilised to shape the subsequent discussion. This time it is a framework

derived from the evaluation of an experiment to implement TQM in the NHS over a

decade ago. Using this as a vehicle it has been possible to highlight areas of progress in

relation to clinical governance at the Emerald Trust and also gaps in both the content

and the process of implementation. Arising from this discussion, a number of key

messages are offered for the consideration of practitioners and researchers alike. The

limitations to the study are presented; and finally, suggestions for further research

proposed. Concluding comments are reserved for the final chapter which now follows.

CHAPTER 10

CONCLUSION

'Those who cannot remember the past are condemned to repeat it'
(Santayana, 1905)

This thesis has been based on an action research project which has followed the

experience of one NHS Trust as it attempted to implement clinical governance.

Implementation may be conceptualised as both a change process and an end state; to

capture this duality, two broad research questions have been posed namely: what

constitutes the local clinical governance agenda (content) and how has clinical

governance been implemented (process). Given that the main purpose of these research

questions is to explore and describe, an overarching qualitative framework has been

adopted to guide this study. Data collected using a range of qualitative methods has

been presented in three earlier chapters: 6, 7 and 8. In these chapters, the reader may

find a rich picture of Trust progress in relation to the clinical governance agenda which

has been clearly differentiated in terms of both the what/content and the how/process of

implementation.

There are some who would say that the implementation of TQM is the most complex

activity an organisation can undertake (Kanji and Barker, 1990) and those tasked with

the implementation of clinical governance may say the same about this recent initiative.

The research evidence suggests that the Trust has succeeded in moving the clinical

governance agenda forward in terms of both content and process on a number of fronts;

at times these efforts have been constrained by the existing corporate culture; at

other times, initiatives have been introduced which have signalled a definite change in 'the

way we do things here'. Significant gaps in the Trust approach have also been

identified during the course of the research process which have related to

implementation once again in terms of both content and process issues. However, the

organisation has both addressed a number of these in real time during the life of the

project and incorporated many of the research recommendations into the

clinical governance plan for the new Primary Care Trust.

It has been argued earlier in this thesis that the effectiveness of policy implementation

is influenced by factors originating both upstream during formulation and downstream

during implementation; apparently deficiencies in the former are least likely to be

acknowledged (Hogwood and Gunn, 1984). Gaps in the implementation process have

been referred to above and elsewhere within this thesis. Whilst it is seemingly not

uncommon for the detail of a new policy to emerge during the implementation process,

this approach seems to have contributed to a lack of clarity in the field; the early

clinical governance literature attests to this with contributors trying to make sense of

the concept. It would appear that leaving the field to interpret the concept is very

different from leaving space for local interpretation of the design elements. The risk is,

as in the case of the Emerald Trust, that it will not be interpreted as CQI and alternative

conceptualisations will bring different goals and organisational designs. For instance,

the Trust focus on learning is different from a CQI focus; the former has brought,

amongst other things, an emphasis on negative quality and seen the creation of more

library facilities; the latter is intended to deliver an approach which results in the whole

organisation being aligned in support of natural work teams addressing both the
positive and negative aspects of quality.

Implementation is the most difficult aspect of change. Klein (1998) warns against

allowing the challenge of new policy to divert attention from the fact that there are

lessons to be learned from the decades' worth of difficulties governments have

experienced with the quality agenda. To reinforce this view, it is worth noting that

most, if not all, of the gaps identified in the Emerald Trust's approach to

implementation have been seen before as the chapters reviewing the literature on total

quality management and change demonstrate quite clearly. On a more positive note,

the Emerald Trust has learned from its experience and sought to address the earlier gaps

in the implementation process. There is also evidence that the recommendations from

previous research (Joss, Kogan and Henkel, 1994) seem to have been heeded and are

reflected in the upstream formulation of clinical governance as policy.


*

Whilst clinical governance might be a new concept, the underlying principle of CQI

most definitely is not. In fact, one could be forgiven for thinking that clinical

governance was TQM by another name; an 'old wine in a new bottle' (Asubonteng,

McCleary and Munchus, 1996) or perhaps, more appropriately given the context, an old

medicine with a new prescription. It has been argued that business approaches to

quality are hard for the NHS to swallow (Pollitt, 1996) and so it will be fascinating to

watch whether the new prescription (clinical governance) proves easier to tolerate. The

Chief Medical Officer (Donaldson, 1998) is in no doubt that this remedy is needed as

the following reference to the 'Bristol Case' demonstrates:

"The spectre of bereaved parents picketing the General Medical Council


•with cardboard children's coffins was a harrowing sight. Through
clinical governance the NHS has the opportunity to make a major leap
forward in the prevention of such catastrophes so that history is never
seen to repeat itself and public confidence is sustained'.

Clinical governance is described as the 'linchpin' of the quality strategy for the NHS

(Department of Health, 1999; p4); whether it will successfully deliver the proposed

agenda, it is perhaps too early to say. What seems clear in the meantime is that unless

NHS Trusts recognise the magnitude of the change involved in implementing clinical

governance, unless these organisations understand exactly what it is they are meant to

be taking forward and unless the knowledge and skills around total quality and change

management are actually present in these organisations, the odds may be more in

favour of Goodman's assessment of clinical governance as 'empty phrases' (1998)

rather than that of Scally and Donaldson (1998) who paint a more positive picture 'of

the big idea that has shown that it can inspire and enthuse'.

oooooOOOOOooooo

APPENDIX 1: IMPLEMENTATION TYPOLOGIES

Appendix 1a: Implementation typologies

Five steps to implementation (Glover, 1993)

1. Awareness: Leadership; Review of options; Seminars, books; Planning session; Selection of change agent; Readiness survey; Involvement of union

2. Education: Leaders; Middle managers; Supervisors; Employees; Facilitators; QT leaders

3. Structural change: TQM design; Steering committee; Quality teams; Revised information system; Facilitation process

4. Necessary activities: Empowerment; Participation; Consensus; Harmony; Measurement; Fact-based problem solving; Continuous improvement

5. Expected improvements: Employee ownership; Productivity and quality; Consumer satisfaction; Market share and demand; Improved goals and financial performance

Seven steps with project management (Stamatis, 1994)

1. Energise the organisation
2. Change the culture of the organisation
3. Define the scope of your commitment
4. Identify key process and product variables
5. Implement SPC
6. Incorporate process improvement activities in the organisation
7. Assess the quality improvement in the organisation

Implementation framework for SME - 10 steps (Ghobadian, Gallear, 1997)

1. Recognition of the need for the introduction of TQM
2. Developing understanding among management and supervisors
3. Establishing goals and objectives of the quality improvement process
4. Plan the TQM implementation
5. Educate and train all employees
6. Create a systematic procedure
7. Align organisation
8. Implement the TQM concepts
9. Monitor the implementation of TQM concepts
10. Engage in continuous improvement by going back to step 3

Stages of implementation (Kanji and Barker, 1990)

1. Identification and preparation
2. Management understanding and commitment
3. Scheme for improvement
4. Critical analysis
Appendix 1b: Implementation typologies

Process for designing and implementing a CQI strategy (Rand, 1994)

Three key phases:

1. Exploration and design
• Review mission, values and culture
• Review theories, models and techniques
• Select a framework
• Gap analysis
• Develop model: principles, process, support strategies
• Design organizational accountabilities

2. Implementation
• Establish goals and plans
• Communicate
• Educate
• Implement model
• Measure
• Reward and recognise

3. Sustain
• Measure and celebrate
• Revise model
• Revise organizational accountabilities
• Revise reward system

TQM implementation (Oakland, 1995)

Initial steps following one day seminar for top management (links in with commitment, communication, culture - see text):
• Formation of quality council (top management team)
• TQM attitude survey
  - profile of organization
  - quality costs
  - strengths/weaknesses
• 2 day strategic planning workshop (quality council)
  - charter
  - mission statement
  - quality policy
  - CSFs
  - critical processes
  - implementation action plan
• Formation of process quality teams and/or site steering committees
• Teamwork seminar for quality council (may precede strategic planning workshop)
• Identify team facilitators
• Run specific training and team-forming workshops
• Company-wide awareness training on customer/supplier interfaces
• Implementation/improvement projects for quality deployment
  - quality costing
  - customer/supplier framework
  - DPA
  - systems
  - techniques
• Feedback/follow-up workshops throughout implementation
Appendix 1c: Implementation typologies

Dale and Boaden (1994)

Organizing
• Long term strategy for quality improvement formulated and integrated with other strategies; quality improvement plans developed
• Definition of quality, TQM, and QI developed and agreed
• Identification of sources of advice
• Choice of approach to TQM
• Stages of improvement activity identified, taking the starting point into account
• Vision and mission statements developed and communicated to all members of the organization
• Formal programme of education and training for all members of the organization
• Organizational infrastructure established to facilitate local ownership of QI
• Teamwork established as a way of working and part of the infrastructure

Tools and techniques
• Identification of applicable tools and techniques at each stage of QI
• Training in the use of tools and techniques, for the right people at the right time
• Use of a formal quality system
• Identification of other systems and standards that may be required by customers or legislation
• Identification of key business processes and improvement based on these processes

Measurement and feedback
• Key internal and external performance measures identified, defined and developed
• On-going discussion with customers about expected performance
• Benchmarking once QI is under way
• Means for celebration and communication of success developed
• Consideration of the link between results from QI and rewards
• Means of assessing the progress towards world-class performance, eg EQA, MBNQA

Culture change
• Assess the current status of organizational culture before developing plans for change
• Recognise the ongoing nature of culture change and the need to outline specific culture changes
• Plan change consistently and incrementally
• Recognize the role of people as an asset and teamwork
• Consider the inter-relationships of all activities within the organization in order to minimize conflict
• Consider the national and local culture
APPENDIX 2: FRAMEWORK FOR TRANSFORMATIONAL CHANGE
(Adapted from Miles [1997, p6])

[Diagram: 'Transformational Leadership' sits at the centre of the framework, linked to the following elements]

Generating energy
• Confronting reality
• Creating and reallocating resources
• Raising the bar
• Modelling desired behaviors

Vision
• Visioning
• Business modelling
• Analyze total system
• Focus on Transformation Initiatives (TI's)

Aligning the organization
• Restructuring
• Implement infrastructure
• Reshape culture
• Build core competencies

The TIs (The 'what')
• Vision & Strategy
• Structure
• Systems & processes
• People
• Culture

Process architecture
• Education
• Involvement
• Co-ordination
• Feedback
• Communication
• Support
APPENDIX 3: EIGHT STAGE PROCESS FOR CHANGE
(Adapted from Kotter, 1996)

1. Establish a sense of urgency

2. Create the guiding coalition

3. Develop a vision and a strategy

4. Communicate the change vision

5. Empower employees for action

6. Generate short-term wins

7. Consolidate gains and produce more change

8. Anchor new approaches in the culture

APPENDIX 4: TEN KEYS TO EFFECTIVE CHANGE
MANAGEMENT (Pendlebury, Grouard, Meston, 1995)

1. Defining the vision

2. Mobilising

3. Catalysing

4. Steering

5. Delivering

6. Obtaining participation

7. Handling the emotional dimension

8. Handling the power issues

9. Training and coaching

10. Communicating actively

APPENDIX 5: ACTION RESEARCH - A MODEL FOR
PRACTICE (Adapted from Bate, 2000)

APPENDIX 6: RECOMMENDATIONS - ACTION SET 1

FIRST REPORT & FEEDBACK TO THE TRUST


December 2000

The trust is asked to consider the following action points:

1. Clarify and make explicit the model(s) of quality (QA/CQI) underpinning the
trust approach to clinical governance and ensure that the trust clinical
governance framework is congruent.

2. Distil the trust's vision for clinical governance into a short, concise paragraph
that is meaningful, likely to engage the organisation at all levels (clinical and
non-clinical staff) and convey clearly the trust aspirations. This should then be
communicated in a comprehensive and sustained manner internally and
externally and modelled at every opportunity.

3. The current clinical governance development plan offers a valuable outline of


the Trust's short term objectives for clinical governance. As it is now almost 12
months since the plan was published, a review is recommended in light of the
progress already made by the Trust and the comments contained in this paper.
The Trust's house style is recognised and it is acknowledged that the Trust will
need to strike an appropriate balance between central direction/prescription and
letting the trust find its own way/local freedom. However, for the purpose of
Trust-wide implementation, given the size and nature of the agenda and the
varied and developing experience within MT, we believe that it is important that
the plan should serve as a map which addresses the following:

clear aims and objectives with short and long term milestones;
structures in place throughout the organisation accompanied by clear terms
of reference;
supporting infrastructure;
- clear roles, authority and responsibilities, accountability, co-ordination
arrangements;
individual, team (corporate, MT, clinical, improvement teams) and
organisational training and development needs specific to the quality
management/leadership and innovation agendas;
- sources of support and facilitation (internal and external);
- communication strategy;
resource implications of plan.

4. Re-launch clinical governance highlighting recent quality improvement


initiatives (large and small scale) that will illustrate clinical governance in action
eg: ?lung cancer pathway, hospital at home, SCI, other.

APPENDIX 7: RECOMMENDATIONS - ACTION SET 2
SECOND REPORT & FEEDBACK TO THE TRUST
December 2001

1) It is recommended that the Clinical Governance Sub-committee gives


consideration to the development of an implementation plan that will not
only make explicit but also integrate all aspects of the implementation
process.

2) It is recommended that the current Terms of Reference of the Clinical


Governance Sub-committee are reviewed and the focus of its monitoring
remit (strategic and/or operational) made explicit.

3) It is recommended that the present clinical governance framework is


reviewed in terms of content and coverage to ensure that it reflects the
monitoring remit of the Clinical Governance Sub-committee.

4) It is recommended that the collective development needs of the Clinical


Governance Sub-committee are identified and addressed accordingly.

5) It is recommended that the role and membership of the Best Practice


Development Team are reviewed.

6) It is recommended that the collective development needs of the Best


Practice Development Team are reviewed and met accordingly.

7) It is recommended that the focus of the Risk Management Team's


monitoring remit is made explicit and that the current data set is
reviewed in terms of content and coverage.

8) It is recommended that consideration is given to the merits of appointing


to an operational risk management role in support of the Clinical
Governance Lead, the Risk Management Team and also to provide
specialist support to the directorates/divisions.

9) It is recommended that consideration is given to the development of a


dataset to facilitate regular, collective monitoring of existing training and
development initiatives.

10) It is recommended that consideration is given to the 'cultural' aspects of


accessing library resources and to overcoming 'cultural' barriers to
access in particular.

11) It is recommended that the collective and individual development needs


of the Trust Board are identified and met accordingly.

12) It is recommended that a review of both internal and external


communications is undertaken and the development of an integrated
communication strategy considered.

13) It is recommended that an explicit and on-going communication plan is
developed to support the Trust-wide implementation of clinical
governance.

14) It is recommended that current arrangements for training provision in


relation to risk management and its components are reviewed - in the
short term to address the issue of attendance and in the longer term to
inform the development of the Trust training and development strategy.

15) It is recommended that a formal response to Significant Clinical Incident


Review reports from each of the Trust divisions is incorporated as a
feature of the post review audit process.

16) It is recommended that the Significant Clinical Incident Review process


is integrated within the wider risk management process of the Trust.

17) It is recommended that the current mechanisms for reporting clinical


effectiveness and audit activity are reviewed and this information is
incorporated into the regular clinical governance reporting framework.

18) It is recommended that an explicit Trust clinical audit programme is


developed which makes explicit both corporate and divisional priorities
and serves as a framework against which progress may be monitored and
opportunities for synergy identified.

19) It is recommended that consideration is given to the inclusion of a


representative of the IM&T function to the membership of the Clinical
Governance Sub-committee.

20) It is recommended that information on progress against clinical


governance-related aspects of the IM&T action plan is incorporated into
the regular clinical governance reporting framework.

21) It is recommended that consideration is given to the incorporation of the


management and reporting arrangements for user involvement into the
mainstream management arrangements of the Trust.

22) It is recommended that information on the progress of user involvement


initiatives is incorporated into the regular clinical governance reporting
arrangements.

23) It is recommended that the current arrangements through which the


Human Resource issues inform the clinical governance agenda are
reviewed and action taken to achieve greater integration.

APPENDIX 8: A SELECTION OF INTERVIEW
SCHEDULES / GUIDES

APPENDIX 8a: INTERVIEW CHIEF EXECUTIVE - OCT 2000

1. What do you see as the main objective of clinical governance

2. What changes do you expect to see in the way that the trust works when clinical
governance is embedded

3. If a junior member of staff wanted to know what the trust's vision for clinical
governance is; where would they look it up

4. Accountable officer - what does this mean in practice

5. How did the development plan come into being (involvement)

6. How does the development plan relate to the business plan

7. What is the mechanism for implementation of the development plan (co-ordination, leadership)

8. How is the implementation process being monitored

9. What systems are in place to deliver clinical governance

10. How is the clinical governance agenda communicated to the trust

11. Have the directorate managers had any specific training in relation to clinical
governance

12. Culture change - specific initiatives aimed at this

13. Quality circles

14. What does the trust do particularly well/What could be improved

15. What helps deliver the service/What gets in the way

16. What needs to happen next

APPENDIX 8b: INTERVIEW TRUST CHAIR

1. What do you see are the main objectives of clinical governance

2. Key elements of clinical governance

3. Newness

4. Your role as Chair

5. Role of trust board

6. Clinical governance - statutory duty for quality - has that made a difference to
the way the board operates, business of the board, thinks about its role and
responsibility (SCI reviews - quality issues explicit)

7. Preparation and training for members; assessment of awareness/understanding


of the implications for them - particularly non-execs

8. Key priorities for the trust in relation to clinical governance

9. How were these determined; role of the trust board in relation to this

10. Systems in place to deliver the clinical governance agenda; systems needed

11. What information tells you that these systems are working effectively

12. Role of clinical governance sub committee

13. Challenges facing the trust in the implementation of clinical governance

APPENDIX 8c: INTERVIEW CLINICAL GOVERNANCE LEAD - OCT
2000

1. Tell me a bit more about your role in relation to clinical governance

2. Has this changed over time, how

3. Cultural emphasis in the trust; what interventions have been directly aimed at
changing culture

4. Development plan; implementation and monitoring processes

5. Strategic role of clinical governance committee; how is the operational function


discharged

6. Rationale for elements in the clinical governance reporting framework

7. Progress of the Serious Clinical Incident Investigations

8. Overall progress in implementing clinical governance agenda

9. Next steps

APPENDIX 8d: INTERVIEW DIRECTOR OF FINANCE - OCT 2000

1. Please tell me about your responsibilities within the trust

2. What do you see as the main objective of clinical governance

3. How is clinical governance different to other quality initiatives

4. What's happened so far around clinical governance in the trust

5. What has been your role in that

6. Where is the trust up to with its development plan

7. Links with business plan

8. Financial implications of clinical governance

9. Implications of clinical governance for managers

10. Rhetoric of policy - raise profile of quality over activity and finance

11. Is this feasible - how could it be achieved

12. Links with controls assurance

13. What needs to happen next

APPENDIX 8e: INTERVIEW NON-EXEC DIRECTORS - OCT 2000

1. Can you tell me what you see are the main objectives of clinical governance

2. What differences do you expect to see in the way the trust works when clinical
governance gets to be part of the way things happen

3. Can you think back over the last 18 months and describe what the trust has done
to take the agenda forward

4. If you were asked by a D grade nurse to explain the Trust's vision of clinical
governance what would you say; where would s/he find this written down

5. How do you see your role in relation to clinical governance

6. How has this come about

7. What structures are in place

8. What systems are in place

9. What are the key indicators that tell you that these systems are working

10. Key examples of improvement (not capital developments); what was the catalyst
(part of CQI/problem initiated)

11. Development plan; how did this come about

12. What is the mechanism for implementing this

13. If you asked a front line clinician to describe what the trust was doing about
clinical governance and what their responsibility as a clinician is; what sort of
response do you think you would get

14. What do you think the organisation does particularly well

15. What could it do better

16. What helps deliver the service

17. What gets in the way

APPENDIX 8f: INTERVIEW DIVISIONAL MANAGERS - SEP 2000

Could you start by telling me about the work of the directorate


• What sort of services do you provide; To whom
• By whom - professional groups, wte, management hierarchy, other agencies
• How is the service delivered / Where - location / When
• How long have you been here - what changes have you seen
• What helps you deliver the service you want to give your clients
• Is there anything that gets in the way of delivering the sort of service you want
to deliver

Directorate
• Strategy and vision - business plan, resources, outputs, environment, technology
• Structure - hierarchy. Main decision/co-ordinating group, other decision groups
• Infrastructure - communications, HR (appraisal, CPD), reporting
• Infrastructure - quality: quality group, quality strategy, quality monitoring, key
quality indicators, reporting
• Infrastructure - clinical audit, risk management, incident reporting, significant
clinical incidents
• People (see service)
• Competencies - and what could be done better
• Culture

Quality improvement
• What initiatives over last 3 years / When did these happen
• What/who was the catalyst for the change. Who was involved - design,
implementation,
• How was the impact evaluated / What was the outcome
• Is there a programme for improvement

Clinical governance
• What do you see as the main objectives of clinical governance - QI, QA, control
• What has the directorate done so far to implement clinical governance
• What is the aim - how developed
• Action plan - key interventions, how developed, how, who, when
will these be implemented - links to business plan and resources
• Action plan interventions - inclusion of education and
involvement, co-ordination mechanisms
(directorate/corporate/clinical governance subcommittee),
feedback and communication, support, leadership
• Baseline assessment of ? what, ?existing quality systems such as
clinical audit, CRM, CPD, knowledge management complaints

What has been the response of clinicians - assessed


Changes in the way the directorate works
Significant clinical incident investigations

APPENDIX 8g: FOCUS GROUP NON-EXECUTIVE DIRECTORS - SEP
2001

General role
• How do you see your role as non-execs

• Strategy/policy - formulation/monitoring
- what stage involved in strategy/policy development process
- away days to focus on strategy

• Each have a lead focus/ Written objectives

• What sort of preparation and training have you had specifically for your role as
non-exec - anything on-going

Clinical governance

• Preparation for role in relation to clinical governance agenda - education,


training; briefings

• What impact has clinical governance had on your role as non-exec


- corporate accountability for quality - what has this meant in practice
- how does the statutory duty for quality impact on your role
- what tells you that you are discharging that duty

• What specific information do you get to enable you to discharge your


responsibilities in relation to clinical governance - how were info needs
identified

• Do you feel able to challenge the executive team around this agenda

• Where do you think the trust is currently with clinical governance - structures,
processes. What tells you that clinical governance is going forward

• What information tells you that systems are in place and working
risk register/risks managed
- clinical audit - closing the loop
- programme of QI activity
- appraisal / CPD

• What's new about clinical governance

• Has there been any team building as a board development - new executive team

APPENDIX 8h: INTERVIEW PRIMARY CARE DIVISION MANAGERS

Role of manager
• How long in this post
• Key areas of responsibility
• To whom are you accountable; how is this discharged (? Div manager)

Department
• Strategy - business plan, objectives for this year, quality strategy - objectives
• Structure - decision making group, hierarchy, quality group
• Infrastructure:
Decision making
Problem solving
Communications, regular meetings, mechanisms for feedback up and down
Appraisal, training, PDPs
Clinical audit, risk management, incident reporting
Complaints
R&D
Access to evidence (library, hardware, skills in EBP)
NICE guidance, NSF; how is this introduced if relevant
User involvement
• People - see service
• Culture
Recent improvements (eg: ICPs,)

Quality improvement
• Key quality indicators
• What initiatives over last 3 years
• When did these happen
• What/who was the catalyst for the change
• Who was involved - design, implementation, evaluation, (multi-disciplinary)
• How was the impact evaluated
• What was the outcome
• Sustained after initiative
• Is there a programme for improvement

Clinical governance
• Vision:
> What do you see as the main objectives of clinical governance (QI, QA,
control);
> What do you see as the key elements of clinical governance
> Do you see clinical governance as something new
> What sort of differences in the service do you expect to see when clinical
governance becomes a way of life
> What is your role as a manager in relation to clinical governance
> Specific lead (eg: clinical governance committee)

• What has the department done so far to implement clinical governance
Strategy
Is there a written plan
What are the key objectives 01 /02 and beyond
- How were these identified
Who was involved in development, how
- How does this link with the business plan,
- Are there timescales for action,
- How will these be monitored
- What are the critical success factors
- Have resources been allocated
- Reporting mechanisms; feedback to staff, feedback up the hierarchy
- Was there a baseline assessment of existing quality systems?

Structure:
Clinical governance lead; co-ordinating group; clinical governance
action plan (key objectives)

Infrastructure:
- Leadership arrangements
- Accountability
- Reporting mechanisms
- Co-ordination mechanisms (clinical audit, complaints, risk management)
Awareness raising
- Training for clinical governance, CQI
- Education - CPD
- Involvement; identification of issues, development, implementation,
monitoring,
- Communication and feedback
- Appraisal
- Improvement
- Support
- Multidisciplinary approaches

People:
What has been the response of clinicians to clinical governance - has this been
assessed

Competencies -
Key competencies for clinical governance - existing/to be developed

Culture -
Involvement, multidisciplinary working, CQI, customer focus

What helps you deliver the service / What gets in the way

APPENDIX 8i: FOCUS GROUPS DISTRICT NURSES/HEALTH VISITORS

1. Please write down what clinical governance means to you

2. What's been happening in the trust to take clinical governance forward

3. Early road shows; what since

4. Anything more locally

5. If not attended road shows, how have you heard about clinical governance

6. What changes have you noticed in the way you work as a result of clinical
governance

7. What elements get most attention/least attention

8. How does clinical audit happen; what feedback do you get from incident
reports; how are complaints handled - are these discussed in the staff meetings

9. What do you think needs to be done differently/happen to make


clinical governance a reality

APPENDIX 8j: INTERVIEW ACTING DIVISIONAL MANAGER - MAY 01

Division/Locality
• Strategy - business plan, objectives for this year, quality strategy - objectives
• Structure - decision making group, hierarchy, quality group
• Structure - other groups apart from PCG nurses and PAMS
• Infrastructure:
Decision making; problem solving
Communications, regular meetings, mechanisms for feedback up and down
Appraisal, training, PDPs
Clinical audit, risk management, incident reporting, complaints (and
integration system)
R&D
Access to evidence (library, hardware, skills in EBP)
NICE guidance, NSF; how is this introduced if relevant
User involvement
• People - see service
• Culture: Recent improvements (eg: ICPs,) and CQI, multidisciplinary
working, customer focus

Quality improvement
• Key quality indicators
• What initiatives over last 3 years
• When did these happen
• What/who was the catalyst for the change
• Who was involved - design, implementation, evaluation, (multi-disciplinary)
• How was the impact evaluated
• What was the outcome
• Sustained after initiative
• Is there a programme for improvement

Clinical governance
• Vision:
> What do you see as the main objectives of clinical governance (QI, QA,
control);
> What do you see as the key elements of clinical governance
> Do you see clinical governance as something new
> What sort of differences in the service do you expect to see when clinical
governance becomes a way of life
> What was your role as community manager in relation to clinical governance
> Specific lead

• What has the locality done so far to implement clinical governance


Strategy
Is there a written plan
- What are the key objectives 01/02 and beyond
- How were these identified
- Who was involved in development, how
- How does this link with the business plan,

- Are there timescales for action,
How will these be monitored
What are the critical success factors
- Have resources been allocated
- Reporting mechanisms; feedback to staff, feedback up the hierarchy
- Was there a baseline assessment of existing quality systems?

Structure:
- Locality group
- Locality lead

Infrastructure:
- Leadership arrangements
- Accountability
- Reporting mechanisms
- Co-ordination mechanisms (clinical audit, complaints, risk management)
Awareness raising
Training for clinical governance, CQI
- Education - CPD
- Involvement; identification of issues, development, implementation,
monitoring,
- Communication and feedback
Appraisal
- Improvement
Support
- Multidisciplinary approaches

People:
What has been the response of clinicians to clinical governance - has this been
assessed

Competencies -
Key competencies for clinical governance - existing/to be developed

Culture -
Involvement, multidisciplinary working, CQI, customer focus
What helps you deliver the service
What gets in the way
What has got in the way of taking clinical governance forward

Divisional clinical governance forum


- Clinical governance lead for division
- Co-ordinating group; TOR
- Key objectives
- Translated into action plan
- Role of group chair - preparation; training
- Role of members - preparation; training
- How will the agenda be formulated
- Reporting mechanisms

- Link/integrate with other structures internally and externally (trust
corporate and divisional; PCG)

Serious clinical incidents

• How do you get to hear that incidents have occurred and are being investigated
• How do you get the information around action sets arising from the review
• How is this actionned locally
• Have you had any SCI on your patch
• Tell me about the incident; how are the recommendations being implemented

THESIS REFERENCES

Ahire, S. Golhar, D. Waller, M. (1996). Development and Validation of TQM


Implementation Constructs. Decision Sciences. 27(1) 23-56

Alexander, J. Weiner, B. Bogue, R (2001). Changes in the Structure, Composition,


and Activity of Hospital Governing Boards, 1989-1997: Evidence from Two National
Surveys. The Milbank Quarterly. 79 (2) 253-279

Anderson, D. Ackerman-Anderson, L. (2001). Beyond Change Management. San


Francisco: Jossey-Bass

Anderson, E. Adams, D. (1997). Evaluating the Success of TQM Implementation:


Lessons from Employees. Production and Inventory Management Journal. Fourth
Quarter. 1-6

Anderson, J. Rungtusanatham, M. Schroeder, R (1994). A Theory of Quality


Management Underlying The Deming Management Method. Academy of Management
Review. 19 (3) 472-509

Arndt, M. Bigelow, B. (1995). The implementation of total quality management in


hospitals: How good is the fit? Health Care Management Review. 20(4) 7-14

Arrington, B. Gautam, K. McCabe, W. (1995). Continually Improving Governance.


Hospital & Health Services Administration. 40 (1) 95-110

Asubonteng, P. McCleary, K. Munchus, G. (1996). The evolution of quality in the


US health care industry: an old wine in a new bottle. International Journal of Health
Care Quality Assurance. 9 (3) 11-19

Audit Commission. (1995). Taken on Board. London: HMSO

BAMM. (1998). Clinical Governance in the new NHS. Stockport: BAMM

Bate, P. (1994). Strategies for Cultural Change. Oxford: Butterworth-Heinemann

Bate, P. (2000). Synthesizing Research and Practice: Using the Action Research
Approach in Health Care Settings. Social Policy and Administration. 35 (4) 478-493

Bate, P. Kahn, R Pyle, A. (2000). Culturally sensitive structuring: An action


research-based approach to organization development and design. Public
Administration Quarterly. Winter. 445-470

Bate, P. Robert, G. (2002). Studying Health Care "Quality" Qualitatively: The
Dilemmas and Tensions Between Different Forms of Evaluation Research Within the
U.K. National Health Service. Qualitative Health Research. 12(7)966-981

Beckford, J. (1998). Quality: a critical introduction. London: Routledge

Beckhard, R. Harris, R. (1987). Organizational Transitions. Reading Mass:


Addison-Wesley Publishing

Berger,A. (1998). Why doesn't audit work? British Medical Journal. 316 875-76

Berwick, D. (1989). Continuous Improvement as an Ideal in Health Care. The New


England Journal of Medicine. 320, (1) 53-56

Berwick, D. (1996). A primer on leading the improvement of systems. British


Medical Journal. 312,619-622

Berwick, D. Godfrey, A. Roessner, J. (1990). Curing Health Care. San Francisco:


Jossey-Bass

Bickman, L. Rog, D. (1998). Introduction. In: Bickman, L. Rog, D. (eds).


Handbook of Applied Social Research Methods. Thousand Oaks: Sage

Black, S. Porter, L. (1996). Identification of the Critical Factors of TQM. Decision


Sciences. 27(1)1-21

Bloor, K. Maynard, A. (1998). Clinical Governance: Clinician, heal thyself?


London: Institute of Health Services Management.

Blumenthal, D. Kilo,C. (1998). A Report Card on Continuous Quality Improvement.


The Milbank Quarterly. 76, (4) 625-648

Boaden, R. (1996). Is total quality management really unique? Total Quality


Management. 7 (5) 553-570

Boaden, R. (1997). What is total quality management... and does it matter? Total
Quality Management. 8 (4) 153-171

Boerstler, H. Foster, R. O'Connor, O'Brien, J. Shortell, S, Carmen, J. Hughes, E.


(1996). Implementation of Total Quality Management: Conventional Wisdom versus
Reality. Hospital and Health Services Administration. 41 (2) 143-159

Brown, L. (1993). Social change through collective reflection with Asian


nongovernmental development organizations. Human Relations. 46 (2) 249-273
Brown, M. (1993). Why does total quality fail in two out of three tries? Journal of
Quality and Participation. March. 80-89

Cameron, K. Barnett, C. (2000). Organization Quality as a Cultural Variable: an
Empirical Investigation of Quality Culture, Processes, and Outcomes. In: Cole, R.
Scott, W. (2000). (eds). The Quality Movement and Organization Theory. California:
Sage

Carnall, C. (1999). Managing Change in Organizations. London: Prentice Hall

Carver, J. (1990). Boards that make a difference. San Francisco: Jossey-Bass

Checkland, P. Scholes, J. (1990). Soft Systems Methodology in Action. Chichester:


John Wiley & Sons

CHI website - www.chi.nhs.uk

Conduit, D. Morgan, A. Willetts, J. (1999). Clinical Governance in the NHS Trusts:


results of an early baseline assessment in the Trent Region. Journal of Clinical
Governance. 7 121-123

Crosby, P. (1979). Quality Is Free. New York: Penguin Books

Cummings, T. Worley, C. (2001). Organization Development and Change. Australia:


South Western College Publishing

Cunningham, J. (1993). Action Research and Organizational Development. London:


Praeger

Dale, B. (1994). Managing Quality. New York: Prentice Hall

Dale, B. Boaden, R. (1994). A generic framework for managing quality


improvement. In Dale, B. (ed). Managing Quality. New York: Prentice Hall

Dale, B. Boaden, R. Lascelles, D. (1994). Total Quality Management: an overview.


In: Dale, B. (1994) (ed). Managing Quality. Hertfordshire: Prentice Hall

Dale, B. Cooper, C. (1994). Total Quality Management: Some Common Mistakes


Made By Senior Management. QWTS. March. 4-11

Dash, D. (1999). Current Debates in Action Research. Systemic Practice and Action
Research. 12(5)457-492

Davies, HTO. Mannion, R. (1999). Clinical Governance: Striking a Balance


Between Checking and Trusting. York: Centre for Health Economics

Davies, HTO. Nutley, S. Mannion, R. (2000). Organisational culture and quality of


health care. Quality in Health Care. 9 111-119

Davis, T. (1997). Breakdowns in Total Quality Management: An Analysis with
Recommendations. InternationalJournal of Management. 14(1) 13-22

Dawson, S. (1992). Analysing Organisations. London: The MacMillan Press Ltd

Day, P. Klein, R. (2002). Who nose best? Health Service Journal. 112 (5799) 26-
29

Dean, J. Bowen, D. (1994). Management Theory and Total Quality: Improving


Research and Practice through Theory Development. Academy of Management
Review. 19 (3) 392-418

Deming, W.E. (1986). Out of Crisis. Cambridge: Cambridge University Press

Department of Health. (1997). 'The new NHS: Modern, Dependable'. London:


HMSO

Department of Health. (1998). A First Class Service. London: HMSO

Department of Health. (1999). HSC 1999/065. Clinical Governance: Quality in the


new NHS. London

Department of Health. (2000). The NHS Plan. A plan for investment; a plan for
reform. London: The Stationery Office

Dewar, S. (1999). Clinical Governance Under Construction. London: King's Fund

Dobbs, M. (1994). Continuous Improvement as Continuous Implementation:


Implementing TQM in Santa Ana. Public Productivity and Management Review. 18
(1) 89-100

Donaldson, L. (1998). Commentary: Clinical Governance and Service Failure in the


NHS. Public Money and Management. Oct-Dec 10-11

Donaldson, L. (1999). Clinical governance - medical practice in a new era. The


Journal of the MDU. 15(2)7-9

Donaldson, L. Muir Gray, J. (1998). Clinical Governance: a quality duty for health
organisations. Quality in Health Care. 7 (Supplement) 37-44

Drucker, P. (1989). The Practice of Management. Heinemann Professional.

Dunphy, D. (1996). Organizational Change in Corporate Settings. Human Relations.


49(5) 541-551

Dunsire, M. (1978). Implementation in a Bureaucracy. Oxford: Martin Robertson

Easton, G. Jarrell, S. (2000). Patterns in the Deployment of Total Quality
Management: An Analysis of 44 Leading Companies. In: Cole, R. Scott, W (2000).
(eds). The Quality Movement and Organization Theory. California: Sage

Eden, C. Huxham, C. (1996). Action Research for the Study of Organizations. In:
Clegg, S. Hardy, C. Nord, W. (eds). Handbook of Organization Studies. London:
Sage

Elden, M. Chisholm, R. (1993). Features of Emerging Action Research. Human


Relations. 46(2) 275-298

Elliot, J. (1991). Action Research for Educational Change. Buckingham: Open


University Press

Ellis, R Whittington, D. (1993). Quality Assurance in Health Care: A Handbook.


London: Edward Arnold

Elmore, R (1978). Organizational models of social program implementation. Public


Policy. 26 185-228

Faulkner, W. (1951). Requiem for a Nun. Cited in: Ratcliffe, S (ed). The Oxford
Dictionary of Thematic Quotations. Oxford: The Oxford University Press

Feigenbaum, A. (1991). Total Quality Control. New York: McGraw-Hill

Ferlie, E. (1997). Large-scale organizational and managerial change in health care: a


review of the literature. Journal of Health Services Research and Policy. 2 180-189

Firth-Cozens, J. (1999). Clinical governance development needs in health service


staff. British Journal of Clinical Governance. 4 (4) 128-134

Flood, R (1993). Beyond TQM. Chichester: John Wiley

Foster, M. (1972). An Introduction to the Theory and Practice of Action Research in


Work Organizations. Human Relations. 25 (6) 529-556

Gann, M. Restuccia, J. (1994). Total Quality Management in Health Care: A View of


Current and Potential Research. Medical Care Review. 51 (4) 467-500

Garside P. (1998). Organisational context for quality: lessons from the fields of
organisational development and change management. Quality in Health Care. 7
(Supplement) 8-15

Gersick, C. (1991). Revolutionary Change Theories: A multilevel exploration of the


punctuated equilibrium paradigm. Academy of Management Review. 16(1) 10-36

Ghobadian, A. Gallear, D. (1997). TQM and organization size. International Journal
of Operations and Production Management. 17 (2) 121 -163

Gioia, D. Chittipeddi, K. (1991). Sensemaking and sensegiving in strategic change


initiation. Strategic Management Journal. 12433-448

Glover, J. (1993). Achieving the Organizational Change Necessary for Successful


TQM. The International Journal of Quality and Reliability Management. 10 (6) 47-64

Goh, P. Ridgway, K. (1994). The implementation of total quality management in


small and medium sized manufacturing companies. TQM Magazine. 6 (2) 54-60

Goodman, N. (1998). Clinical governance. British Medical Journal. 317.1725-7

Goodman, N. (2002). Clinical governance: vision or mirage? Journal of Evaluation


in Clinical Practice 8 (2) 243-249

Goodstein, L. Burke, W. (1991). Creating Successful Organization Change.


Organizational Dynamics. 19(4)5-17

Grainger, K. Hopkinson, R. Barrett, V. Campbell, C. Chittenden, S. Griffiths, R


Low, D. Parker, J. Roy, A. Thompson, T. Wilson,T. (2002). Implementing clinical
governance - results of a year's programme of semi-structured visits to assess the
development of clinical governance in West Midlands Trusts. British Journal of
Clinical Governance. 7 (3) 177-186

Grant, R Shani, R Krishnan, R (1994). TQM's challenge to management theory


and practice. Sloan Management Review. 35 (2) 25

Greenwood, D. Levin, M. (1998). An Introduction to Action Research: Social


research for social change. Thousand Oaks: Sage

Greenwood, D. Whyte, W. Harkavy, I. (1993). Participatory action research as a


process and as a goal. Human Relations 46 (2) 175-92

Gunn, L. (1978). Why is Implementation So Difficult? Management Services In


Government. 33 169-176

Hackett, M. Spurgeon, P. (1999). Culture, leadership, and power: the key to


changing attitudes and behaviours in trusts. Clinician in Management. 8 27-32

Hackman, J. Wageman, R. (1995). Total Quality Management: Empirical, Conceptual, and Practical Issues. Administrative Science Quarterly. 40 309-342

Hallett, L. Thompson, M. (2001). Clinical governance: a practical guide for


managers. London: e map Public Management.

Halligan, A. (1999). How the National Clinical Governance Support Team plans to
support the development of clinical governance in the workplace. Journal of Clinical
Governance. 7 Dec pp155-157

Hamada, T. (2000). Quality as a Cultural Concept: Messages and Metamessages. In:


Cole, R. Scott, W. (2000). (eds). The Quality Movement and Organization Theory.
California: Sage

Hamel, G. (2001). Revolution vs. Evolution: You Need Both. Harvard Business
Review. 150-125

Harrison, S. (2000). Lewis Gunn and 'perfect implementation'. Clinician in


Management. 949-50

Hart, M. (1996). Improving the quality of NHS out-patient clinics: the applications
and misapplications of TQM. The International Journal of Health Care Quality
Assurance. 9(2)20-27

Hart E. Bond, M. (1995). Action Research for Health and Social Care.
Buckingham: Open University Press

Hawkins, P. (1997). Organizational Culture: Sailing Between Evangelism and


Complexity. Human Relations. 50(4)417-440

Hedrick, T. Bickman, L. Rog, D. (1993). Applied Research Design. Newbury Park :


Sage

Hewer, P. Lugon, M. (2001). Clinical governance - putting it into practice in a trust.


Clinical Governance Bulletin. 2(1) 8-9

Hill,M. (1997). The Policy Process. London: Prentice Hall

Hill, S. Wilkinson, A. (1995). In search of TQM. Employee Relations. 17 (3) 8-25

Hittinger, R. (2001). Implementation of clinical governance in a teaching trust.


Clinical Governance Bulletin. 2(1)9-11

Hogwood, B. Gunn, L. (1984). Policy Analysis for the Real World. Oxford: OUP
Holland, K. Fennell, S. (2000). Clinical governance is "ACE" - using the EFQM
Excellence Model to support baseline assessment. International Journal of Health Care
Quality Assurance. 13 (4) 170-177

Holt, P. (1999). The Rhythm of quality management. British Journal of Health Care
Management. 5 (6) 242-246

Hopkinson, R. (1999). Clinical governance: putting it into practice in an acute trust.
Clinician in Management 8 81-88

Huws, R. (2000). Implementing clinical governance. Clinician in Management. 9 39-


43

Ichniowski, C. Shaw, K. (2000). Quality Improvement Practices and Innovative


HRM Practices: New Evidence on Adoption and Effectiveness. In: Cole, R. Scott, W.
(2000). (eds). The Quality Movement and Organization Theory. California: Sage

Iles, V. Sutherland, K. (2001). Managing Change in the NHS. London: NCCSDO

Internal Trust Document (1999). The Clinical Governance Report

Internal Trust Document (2000a). The Clinical Governance Development Plan

Internal Trust Document (2000b). The Human Resource Strategy

Jackson, S. (2001). Successfully implementing total quality management tools within


healthcare: what are the key actions? International Journal of Health Care Quality
Assurance. 14(4)157-163

Jenkins, W. (1978). Policy Analysis: A Political and Organisational Perspective.


London: Martin Robertson

Joss, R Kogan, M. (1995). Advancing Quality. Buckingham: Open University


Press

Joss, R. Kogan, M. Henkel, M. (1994). Total Quality Management in the National


Health Service. Final Report of an Evaluation. Brunel University: Centre for the
Evaluation of Public Policy and Practice.

Juran, J. (1988). Juran on Planning for Quality. New York: The Free Press

Kanji, G. (1996). Implementation and pitfalls of total quality management. Total


Quality Management. 7(3) 331-343

Kanji, G. Barker, R (1990). Implementation of total quality management. Total


Quality Management 1 (3) 375-389

Katz, A. (1993). Eight TQM pitfalls. Journal for Quality and Participation.
July/August 24-27

Kerrison, S. Packwood, T. Buxton, M. (1994). Monitoring Medical Audit. In:


Robinson, R. Le Grand, J. (eds) (1994). Evaluating the NHS Reforms. Newbury:
Policy Journals

Kim, P. Johnson, D. (1994). Implementing total quality management in the health
care industry. Health Care Supervisor. 12(3) 51-57

Klein, R. (1995). Big Bang Health Care Reform - Does It Work?: The Case of
Britain's 1991 National Health Service Reforms. The Milbank Quarterly. 73
(3) 299-337

Klein, R. (1998). Can policy drive quality. Quality in Health Care. 7 (Supplement) 51-53

Kochan, T. Rubenstein, S. (2000). Human Resource Policies and Quality: From Quality Circles to Organizational Transformation. In: Cole, R. Scott, W. (eds) (2000). The Quality Movement and Organization Theory. California: Sage

Kolesar, P. (1993). Vision, Values, Milestones: Paul O'Neill Starts Total Quality at Alcoa. California Management Review. Spring 133-165

Kolesar, P. (1995). Partial Quality Management. Production and Operations Management. 4 (3) 195-200

Kotter, J. (1990). A Force for Change. New York: The Free Press

Kotter, J. (1996). Leading Change. Boston: Harvard Business School Press

Kovner, A. (1990). Improving Hospital Board Effectiveness: An Update. Frontiers of Health Services Management. 6 (3) 3-27

Krishnan, R. Shani, A. Grant, R. Baer, R. (1993). In search of quality improvement: Problems of design and implementation. 7 (4) 7-20

Kritchevsky, S. Simmons, B. (1991). Continuous Improvement: Concepts and Applications for Physician Care. Journal of the American Medical Association. 266 (13) 1817-1823

Lane, J. (1987). Implementation, accountability and trust. European Journal of Political Research. 15 (5) 527-546

Latham, L. (1996). Cancer Services: What Shropshire People Say. Shropshire Health Authority

Latham, L. Freeman, T. Walshe, K. Spurgeon, P. Wallace, L. (2000). Clinical governance in the West Midlands and South West Regions: early progress in NHS Trusts. Clinician in Management. 9 (2) 83-91

Ledford, G. Mohrman, S. (1993). Self-Design for High Involvement: A Large-Scale Organizational Change. Human Relations. 46 (1) 143-173

Lewin, K. (1951). Field Theory in Social Science. New York: Harper & Row

Lewis, S. Saunders, N. Fenton, K. (2002). The magic matrix of clinical governance. British Journal of Clinical Governance. 7 (3) 150-153

Lillrank, P. Shani, R. Kolodny, H. Stymme, B. Figuera, J. Liu, M. (1998). Learning from the success of continuous improvement programs: an international comparative study. Research in Organizational Change and Development. 11 47-71

Lincoln, Y. Guba, E. (1985). Naturalistic Inquiry. Beverly Hills: Sage

Lipsky, M. (1980). Street-level Bureaucracy: Dilemmas of the individual in public services. New York: Russell Sage Foundation.

Lloyd, A. (2001). Role of the Chief Executive in Clinical Governance: Some Practical Guidelines. In: Lugon, M. Secker-Walker, J. (2002). Advancing Clinical Governance. London: The Royal Society of Medicine Press Limited.

Lovelock, C. (1992). Managing Services. London: Prentice-Hall

Lugon, M. Secker-Walker, J. (1999). Clinical Governance: Making it Happen. London: The Royal Society of Medicine Press

Mann, R. Kehoe, D. (1995). Factors affecting the implementation and success of TQM. The International Journal of Quality and Reliability Management. 12 (1) 11

Marris, P. (1993). The Management of Change. In: Mabey, C. Mayon-White, B. (eds). Managing Change. Second edition. London: Paul Chapman

Marshall, C. Rossman, G. (1998). Designing Qualitative Research. Thousand Oaks: Sage

Maxwell, J. (1998). Designing a Qualitative Study. In: Bickman, L. Rog, D. (eds). Handbook of Applied Social Research Methods. Thousand Oaks: Sage

Maxwell, R. (1984). Quality assessment in health. British Medical Journal. 288 1470-1471

Mays, N. Pope, C. (1995). Qualitative Research: Rigour and qualitative research. British Medical Journal. 311 109-112

Mays, N. Pope, C. (2000). Assessing quality in qualitative research. British Medical Journal. 320 50-52

McLaughlin, C. Kaluzny, A. (1990). Total quality management in health: Making it work. Health Care Management Review. 15 (3) 7-14

McTaggart, R. (1994). Participatory Action Research: issues in theory and practice. Educational Action Research. 2 (3) 313-337

Meyer, A. Goes, J. Brooks, G. (1995). Organizations Reacting to Hyperturbulence. In: Huber, G. Glick, W. (eds) (1995). Organizational Change and Redesign. Oxford: Oxford University Press

Meyer, J. (2001). Action Research. In: Fulop, N. Allen, P. Clarke, A. Black, N. (eds). Studying Organisation and Delivery of Health Services. London: Routledge

Miles, M. Huberman, A. (1994). Qualitative Data Analysis. Thousand Oaks: Sage

Miles, R. (1997). Leading Corporate Transformation. San Francisco: Jossey-Bass

Mintzberg, H. (1979). The Structuring of Organizations. London: Prentice-Hall

Morgan, C. Murgatroyd, S. (1994). Total Quality Management in the Public Sector. Buckingham: Open University Press

Morgan, G. (1986). Images of Organization. California: Sage

Moss, F. (1995). Risk management and the quality of care. In: Vincent, C. (ed) (1995). Clinical Risk Management. London: BMJ

Motwani, J. Sower, V. Brashier, L. (1996). Implementing TQM in the health sector. Health Care Management Review. 21 (1) 73-82

Mullins, L. (1999). Management and Organisational Behaviour. London: Financial Times Pitman Publishing

Nadler, D. Tushman, M. (1995). The Challenge of Discontinuous Change. In: Nadler, D. Shaw, R. Walton, A. (eds). Discontinuous Change. San Francisco: Jossey-Bass

Newall, D. Dale, B. (1991). The introduction and development of a quality improvement process: a study. International Journal of Production Research. 29 (9) 1747-1760

Nicholls, S. Cullen, R. O'Neill, S. Halligan, A. (2000). Clinical governance: its origins and foundations. British Journal of Clinical Governance. 5 (3) 172-178

Nicholls, S. Cullen, R. O'Neill, S. Halligan, A. (2000). Clinical governance: its origins and foundation. Clinical Performance and Quality Health Care. 8 (3) 172-178

Nwabueze, U. Kanji, G. (1997). The implementation of total quality management in the NHS: how to avoid failure. Total Quality Management. 8 (5) 265-280

Oakland, J. Porter, L. (1994). Cases in Total Quality Management. Oxford: Butterworth-Heinemann

Oakland, J. (1995). Total Quality Management. Oxford: Butterworth-Heinemann

Olian, J. Rynes, S. (1991). Making Total Quality Work: Aligning Organizational Processes, Performance Measures and Stakeholders. Human Resource Management. 30 (3) 303-333

Ovretveit, J. Aslaksen, A. (1999). The Quality Journeys of Six Norwegian Hospitals. Oslo: Norwegian Medical Association

Ovretveit, J. (1992). Health Service Quality. Oxford: Blackwell Scientific Publications

Ovretveit, J. (1998). Proving and improving the quality of national health services: past, present and future. In: Spurgeon, P. (ed). The New Face of the NHS. Second edition. London: The Royal Society of Medicine Press

Ovretveit, J. (1999). Integrated Quality Development in Public Healthcare. Norway: Norwegian Medical Association

Packwood, T. Pollitt, C. Roberts, S. (1998). Good Medicine? A case study of business process re-engineering in a hospital. Policy and Politics. 26 (4) 401-415

Parsons, W. (1995). Public Policy. Cheltenham: Edward Elgar.

Pasmore, W. Friedlander, F. (1981). An action research program for increasing employee involvement in problem-solving. Administrative Science Quarterly. 27 343-362

Patton, M. (1999). Enhancing the Quality and Credibility of Qualitative Analysis. Health Services Research. 34 (5) 1189-1208

Patton, M. (2002). Qualitative Research and Evaluation Methods. Thousand Oaks: Sage

Pendlebury, J. Grouard, B. Meston, F. (1998). Successful Change Management. Chichester: Wiley

Peters, T. Waterman, R. (1982). In Search of Excellence. London: Pan Books

Pettigrew, A. Ferlie, E. McKee, L. (1992). Shaping Strategic Change. London: Sage

Pollitt, C. (1990). Doing business in the temple? Managers and quality assurance in the public services. Public Administration. 68 (4) 435-452

Pollitt, C. (1993). The Struggle for Quality: the case of the National Health Service. Policy and Politics. 21 (3) 161-170

Pollitt, C. (1996). Business approaches to quality improvement: why are they hard for the NHS to swallow. Quality in Health Care. 5 104-110

Porter, L. Parker, A. (1993). Total quality management - the critical success factors. Total Quality Management. 4 (1) 13-22

Povey, B. (1996). Continuous Business Improvement. London: McGraw-Hill

Powell, T. (1995). Total Quality Management as Competitive Advantage: A Review and Empirical Study. Strategic Management Journal. 16 15-37

Rand, J. (1994). Learning comes before ownership. Journal for Quality and Participation. July/August 64-68

Reger, R. Gustafson, L. Demarie, S. Mullane, J. (1994). Reframing The Organization: Why Implementing Total Quality Is Easier Said Than Done. Academy of Management Review. 19 (3) 565-584

Robinson, J. Akers, J. Artzt, E. Poling, H. Galvin, R. Allaire, P. (1991). An open letter: TQM on the campus. Harvard Business Review. 69 (6) 94-95

Robson, C. (1993). Real World Research. Oxford: Blackwell

Robson, C. (2000). Small-Scale Evaluation. London: Sage

Romano, C. (1994). Report Card on TQM. Management Review. Jan. 22-25

Sabatier, P. (1986). Top-down and bottom-up approaches to implementation research: a critical analysis and suggested synthesis. Journal of Public Policy. 6 21-48

Santayana, G. (1905). The Life of Reason. Cited in: Ratcliffe, S. (ed). The Oxford Dictionary of Thematic Quotations. Oxford: The Oxford University Press

Saraph, J. Benson, P. Schroeder, R. (1989). An instrument for measuring the critical factors of quality. Decision Sciences. 20 810-829

Scally, G. Donaldson, L. (1998). Clinical governance and the drive for quality improvement in the new NHS in England. British Medical Journal. 317 61-65

Scotland, A. (1998). Clinical governance in the new NHS: an agenda for personal and organisational development. Clinician in Management. 7 138-141

Scott, W. Cole, R. (2000). The Quality Movement and Organization Theory. In: Cole, R. Scott, W. (eds) (2000). The Quality Movement and Organization Theory. California: Sage

Shea, C. Howell, J. (1998). Organizational Antecedents to the Successful Implementation of Total Quality Management: A Social Cognitive Perspective. Journal of Quality Management. 3 (1) 3-24

Short, P. Rahim, M. (1995). Total quality management in hospitals. Total Quality Management. 6 (3) 255-263

Shortell, S. Levin, D. O'Brien, J. Hughes, E. (1995). Assessing the Evidence on CQI: Is the Glass Half Empty or Half Full. Hospital and Health Services Administration. 40 (1) 4-24

Shortell, S. O'Brien, J. Carman, J. Foster, R. Hughes, E. Boerstler, H. O'Connor, E. (1995). Assessing the Impact of Continuous Quality Improvement/Total Quality Management: Concept versus Implementation. Health Services Research. 30 (2) 377-401

Silverman, D. (2000). Doing Qualitative Research: A Practical Handbook. London: Sage

Sitkin, S. Sutcliffe, K. Schroeder, R. (1994). Distinguishing Control from Learning in Total Quality Management: A Contingency Perspective. Academy of Management Review. 19 (3) 537-564

Snape, E. Redman, T. (1995). Managing human resources for TQM: possibilities and pitfalls. Employee Relations. 17 (3) 42-51

Snowberger, M. (1996). TQM - why and how it was implemented at Gilbarco. GEC Journal of Research. 13 (2) 80-86

Spencer, B. (1994). Models of Organization and Total Quality Management: A Comparison and Critical Evaluation. Academy of Management Review. 19 (3) 446-471

Sproull, L. Hofmeister, K. (1986). Thinking About Implementation. Journal of Management. 12 (1) 43-60

Spurgeon, P. (1999). Organisational Development: from a reactive to a proactive process. In: Mark, A. Dopson, S. (eds). Organisational Behaviour in Healthcare. London: Macmillan Business

Spurgeon, P. Latham, L. (2003) (In press). Pursuing Clinical Governance Through Effective Leadership. In: Dopson, S. Mark, A. (eds). Leading Healthcare Organisations. Basingstoke: Macmillan

Stamatis, D. (1994). Total Quality Management and Project Management. Project Management Journal. XXV (3) 48-54

Sutherland, D. Dawson, S. (1998). Power and quality improvement in the new NHS: the roles of doctors and managers. Quality in Health Care. 7 (Supplement) 16-23

Tata, J. Prasad, S. (1998). Cultural and structural constraints on total quality management implementation. Total Quality Management. 9 (8) 703-710

Taylor, D. (1996). Quality and professionalism in health care: a review of current initiatives in the NHS. British Medical Journal. 312 626-629

Taylor, W. (1996). Sectoral differences in total quality management implementation: the influence of management mind-set. Total Quality Management. 7 (3) 235-248

Teixeira, A. (1999). How to navigate the sea of quality management literature. Strategic Change. 8 143-151

The Conference Board. (1993). Does Quality Work? A Review of Relevant Studies. New York: The Conference Board.

Thomas, M. (2002). The evidence base for clinical governance. Journal of Evaluation in Clinical Practice. 8 (2) 251-254

Thomson, R. (1998). Quality to the fore in health policy - at last. British Medical Journal. 317 95-96

Tushman, M. O'Reilly, C. (1996). Ambidextrous Organizations: Managing Evolutionary and Revolutionary Change. California Management Review. 38 (4) 8-30

Umbdenstock, R. Hageman, W. Amundson, B. (1990). The Five Critical Areas for Effective Governance of Not-for-Profit Hospitals. Hospital & Health Services Administration. 35 (4) 481-492

Van de Ven, A. Poole, M. (1995). Explaining Development and Change in Organizations. Academy of Management Review. 20 (3) 510-540

Van Meter, D. Van Horn, C. (1975). The policy implementation process: a conceptual framework. Administration and Society. 6 445-488

Wakefield, D. Wakefield, B. (1993). Overcoming the Barriers to Implementation of TQM/CQI in Hospitals: Myths and Realities. QRB. March 83-88

Wall, A. (1999). Review: Clinical Governance: Making it Happen. Health Service Journal. 109 (5654) 30-31

Walsh, P. (1995). Overcoming chronic TQM fatigue. The TQM Magazine. 7 (5) 58-64

Walshe, K. (1997). NICE ideas on quality. Health Service Journal. 18 Dec. 20

Walshe, K. (1998a). Cutting to the heart of quality. Health Management. 2 (4) 20-21

Walshe, K. (1998b). Clinical governance: what does it really mean? HSMC Newsletter. 4 (2) 1-2

Walshe, K. (1998c). The reliability and validity of adverse-event measures of the quality of healthcare. PhD thesis. University of Birmingham.

Walshe, K. (1999). Baseline assessment for clinical governance: issues, methods and results. Journal of Clinical Governance. 7 166-171

Walshe, K. (2000a). Systems for clinical governance: evidence of effectiveness. Journal of Clinical Governance. 8 174-180

Walshe, K. (2000b). Clinical governance: a review of the evidence. Health Services Management Centre: University of Birmingham

Walshe, K. Dineen, M. (1998). Clinical risk management: making a difference? Birmingham: The NHS Confederation.

Walshe, K. Freeman, T. Latham, L. Wallace, L. Spurgeon, P. (2000). Clinical governance: from policy to practice. Birmingham: Health Services Management Centre

Weick, K. Quinn, R. (1999). Organizational Change and Development. Annual Review of Psychology. 50 361-386

Weiner, B. Alexander, J. (1993). Hospital Governance and Quality of Care: A Critical Review of Transitional Roles. Medical Care Review. 375-409

Weiner, B. Shortell, S. Alexander, J. (1997). Promoting Clinical Involvement in Hospital Quality Improvement Efforts: The Effects of Top Management, Board, and Physician Leadership. Health Services Research. 32 (4) 491-510

West, J. Berman, E. Milakovich, M. (1993). Implementing TQM in Local Government: The Leadership Challenge. Public Productivity and Management Review. XVII (2) 175-189

Whalen, M. Rahim, M. (1994). Common barriers to implementation of a TQM program. Industrial Management. 36 (2) 19-21

Wilkinson, A. (1995). Re-examining quality management. Review of Employment Topics. 3 187-211

Wilkinson, A. Witcher, B. (1993). Holistic total quality management must take account of political processes. Total Quality Management. 4 (1) 47-56

Williams, W. (1980). The Implementation Perspective. Berkeley: University of California Press

Wilson, D. (1992). A Strategy of Change. London: Routledge

Winter, M. (1999). Clinical governance - getting beyond a new management mantra? Healthcare Quality. 26-29

Witcher, B. (1995). The Changing Scale of Total Quality Management. Quality Management Journal. Summer 9-29

Wolman, H. (1981). The Determinants of Program Success and Failure. Journal of Public Policy. 1 (4) 433-464

Wright, J. Smith, M. Jackson, D. (1999). Clinical governance: principles into practice. Journal of Management in Medicine. 13 (6) 457-465

Yin, R. (1994). Case Study Research: Design and Methods. Thousand Oaks: Sage

Yin, R. (1999). Enhancing the Quality of Case Studies in Health Services Research. Health Services Research. 34 (5) 1209-1224

Yong, J. Wilkinson, A. (1999). The state of total quality management: a review. The International Journal of Human Resource Management. 10 (1) 137-161

Yong, J. Wilkinson, A. (2001). Rethinking total quality management. Total Quality Management. 12 (2) 247-258

Yusof, S. Aspinwall, E. (1999). Critical success factors for total quality management implementation in small and medium enterprises. Total Quality Management. 10 (4&5) S803-809

Yusof, S. Aspinwall, E. (2000a). Total quality management implementation frameworks: comparison and review. Total Quality Management. 11 (3) 281-294

Yusof, S. Aspinwall, E. (2000b). A conceptual framework for TQM implementation for SMEs. The TQM Magazine. 12 (1) 31-36

Yusof, S. Aspinwall, E. (2000c). Critical success factors in small and medium enterprises: survey results. Total Quality Management. 11 (4/5&6) S448-462

Zabada, C. Rivers, P. Munchus, G. (1998). Obstacles to the application of total quality in health-care organizations. Total Quality Management. 9 (1) 57-66

Zairi, M. (1994). Measuring Performance for Business Results. London: Chapman and Hall
